WorldWideScience

Sample records for modelling approach similar

  1. Bianchi VI₀ and III models: self-similar approach

    International Nuclear Information System (INIS)

    Belinchon, Jose Antonio

    2009-01-01

    We study several cosmological models with Bianchi VI₀ and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive for the equation of state. We also study a perfect fluid model with time-varying constants, G and Λ. As in other models studied, we find that the behaviours of G and Λ are related: if G behaves as a growing time function, then Λ is a positive decreasing time function, but if G is decreasing, then Λ₀ is negative. We end by studying a massive cosmic string model, putting special emphasis on calculating the numerical values of the equations of state. We show that there is no self-similar solution for a string model with time-varying constants.

  2. Bianchi VI₀ and III models: self-similar approach

    Energy Technology Data Exchange (ETDEWEB)

    Belinchon, Jose Antonio, E-mail: abelcal@ciccp.e [Departamento de Fisica, ETS Arquitectura, UPM, Av. Juan de Herrera 4, Madrid 28040 (Spain)]

    2009-09-07

    We study several cosmological models with Bianchi VI₀ and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive for the equation of state. We also study a perfect fluid model with time-varying constants, G and Λ. As in other models studied, we find that the behaviours of G and Λ are related: if G behaves as a growing time function, then Λ is a positive decreasing time function, but if G is decreasing, then Λ₀ is negative. We end by studying a massive cosmic string model, putting special emphasis on calculating the numerical values of the equations of state. We show that there is no self-similar solution for a string model with time-varying constants.

  3. Similarity-based multi-model ensemble approach for 1-15-day advance prediction of monsoon rainfall over India

    Science.gov (United States)

    Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati

    2018-04-01

    The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on a similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. The approach rests on the assumption that training on days with similar conditions may provide better forecasts than the sequential training used in conventional MME approaches. Here, the training dataset is selected by matching the present-day condition against the archived dataset: the days with the most similar conditions are identified and used for training the model, and the coefficients thus generated are used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. European Centre for Medium-Range Weather Forecasts (ECMWF), United Kingdom Meteorological Office (UKMO), National Centers for Environmental Prediction (NCEP) and China Meteorological Administration (CMA), were used for developing the SMME forecasts. Forecasts for 1-5, 6-10 and 11-15 days were generated using the newly developed approach for each pentad of June-September during the years 2008-2013, and the skill of the model was analysed using verification scores, viz. equitable threat score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and the Nash-Sutcliffe model efficiency index. Statistical analysis of the SMME forecasts shows superior forecast skill compared to the conventional MME and the individual models for all lead ranges, viz. 1-5, 6-10 and 11-15 days.
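
    The analog-selection step described above can be sketched compactly. The following is a minimal illustration, assuming Euclidean distance as the similarity criterion and an ordinary least-squares fit for the member coefficients; all names and array shapes are hypothetical, not taken from the paper.

    ```python
    import numpy as np

    def smme_forecast(hist_fields, hist_fcsts, hist_obs, today_field, today_fcsts, k=50):
        """Sketch of one similarity-based multi-model ensemble (SMME) step.

        hist_fields : (n_days, n_features) archived predictor conditions
        hist_fcsts  : (n_days, n_models)   GCM member forecasts for those days
        hist_obs    : (n_days,)            observed rainfall for those days
        today_field : (n_features,)        present-day condition
        today_fcsts : (n_models,)          present-day member forecasts
        """
        # 1. Rank archived days by similarity to the present-day condition.
        dist = np.linalg.norm(hist_fields - today_field, axis=1)
        analogs = np.argsort(dist)[:k]            # k most similar days

        # 2. Train only on the analog days: least-squares coefficients that
        #    map the member forecasts onto the observed rainfall.
        coef, *_ = np.linalg.lstsq(hist_fcsts[analogs], hist_obs[analogs], rcond=None)

        # 3. Apply the coefficients to today's member forecasts.
        return float(today_fcsts @ coef)
    ```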

  4. Mathematical evaluation of similarity factor using various weighing approaches on aceclofenac marketed formulations by model-independent method.

    Science.gov (United States)

    Soni, T G; Desai, J U; Nagda, C D; Gandhi, T R; Chotai, N P

    2008-01-01

    The US Food and Drug Administration's (FDA's) guidance for industry on dissolution testing of immediate-release solid oral dosage forms notes that drug dissolution may be the rate-limiting step for drug absorption in the case of low-solubility/high-permeability drugs (BCS class II drugs). The guidance describes the model-independent mathematical approach proposed by Moore and Flanner for calculating a similarity factor (f2) of dissolution across a suitable time interval. In the present study, the similarity factor was calculated on dissolution data of two marketed aceclofenac tablets (a BCS class II drug) using various weighing approaches proposed by Gohel et al. The proposed approaches were compared with a conventional approach (W = 1). On the basis of variability considerations, the approaches are preferred in the order approach 3 > approach 2 > approach 1, since approach 3 accounts for batch-to-batch as well as within-sample variability and shows the best similarity profile, while approach 2 considers batch-to-batch variability with higher specificity than approach 1.
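
    For reference, the Moore and Flanner similarity factor can be computed directly from two mean dissolution profiles; a weighted variant simply scales each squared difference. A minimal sketch follows (the specific weighting schemes of Gohel et al. are not reproduced here; `w` stands in for any per-time-point weight vector):

    ```python
    import numpy as np

    def similarity_factor_f2(ref, test, w=None):
        """Moore & Flanner similarity factor f2 between two dissolution
        profiles sampled at the same n time points (percent dissolved).
        w = None (all ones) recovers the conventional approach (W = 1)."""
        ref, test = np.asarray(ref, float), np.asarray(test, float)
        w = np.ones_like(ref) if w is None else np.asarray(w, float)
        msd = np.mean(w * (ref - test) ** 2)   # (weighted) mean squared difference
        return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

    # f2 >= 50 is conventionally read as "similar" profiles.
    print(similarity_factor_f2([20, 45, 70, 90], [18, 42, 68, 88]))  # ~80.1
    ```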

  5. A Model-Based Approach to Constructing Music Similarity Functions

    Directory of Open Access Journals (Sweden)

    Lamere Paul

    2007-01-01

    Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.

  6. A Model-Based Approach to Constructing Music Similarity Functions

    Science.gov (United States)

    West, Kris; Lamere, Paul

    2006-12-01

    Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.

  7. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  8. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar; Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knuepfer, Christian; Liebermeister, Wolfram

    2016-01-01

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  9. Notions of similarity for systems biology models.

    Science.gov (United States)

    Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knüpfer, Christian; Liebermeister, Wolfram; Waltemath, Dagmar

    2018-01-01

    Systems biology models are rapidly increasing in complexity, size and numbers. When building large models, researchers rely on software tools for the retrieval, comparison, combination and merging of models, as well as for version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of 'similarity' may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here we survey existing methods for the comparison of models, introduce quantitative measures for model similarity, and discuss potential applications of combined similarity measures. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on a combination of different model aspects. The six aspects that we define as potentially relevant for similarity are underlying encoding, references to biological entities, quantitative behaviour, qualitative behaviour, mathematical equations and parameters, and network structure. We argue that future similarity measures will benefit from combining these model aspects in flexible, problem-specific ways to mimic users' intuition about model similarity, and to support complex model searches in databases. © The Author 2016. Published by Oxford University Press.
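
    A problem-specific combination of these six aspects can be as simple as a weighted average of per-aspect similarity scores. A minimal sketch under that assumption (aspect names and weights are illustrative, not prescribed by the paper):

    ```python
    def combined_model_similarity(aspect_sims, weights):
        """Weighted average of per-aspect similarities, each in [0, 1]."""
        total = sum(weights.values())
        return sum(weights[a] * aspect_sims[a] for a in weights) / total

    # A search tuned to favour network structure and equations over encoding.
    sims = {"encoding": 0.9, "entities": 0.4, "quantitative": 0.2,
            "qualitative": 0.5, "equations": 0.7, "network": 0.8}
    w    = {"encoding": 0.0, "entities": 1.0, "quantitative": 1.0,
            "qualitative": 1.0, "equations": 2.0, "network": 2.0}
    print(combined_model_similarity(sims, w))   # ~0.59
    ```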

  10. Similarity and Modeling in Science and Engineering

    CERN Document Server

    Kuneš, Josef

    2012-01-01

    The present text sets itself apart from other titles on the subject in that it addresses the means and methodologies rather than a narrow, specific-task-oriented approach. Concepts and the developments that evolved to meet the changing needs of applications are addressed. This approach provides the reader with a general tool-box to apply to their specific needs. Two important tools are presented: dimensional analysis and the similarity analysis methods. The fundamental point of view, enabling one to sort all models, is that of information flux between a model and an original, expressed by similarity and abstraction. Each chapter includes original examples and applications. In this respect, the models can be divided into several groups. The following models are dealt with separately by chapter: mathematical and physical models, physical analogues, deterministic, stochastic, and cybernetic computer models. The mathematical models are divided into asymptotic and phenomenological models. The phenomenological m...

  11. Distributional and Knowledge-Based Approaches for Computing Portuguese Word Similarity

    Directory of Open Access Journals (Sweden)

    Hugo Gonçalo Oliveira

    2018-02-01

    Identifying similar and related words is not only key in natural language understanding but also a suitable task for assessing the quality of computational resources that organise words and meanings of a language, compiled by different means. This paper, which aims to be a reference for those interested in computing word similarity in Portuguese, presents several approaches for this task and is motivated by the recent availability of state-of-the-art distributional models of Portuguese words, which add to several lexical knowledge bases (LKBs) for this language, available for a longer time. These resources were exploited to answer word similarity tests, which also became available for Portuguese only recently. We conclude that there are several valid approaches for this task, but no single one outperforms all the others in every test. Distributional models seem to capture relatedness better, while LKBs are better suited for computing genuine similarity; in general, however, better results are obtained when knowledge from different sources is combined.
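
    In distributional models, the score for a word pair is typically the cosine between the two word vectors. A minimal sketch (toy 4-dimensional vectors; real embeddings have hundreds of dimensions, and the Portuguese word pair here is merely illustrative):

    ```python
    import numpy as np

    def cosine_similarity(u, v):
        """Cosine of the angle between two word vectors."""
        u, v = np.asarray(u, float), np.asarray(v, float)
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    vec_gato = [0.9, 0.1, 0.3, 0.5]   # "gato" (cat)
    vec_cao  = [0.8, 0.2, 0.4, 0.4]   # "cão" (dog): related word, similar vector
    print(cosine_similarity(vec_gato, vec_cao))   # close to 1.0
    ```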

  12. Similar words analysis based on POS-CBOW language model

    Directory of Open Access Journals (Sweden)

    Dongru RUAN

    2015-10-01

    Similar-words analysis is one of the important tasks in the field of natural language processing, with research and application value in text classification, machine translation and information recommendation. Focusing on the features of Sina Weibo's short texts, this paper presents a language model named POS-CBOW, a continuous bag-of-words language model with a filtering layer and a part-of-speech tagging layer. The proposed approach can adjust the word vectors' similarity according to the cosine similarity and the word vectors' part-of-speech metrics. It can also filter the similar-word set on the basis of a statistical analysis model. The experimental results show that similar-words analysis based on the proposed POS-CBOW language model outperforms that based on the traditional CBOW language model.

  13. A COMPARISON OF SEMANTIC SIMILARITY MODELS IN EVALUATING CONCEPT SIMILARITY

    Directory of Open Access Journals (Sweden)

    Q. X. Xu

    2012-08-01

    Semantic similarities are important in concept definition, recognition, categorization, interpretation, and integration. Many semantic similarity models have been established to evaluate the semantic similarity of objects and/or concepts. To find out the suitability and performance of different models in evaluating concept similarities, we compare four main types of models in this paper: the geometric model, the feature model, the network model, and the transformational model. Fundamental principles and main characteristics of these models are introduced and compared first. Land use and land cover concepts of NLCD92 are employed as examples in the case study. The results demonstrate that correlations between these models are very high, possibly because all these models are designed to simulate the similarity judgement of the human mind.

  14. Exploiting similarity in turbulent shear flows for turbulence modeling

    Science.gov (United States)

    Robinson, David F.; Harris, Julius E.; Hassan, H. A.

    1992-12-01

    It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set, and comparison with measured shear stress and velocity profiles, yield the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for such an approach is that if we can correctly model the flow over a flat plate and its far wake, then we have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence, because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.

  15. Exploiting similarity in turbulent shear flows for turbulence modeling

    Science.gov (United States)

    Robinson, David F.; Harris, Julius E.; Hassan, H. A.

    1992-01-01

    It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set, and comparison with measured shear stress and velocity profiles, yield the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for such an approach is that if we can correctly model the flow over a flat plate and its far wake, then we have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence, because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.

  16. Self-similar cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Chao, W Z [Cambridge Univ. (UK). Dept. of Applied Mathematics and Theoretical Physics]

    1981-07-01

    The kinematics and dynamics of self-similar cosmological models are discussed. The degrees of freedom of the solutions of Einstein's equations for different types of models are listed. The relation between kinematic quantities and the classifications of the self-similarity group is examined. All dust local rotational symmetry models have been found.

  17. Phishing Detection: Analysis of Visual Similarity Based Approaches

    Directory of Open Access Journals (Sweden)

    Ankit Kumar Jain

    2017-01-01

    Phishing is one of the major problems faced by the cyber-world and leads to financial losses for both industries and individuals. Detecting phishing attacks with high accuracy has always been a challenging issue. At present, visual-similarity-based techniques are very useful for detecting phishing websites efficiently. A phishing website looks very similar in appearance to its corresponding legitimate website in order to deceive users into believing that they are browsing the correct website. Visual-similarity-based phishing detection techniques utilise features such as text content, text format, HTML tags, Cascading Style Sheets (CSS), images, and so forth to make the decision. These approaches compare the suspicious website with the corresponding legitimate website using various features, and if the similarity is greater than a predefined threshold value, the site is declared phishing. This paper presents a comprehensive analysis of phishing attacks, their exploitation, some of the recent visual-similarity-based approaches for phishing detection, and a comparative study of them. Our survey provides a better understanding of the problem, the current solution space, and the scope of future research for dealing with phishing attacks efficiently using visual-similarity-based approaches.
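
    The threshold decision described above reduces to comparing a feature-set similarity against a cut-off. A minimal sketch, assuming Jaccard similarity over extracted page features and an illustrative threshold of 0.8 (the surveyed systems use richer features and tuned thresholds):

    ```python
    def jaccard(a, b):
        """Set-overlap similarity in [0, 1]."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b)

    def is_phishing(suspicious_features, legitimate_features, threshold=0.8):
        """Declare phishing when a suspicious page is too visually similar to a
        legitimate page that it does not actually belong to."""
        return jaccard(suspicious_features, legitimate_features) >= threshold

    legit   = {"<form>", "logo.png", "#003366", "Verdana", "Sign in"}
    suspect = {"<form>", "logo.png", "#003366", "Verdana", "Sign in", "evil.js"}
    print(is_phishing(suspect, legit))   # True: 5 of 6 pooled features shared
    ```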

  18. Similarity and accuracy of mental models formed during nursing handovers: A concept mapping approach.

    Science.gov (United States)

    Drach-Zahavy, Anat; Broyer, Chaya; Dagan, Efrat

    2017-09-01

    Shared mental models are crucial for constructing mutual understanding of the patient's condition during a clinical handover. Yet scant research, if any, has empirically explored the mental models of the parties involved in a clinical handover. This study aimed to examine the similarity among the mental models of incoming and outgoing nurses, and to test their accuracy by comparing them with the mental models of expert nurses. A cross-sectional study explored nurses' mental models via the concept-mapping technique across 40 clinical handovers. Data were collected via concept mapping of the incoming, outgoing, and expert nurses' mental models (a total of 120 concept maps). Similarity and accuracy indexes for concepts and associations were calculated to compare the different maps. About one fifth of the concepts emerged in both outgoing and incoming nurses' concept maps (concept similarity=23%±10.6). Concept accuracy indexes were 35%±18.8 for incoming and 62%±19.6 for outgoing nurses' maps. Although incoming nurses absorbed smaller proportions of concepts and associations (23% and 12%, respectively), they partially closed the gap (35% and 22%, respectively) relative to expert nurses' maps. The correlations between concept similarity and both incoming and outgoing nurses' concept accuracy were significant (r=0.43, p<0.01; r=0.68, p<0.01, respectively). Finally, in 90% of the maps, outgoing nurses added information concerning the processes enacted during the shift, beyond the expert nurses' gold standard. Two seemingly contradictory processes in the handover were identified: "information loss", captured by the low similarity indexes between the mental models of incoming and outgoing nurses, and "information restoration", based on the accuracy indexes of the incoming nurses' mental models. Based on mental model theory, we propose possible explanations for these processes and derive implications for how to improve a clinical handover. Copyright © 2017 Elsevier Ltd. All rights reserved.
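
    The similarity and accuracy indexes above can be read as overlap measures over the sets of concepts in two maps. A minimal sketch under that assumption (the paper's exact formulas are not reproduced, and the concept labels are invented):

    ```python
    def similarity_index(map_a, map_b):
        """Share of concepts appearing in both maps, relative to their union."""
        a, b = set(map_a), set(map_b)
        return len(a & b) / len(a | b)

    def accuracy_index(nurse_map, expert_map):
        """Share of the expert 'gold standard' concepts captured by a nurse's map."""
        return len(set(nurse_map) & set(expert_map)) / len(set(expert_map))

    outgoing = {"fever", "antibiotics", "fall risk", "family updated"}
    incoming = {"fever", "antibiotics", "pain score"}
    expert   = {"fever", "antibiotics", "fall risk", "pain score", "allergies"}
    print(similarity_index(outgoing, incoming))   # 0.4
    print(accuracy_index(incoming, expert))       # 0.6
    ```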

  19. Similarity-based search of model organism, disease and drug effect phenotypes

    KAUST Repository

    Hoehndorf, Robert

    2015-02-19

    Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions, druggable therapeutic targets, and determination of pathogenicity. Results: We have developed PhenomeNET 2, a system that enables similarity-based searches over a large repository of phenotypes in real-time. It can be used to identify strains of model organisms that are phenotypically similar to human patients, diseases that are phenotypically similar to model organism phenotypes, or drug effect profiles that are similar to the phenotypes observed in a patient or model organism. PhenomeNET 2 is available at http://aber-owl.net/phenomenet. Conclusions: Phenotype-similarity searches can provide a powerful tool for the discovery and investigation of molecular mechanisms underlying an observed phenotypic manifestation. PhenomeNET 2 facilitates user-defined similarity searches and allows researchers to analyze their data within a large repository of human, mouse and rat phenotypes.

  20. A Novel Approach to Semantic Similarity Measurement Based on a Weighted Concept Lattice: Exemplifying Geo-Information

    Directory of Open Access Journals (Sweden)

    Jia Xiao

    2017-11-01

    The measurement of semantic similarity has been widely recognized as having a fundamental and key role in information science and information systems. Although various models have been proposed to measure semantic similarity, these models cannot effectively quantify the weights of relevant factors that impact the judgement of semantic similarity, such as the attributes of concepts, the application context, and the concept hierarchy. In this paper, we propose a novel approach that comprehensively considers the effects of various factors on semantic similarity judgement, which we name semantic similarity measurement based on a weighted concept lattice (SSMWCL). A feature model and a network model are integrated in SSMWCL. Based on the feature model, the combined weight of each attribute of the concepts is calculated by merging its information entropy and inclusion-degree importance in a specific application context. By establishing the weighted concept lattice, the relative hierarchical depths of the concepts being compared are computed according to the principle of the network model. The integration of the feature model and network model enables SSMWCL to take account of differences between concepts more comprehensively in semantic similarity measurement. Additionally, a workflow of SSMWCL is designed to demonstrate these procedures, and a case study of geo-information is conducted to assess the approach.

  1. Similarity-based search of model organism, disease and drug effect phenotypes

    KAUST Repository

    Hoehndorf, Robert; Gruenberger, Michael; Gkoutos, Georgios V; Schofield, Paul N

    2015-01-01

    Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions

  2. Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models

    Directory of Open Access Journals (Sweden)

    Jin Dai

    2014-01-01

    The similarity between objects is a core research area of data mining. In order to reduce the interference of the uncertainty of natural language, a similarity measurement between normal cloud models is adopted for text classification research. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed. It can efficiently accomplish conversion between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the VSM model, the qualitative text concept extracted from each category is jumped up into a whole-category concept. According to the cloud similarity between the test text and each category concept, the test text is assigned to the most similar category. A comparison among different text classifiers over different feature-selection sets shows that CCJU-TC not only adapts well to different text features but also outperforms traditional classifiers.
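
    A normal cloud model is summarized by three digital characteristics: expectation Ex, entropy En and hyper-entropy He. One simple similarity between two such models, offered here only as an assumed stand-in for the paper's measure, is the cosine between their characteristic vectors:

    ```python
    import numpy as np

    def cloud_similarity(c1, c2):
        """Cosine similarity between two normal cloud models, each given as
        its (Ex, En, He) vector. The paper's measure may differ in detail."""
        v1, v2 = np.asarray(c1, float), np.asarray(c2, float)
        return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

    category_concept = (0.62, 0.08, 0.01)   # concept jumped up from a category
    test_text        = (0.58, 0.10, 0.02)
    print(cloud_similarity(test_text, category_concept))   # near 1 -> assign
    ```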

  3. A study of concept-based similarity approaches for recommending program examples

    Science.gov (United States)

    Hosseini, Roya; Brusilovsky, Peter

    2017-07-01

    This paper investigates a range of concept-based example recommendation approaches that we developed to provide example-based problem-solving support in the domain of programming. The goal of these approaches is to offer students a set of the most relevant remedial examples when they have trouble solving a code comprehension problem, in which students examine program code to determine its output or the final value of a variable. In this paper, we use the ideas of semantic-level similarity-based linking developed in the area of intelligent hypertext to generate examples for a given problem. To determine the best-performing approach, we explored two groups of similarity approaches for selecting examples: non-structural approaches, focusing on examples that are similar to the problem in terms of concept coverage, and structural approaches, focusing on examples that are similar to the problem in the structure of the content. We also explored the value of personalized example recommendation based on the student's knowledge level and the learning goal of the exercise. The paper presents the concept-based similarity approaches that we developed, explains the data collection studies, and reports the results of a comparative analysis. The results of our analysis showed better ranking performance for the personalized structural variant of the cosine similarity approach.

  4. Patient Similarity in Prediction Models Based on Health Data: A Scoping Review

    Science.gov (United States)

    Sharafoddini, Anis; Dubin, Joel A

    2017-01-01

    Background Physicians and health policy makers are required to make predictions during their decision making in various medical problems. Many advances have been made in predictive modeling toward outcome prediction, but these innovations target an average patient and are insufficiently adjustable for individual patients. One developing idea in this field is individualized predictive analytics based on patient similarity. The goal of this approach is to identify patients who are similar to an index patient and derive insights from the records of similar patients to provide personalized predictions. Objective The aim is to summarize and review published studies describing computer-based approaches for predicting patients' future health status based on health data and patient similarity, identify gaps, and provide a starting point for related future research. Methods The method involved (1) conducting the review by performing automated searches in Scopus, PubMed, and ISI Web of Science, selecting relevant studies by first screening titles and abstracts and then analyzing full texts, and (2) documenting by extracting publication details and information on context, predictors, missing data, modeling algorithm, outcome, and evaluation methods into a matrix table, synthesizing data, and reporting results. Results After duplicate removal, 1339 articles were screened in abstracts and titles and 67 were selected for full-text review. In total, 22 articles met the inclusion criteria. Within included articles, hospitals were the main source of data (n=10). Cardiovascular disease (n=7) and diabetes (n=4) were the dominant patient diseases. Most studies (n=18) used neighborhood-based approaches in devising prediction models. Two studies showed that patient similarity-based modeling outperformed population-based predictive methods. Conclusions Interest in patient similarity-based predictive modeling for diagnosis and prognosis has been growing. In addition to raw/coded health
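
    The neighbourhood-based idea that dominates the included studies can be sketched in a few lines: find the k most similar patients and average their outcomes. The sketch below assumes standardized numeric features and Euclidean distance; it is illustrative only, not any particular study's model.

    ```python
    import numpy as np

    def predict_outcome(index_patient, patients, outcomes, k=5):
        """Average the outcomes of the k patients most similar to the index
        patient (Euclidean distance over standardized features)."""
        d = np.linalg.norm(patients - index_patient, axis=1)
        neighbours = np.argsort(d)[:k]
        return float(outcomes[neighbours].mean())   # e.g. a risk estimate in [0, 1]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))         # toy cohort: 200 patients, 6 features
    y = (X[:, 1] > 0.5).astype(float)     # toy binary outcome
    print(predict_outcome(X[0], X[1:], y[1:], k=7))
    ```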

  5. Embedding Term Similarity and Inverse Document Frequency into a Logical Model of Information Retrieval.

    Science.gov (United States)

    Losada, David E.; Barreiro, Alvaro

    2003-01-01

    Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…

  6. Vere-Jones' self-similar branching model

    International Nuclear Information System (INIS)

    Saichev, A.; Sornette, D.

    2005-01-01

    Motivated by its potential application to earthquake statistics, as well as by its intrinsic interest in the theory of branching processes, we study the exactly self-similar branching process introduced recently by Vere-Jones. This model extends the ETAS class of conditional self-excited branching point-processes of triggered seismicity by removing the problematic need for a minimum (as well as maximum) earthquake size. To make the theory convergent without the need for the usual ultraviolet and infrared cutoffs, the distribution of magnitudes m′ of first-generation daughters of a mother of magnitude m has two branches, m′ < m with exponent β−d and m′ > m with exponent β+d, where β and d are two positive parameters. We investigate the condition and nature of the subcritical, critical, and supercritical regimes in this and in an extended version interpolating smoothly between several models. We predict that the distribution of magnitudes of events triggered by a mother of magnitude m over all generations also has two branches, m′ < m with exponent β−h and m′ > m with exponent β+h, where h=d√(1−s) and s is the fraction of triggered events. This corresponds to a renormalization of the exponent d into h by the hierarchy of successive generations of triggered events. For a significant part of the parameter space, the distribution of magnitudes over a full catalog, summed over an average steady flow of spontaneous sources (immigrants), reproduces the distribution of the spontaneous sources with a single branch and is blind to the exponents β, d of the distribution of triggered events. Since the distribution of earthquake magnitudes is usually obtained from catalogs including many sequences, we conclude that the two branches of the distribution of aftershocks are not directly observable and the model is compatible with real seismic catalogs. In summary, the exactly self-similar Vere-Jones model provides an attractive new approach to modelling triggered seismicity, which alleviates delicate questions on the role of
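
    Written out, the two-branch magnitude law described above has a double-exponential, Gutenberg-Richter-like form. The following is one way to write it; the functional form and normalization are assumed for illustration rather than quoted from the paper:

    ```latex
    p(m' \mid m) \propto
    \begin{cases}
    10^{-(\beta - d)\,(m - m')}, & m' < m,\\
    10^{-(\beta + d)\,(m' - m)}, & m' > m,
    \end{cases}
    \qquad
    h = d\sqrt{1 - s}\ \text{(over all generations)}.
    ```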

  7. On self-similar Tolman models

    International Nuclear Information System (INIS)

    Maharaj, S.D.

    1988-01-01

    The self-similar spherically symmetric solutions of the Einstein field equation for the case of dust are identified. These form a subclass of the Tolman models. These self-similar models contain the solution recently presented by Chi [J. Math. Phys. 28, 1539 (1987)], thereby refuting the claim of having found a new solution to the Einstein field equations

  8. Lagrangian-similarity diffusion-deposition model

    International Nuclear Information System (INIS)

    Horst, T.W.

    1979-01-01

    A Lagrangian-similarity diffusion model has been incorporated into the surface-depletion deposition model. This model predicts vertical concentration profiles far downwind of the source that agree with those of a one-dimensional gradient-transfer model

  9. Using a Similarity Matrix Approach to Evaluate the Accuracy of Rescaled Maps

    Directory of Open Access Journals (Sweden)

    Peijun Sun

    2018-03-01

    Rescaled maps have been extensively utilized to provide data at the appropriate spatial resolution for use in various Earth science models. However, a simple and easy way to evaluate these rescaled maps has not been developed. We propose a similarity matrix approach that uses a contingency table to compute three measures: overall similarity (OS), omission error (OE), and commission error (CE). The Majority Rule Based aggregation (MRB) method was employed to produce the upscaled maps used to demonstrate this approach. In addition, previously created, coarser-resolution land cover maps from other research projects were also available for comparison. The question of which is better, a map initially produced at coarse resolution or a fine-resolution map rescaled to a coarse resolution, has not been quantitatively investigated. To address these issues, we selected study sites at three different extent levels: first, twelve regions covering the continental USA; then nine states from the whole continental USA; and finally nine Agriculture Statistical Districts (ASDs) from within the nine selected states. Crop/non-crop maps derived from the USDA Crop Data Layer (CDL) at 30 m were used as base maps for the upscaling, and existing maps at 250 m and 1 km were utilized for the comparison. The results showed that a similarity matrix can effectively provide the map user with the information needed to assess the rescaling. Additionally, the upscaled maps provided higher accuracy and better represented landscape patterns compared to the existing coarser maps. We therefore strongly recommend that an evaluation of the upscaled map and the existing coarser-resolution map using a similarity matrix be conducted before deciding which dataset to use for the modelling. Overall, extending our understanding of how to perform an evaluation of the rescaled map and investigation of the applicability
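
    The three measures follow directly from the contingency table of a rescaled map against its reference. A minimal sketch for binary crop/non-crop maps (the per-class conventions here are the standard remote-sensing ones and are assumed, not quoted from the paper):

    ```python
    import numpy as np

    def similarity_matrix_measures(reference, rescaled):
        """Overall similarity (OS), omission error (OE) and commission error
        (CE) of a rescaled binary map against a reference map."""
        ref = np.asarray(reference).ravel().astype(bool)
        res = np.asarray(rescaled).ravel().astype(bool)
        tp = np.sum(ref & res)      # crop in both maps
        tn = np.sum(~ref & ~res)    # non-crop in both maps
        fn = np.sum(ref & ~res)     # crop omitted by the rescaled map
        fp = np.sum(~ref & res)     # crop wrongly added by the rescaled map
        os_ = (tp + tn) / ref.size
        oe = fn / (tp + fn)         # omission error for the crop class
        ce = fp / (tp + fp)         # commission error for the crop class
        return os_, oe, ce

    ref = np.array([[1, 1, 0, 0], [1, 0, 0, 1]])
    up  = np.array([[1, 0, 0, 0], [1, 0, 1, 1]])
    print(similarity_matrix_measures(ref, up))   # (0.75, 0.25, 0.25)
    ```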

  10. A Two-Step Classification Approach to Distinguishing Similar Objects in Mobile LIDAR Point Clouds

    Science.gov (United States)

    He, H.; Khoshelham, K.; Fraser, C.

    2017-09-01

    Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of the object, local features recording the fractional details are more discriminative and are applicable for object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped as one category in the first-step classification because of their mutual similarity compared with trees and vehicles. A finer classification of the lamp posts, street lights and traffic signs, based on the result of the first-step classification, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.

  11. A Novel Hybrid Similarity Calculation Model

    Directory of Open Access Journals (Sweden)

    Xiaoping Fan

    2017-01-01

    This paper addresses the problems of similarity calculation in traditional nearest-neighbour collaborative filtering recommendation algorithms, especially their failure to describe dynamic user preferences. Starting from the problem of user interest drift, a new hybrid similarity calculation model is proposed. The model consists of two parts: on the one hand, it uses function fitting to describe users' rating behaviours and rating preferences; on the other, it employs the Random Forest algorithm to take user attribute features into account. The paper then combines the two parts to build a new hybrid similarity calculation model for user recommendation. Experimental results show that, for data sets of different sizes, the model's prediction precision is higher than that of the traditional recommendation algorithms.
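
    The two-part design can be sketched as a weighted blend of a rating-curve similarity and an attribute-based similarity. In the sketch below, the fusion weight and the curve comparison are assumptions for illustration; the paper's exact formulas are not reproduced, and its Random Forest attribute component is replaced by a placeholder score.

    ```python
    import numpy as np

    def rating_curve_similarity(coef_u, coef_v):
        """Compare two users via coefficients of curves fitted to their
        time-ordered ratings (captures preference drift, not just overlap)."""
        cu, cv = np.asarray(coef_u, float), np.asarray(coef_v, float)
        return 1.0 / (1.0 + np.linalg.norm(cu - cv))

    def hybrid_similarity(sim_curve, sim_attributes, alpha=0.5):
        """Blend the behaviour-based and attribute-based components."""
        return alpha * sim_curve + (1.0 - alpha) * sim_attributes

    t = np.arange(10)
    u_coef = np.polyfit(t, 3 + 0.2 * t + 0.01 * t**2, deg=2)  # user u's rating trend
    v_coef = np.polyfit(t, 4 - 0.1 * t, deg=2)                # user v's rating trend
    sim_attr = 0.7   # placeholder for the Random-Forest-derived attribute score
    print(hybrid_similarity(rating_curve_similarity(u_coef, v_coef), sim_attr))
    ```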

  12. Extending the similarity-based XML multicast approach with digital signatures

    DEFF Research Database (Denmark)

    Azzini, Antonia; Marrara, Stefania; Jensen, Meiko

    2009-01-01

    This paper investigates the interplay between similarity-based SOAP message aggregation and digital signature application. An overview of the approaches resulting from the different orderings of the tasks of signature application, verification, similarity aggregation and splitting is provided. Depending on the intersection between similarity-aggregated and signed SOAP message parts, the paper discusses three different cases of signature application and sketches their applicability and performance implications.

  13. Identifying Similarities in Cognitive Subtest Functional Requirements: An Empirical Approach

    Science.gov (United States)

    Frisby, Craig L.; Parkin, Jason R.

    2007-01-01

    In the cognitive test interpretation literature, a Rational/Intuitive, Indirect Empirical, or Combined approach is typically used to construct conceptual taxonomies of the functional (behavioral) similarities between subtests. To address shortcomings of these approaches, the functional requirements for 49 subtests from six individually…

  14. A TWO-STEP CLASSIFICATION APPROACH TO DISTINGUISHING SIMILAR OBJECTS IN MOBILE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    H. He

    2017-09-01

    Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of the object, local features recording the fractional details are more discriminative and are applicable for object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped as one category in the first-step classification because of their mutual similarity compared with trees and vehicles. A finer classification of the lamp posts, street lights and traffic signs, based on the result of the first-step classification, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.

  15. In-Medium Similarity Renormalization Group Approach to the Nuclear Many-Body Problem

    Science.gov (United States)

    Hergert, Heiko; Bogner, Scott K.; Lietz, Justin G.; Morris, Titus D.; Novario, Samuel J.; Parzuchowski, Nathan M.; Yuan, Fei

    We present a pedagogical discussion of Similarity Renormalization Group (SRG) methods, in particular the In-Medium SRG (IMSRG) approach for solving the nuclear many-body problem. These methods use continuous unitary transformations to evolve the nuclear Hamiltonian to a desired shape. The IMSRG, in particular, is used to decouple the ground state from all excitations and solve the many-body Schrödinger equation. We discuss the IMSRG formalism as well as its numerical implementation, and use the method to study the pairing model and infinite neutron matter. We compare our results with those of Coupled Cluster theory (Chap. 8), Configuration-Interaction Monte Carlo (Chap. 9), and the Self-Consistent Green's Function approach discussed in Chap. 11. The chapter concludes with an expanded overview of current research directions, and a look ahead at upcoming developments.

  16. A Quantum Approach to Subset-Sum and Similar Problems

    OpenAIRE

    Daskin, Ammar

    2017-01-01

    In this paper, we study the subset-sum problem using a quantum heuristic approach similar to the verification circuit of quantum Arthur-Merlin games. Under certain described assumptions, we show that the exact solution of the subset-sum problem may be obtained in polynomial time and that an exponential speed-up over classical algorithms may be possible. We give a numerical example and discuss the complexity of the approach and its further application to the knapsack problem.

  17. Novel Agent Based-approach for Industrial Diagnosis: A Combined use Between Case-based Reasoning and Similarity Measure

    Directory of Open Access Journals (Sweden)

    Fatima Zohra Benkaddour

    2016-12-01

    In the spunlace nonwovens industry, the maintenance task is very complex and requires collaboration between experts and operators. In this paper, we propose a new approach integrating agent-based modelling with case-based reasoning that utilizes similarity measures and a preferences module. The main purpose of our study is to compare and evaluate the most suitable similarity measure for our case. Furthermore, operators, who are usually geographically dispersed, have to collaborate and negotiate to achieve mutual agreements, especially when their proposals (diagnoses) lead to a conflicting situation. The experimentation shows that the suggested agent-based approach is very interesting and efficient for the operators and experts who collaborate in the INOTIS enterprise.

  18. Towards Modelling Variation in Music as Foundation for Similarity

    NARCIS (Netherlands)

    Volk, A.; de Haas, W.B.; van Kranenburg, P.; Cambouropoulos, E.; Tsougras, C.; Mavromatis, P.; Pastiadis, K.

    2012-01-01

    This paper investigates the concept of variation in music from the perspective of music similarity. Music similarity is a central concept in Music Information Retrieval (MIR); however, no comprehensive approach to music similarity exists yet. As a consequence, MIR faces the challenge of how to

  19. An approach to large scale identification of non-obvious structural similarities between proteins

    Science.gov (United States)

    Cherkasov, Artem; Jones, Steven JM

    2004-01-01

    Background A new sequence-independent bioinformatics approach allowing genome-wide search for proteins with similar three-dimensional structures has been developed. By utilizing the numerical output of sequence threading, it establishes putative non-obvious structural similarities between proteins. When applied to a testing set of proteins with known three-dimensional structures, the developed approach was able to recognize structurally similar proteins with high accuracy. Results The method has been developed to identify pathogenic proteins with low sequence identity and high structural similarity to host analogues. Such protein structure relationships would be hypothesized to arise through convergent evolution or through ancient horizontal gene transfer events, now undetectable using current sequence alignment techniques. The pathogen proteins, which could mimic or interfere with host activities, would represent candidate virulence factors. The developed approach utilizes the numerical outputs from the sequence-structure threading. It identifies potential structural similarity between a pair of proteins by correlating the threading scores of the corresponding two primary sequences against the library of standard folds. This approach allowed up to 64% sensitivity and 99.9% specificity in distinguishing protein pairs with high structural similarity. Conclusion Preliminary results obtained by comparison of the genomes of Homo sapiens and several strains of Chlamydia trachomatis have demonstrated the potential usefulness of the method in the identification of bacterial proteins with known or potential roles in virulence. PMID:15147578

  20. An approach to large scale identification of non-obvious structural similarities between proteins

    Directory of Open Access Journals (Sweden)

    Cherkasov Artem

    2004-05-01

    Background A new sequence-independent bioinformatics approach allowing genome-wide search for proteins with similar three-dimensional structures has been developed. By utilizing the numerical output of sequence threading, it establishes putative non-obvious structural similarities between proteins. When applied to a testing set of proteins with known three-dimensional structures, the developed approach was able to recognize structurally similar proteins with high accuracy. Results The method has been developed to identify pathogenic proteins with low sequence identity and high structural similarity to host analogues. Such protein structure relationships would be hypothesized to arise through convergent evolution or through ancient horizontal gene transfer events, now undetectable using current sequence alignment techniques. The pathogen proteins, which could mimic or interfere with host activities, would represent candidate virulence factors. The developed approach utilizes the numerical outputs from the sequence-structure threading. It identifies potential structural similarity between a pair of proteins by correlating the threading scores of the corresponding two primary sequences against the library of standard folds. This approach allowed up to 64% sensitivity and 99.9% specificity in distinguishing protein pairs with high structural similarity. Conclusion Preliminary results obtained by comparison of the genomes of Homo sapiens and several strains of Chlamydia trachomatis have demonstrated the potential usefulness of the method in the identification of bacterial proteins with known or potential roles in virulence.
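
    The core computation named in the abstract, correlating two sequences' threading scores against a common fold library, reduces to a Pearson correlation between two score vectors. A minimal sketch with synthetic scores (the real pipeline obtains them from a sequence-structure threading program):

    ```python
    import numpy as np

    def threading_profile_similarity(scores_a, scores_b):
        """Pearson correlation between the threading-score vectors of two
        sequences scored against the same library of standard folds; a high
        correlation suggests structural similarity despite low sequence identity."""
        return float(np.corrcoef(scores_a, scores_b)[0, 1])

    rng = np.random.default_rng(1)
    host_scores = rng.normal(size=500)                              # one score per fold
    pathogen_scores = host_scores + rng.normal(scale=0.5, size=500)
    print(threading_profile_similarity(host_scores, pathogen_scores))  # ~0.9
    ```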

  1. Quality assessment of protein model-structures based on structural and functional similarities.

    Science.gov (United States)

    Konopka, Bogumil M; Nebel, Jean-Christophe; Kotulska, Malgorzata

    2012-09-21

    Experimental determination of protein 3D structures is expensive, time-consuming and sometimes impossible. The gap between the number of protein structures deposited in the Worldwide Protein Data Bank and the number of sequenced proteins constantly broadens. Computational modeling is deemed to be one way to deal with the problem. Although protein 3D structure prediction is a difficult task, many tools are available. These tools can model a structure from a sequence or from partial structural information, e.g. contact maps. Consequently, biologists can automatically generate a putative 3D structure model of any protein. However, the main issue then becomes evaluating the model quality, which is one of the most important challenges of structural biology. GOBA (Gene Ontology-Based Assessment) is a novel protein model quality assessment program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high-quality model is expected to be structurally similar to proteins that are functionally similar to the prediction target. Whereas DALI is used to measure structure similarity, protein functional similarity is quantified using the standardized and hierarchical description of proteins provided by the Gene Ontology, combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures: one is a single-model quality assessment method, the other a modification of it that provides a relative measure of model quality. An exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. The validation shows that the method is able to discriminate between good and bad model-structures. The best of the tested GOBA scores achieved mean Pearson correlations of 0.74 and 0.80 with the observed quality of models in our CASP8- and CASP9-based validation sets. GOBA also obtained the best result for two targets of CASP8, and

  2. Examining Similarity Structure: Multidimensional Scaling and Related Approaches in Neuroimaging

    Directory of Open Access Journals (Sweden)

    Svetlana V. Shinkareva

    2013-01-01

    This paper covers similarity analyses, a subset of multivariate pattern analysis techniques that are based on similarity spaces defined by multivariate patterns. These techniques offer several advantages and complement other methods for brain data analyses, as they allow for comparison of representational structure across individuals, brain regions, and data acquisition methods. Particular attention is paid to multidimensional scaling and related approaches that yield spatial representations or provide methods for characterizing individual differences. We highlight unique contributions of these methods by reviewing recent applications to functional magnetic resonance imaging data and emphasize areas of caution in applying and interpreting similarity analysis methods.
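
    Classical (Torgerson) multidimensional scaling is the workhorse of these analyses: it turns a matrix of pairwise dissimilarities between conditions into low-dimensional coordinates. A minimal, self-contained sketch (the toy dissimilarity matrix is invented):

    ```python
    import numpy as np

    def classical_mds(D, k=2):
        """Embed n items with pairwise dissimilarities D into k dimensions."""
        D = np.asarray(D, float)
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
        B = -0.5 * J @ (D ** 2) @ J                # double-centred Gram matrix
        w, V = np.linalg.eigh(B)                   # eigenvalues, ascending
        idx = np.argsort(w)[::-1][:k]              # top-k eigenpairs
        return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

    # Toy dissimilarities among four conditions' neural activity patterns.
    D = np.array([[0, 1, 4, 5],
                  [1, 0, 3, 4],
                  [4, 3, 0, 1],
                  [5, 4, 1, 0]], float)
    print(classical_mds(D, k=2))                   # 4 x 2 coordinates
    ```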

  3. Hydrological similarity approach and rainfall satellite utilization for mini hydro power dam basic design (case study on the ungauged catchment at West Borneo, Indonesia)

    Science.gov (United States)

    Prakoso, W. G.; Murtilaksono, K.; Tarigan, S. D.; Purwanto, Y. J.

    2018-05-01

    An approach to estimating flow duration curves and design floods for ungauged catchments, where no rainfall or discharge data are available, has been developed using hydrological modelling, including a rainfall-runoff model driven by watershed characteristic datasets. Near-real-time rainfall data from multi-satellite platforms, e.g. TRMM, can be utilized in a regionalization approach for ungauged catchments. A hydrological similarity analysis was conducted for all major watersheds in Borneo predicted to be similar to the Nanga Raun watershed. It was found that a satisfactory hydrological model calibration could be achieved using catchment-weighted time series of TRMM daily rainfall data, performed on a nearby catchment deemed sufficiently similar to the Nanga Raun catchment in hydrological terms. Based on this calibration, the rainfall-runoff parameters were then transferred to a model. Relatively reliable flow duration curves and extreme discharge estimates were produced, subject to several limitations. Further work may address the primary limitations inherent in the hydrological and statistical analysis, especially extending the availability of rainfall and climate data with novel approaches such as downscaling of global climate models.

  4. Uncovering highly obfuscated plagiarism cases using fuzzy semantic-based similarity model

    Directory of Open Access Journals (Sweden)

    Salha M. Alzahrani

    2015-07-01

    Highly obfuscated plagiarism cases contain unseen and obfuscated texts, which pose difficulties for existing plagiarism detection methods. A fuzzy semantic-based similarity model for uncovering obfuscated plagiarism is presented and compared with five state-of-the-art baselines. Semantic relatedness between words is studied based on part-of-speech (POS) tags and WordNet-based similarity measures. Fuzzy rules are introduced to assess the semantic distance between short source and suspicious texts, implementing the semantic relatedness between words as a membership function to a fuzzy set. In order to minimize the numbers of false positives and false negatives, a learning method that combines a permission threshold and a variation threshold is used to decide true plagiarism cases. The proposed model and the baselines are evaluated on 99,033 ground-truth annotated cases extracted from different datasets, including 11,621 (11.7%) handmade paraphrases, 54,815 (55.4%) artificial plagiarism cases, and 32,578 (32.9%) plagiarism-free cases. We conduct extensive experimental verification, including a study of the effects of different segmentation schemes and parameter settings. Results are assessed using precision, recall, F-measure and granularity on stratified 10-fold cross-validation data. Statistical analysis using paired t-tests shows that the proposed approach is statistically significant in comparison with the baselines, which demonstrates the competence of the fuzzy semantic-based model to detect plagiarism cases beyond literal plagiarism. Additionally, analysis of variance (ANOVA) shows the effectiveness of the different segmentation schemes used with the proposed approach.
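
    The two-threshold decision can be sketched as follows: the mean semantic relatedness of aligned word pairs acts as the membership degree of the suspicious passage in the fuzzy set "copied from the source", and the permission and variation thresholds carve that degree into three verdicts. The threshold values and aggregation rule below are illustrative assumptions, not the paper's fitted parameters.

    ```python
    def fuzzy_plagiarism_verdict(pair_relatedness, permission=0.65, variation=0.15):
        """Decide a case from WordNet-style relatedness of aligned word pairs."""
        membership = sum(pair_relatedness) / len(pair_relatedness)
        if membership >= permission + variation:
            return "plagiarised"
        if membership <= permission - variation:
            return "not plagiarised"
        return "undecided"   # borderline: needs a finer-grained check

    # Relatedness of (source, suspicious) word pairs for one sentence.
    print(fuzzy_plagiarism_verdict([0.9, 0.8, 1.0, 0.7, 0.85]))   # plagiarised
    ```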

  5. Self-Similar Symmetry Model and Cosmic Microwave Background

    Directory of Open Access Journals (Sweden)

    Tomohide Sonoda

    2016-05-01

    In this paper, we present the self-similar symmetry (SSS) model that describes the hierarchical structure of the universe. The model is based on the concept of self-similarity, which explains the symmetry of the cosmic microwave background (CMB). The approximate length and time scales of the six hierarchies of the universe (grand unification, electroweak unification, the atom, the pulsar, the solar system, and the galactic system) are derived from the SSS model. In addition, the model implies that the electron mass and gravitational constant could vary with the CMB radiation temperature.

  6. A robust quantitative near infrared modeling approach for blend monitoring.

    Science.gov (United States)

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

    This study demonstrates a material-sparing near-infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development approach (blender-scale method) were compared by simultaneously monitoring a 1 kg batch-size blend run. Both models demonstrated similar performance. The small-scale method significantly reduces the total resources expended to develop near-infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials.

  7. MAPPING THE SIMILARITIES OF SPECTRA: GLOBAL AND LOCALLY-BIASED APPROACHES TO SDSS GALAXIES

    Energy Technology Data Exchange (ETDEWEB)

    Lawlor, David [Statistical and Applied Mathematical Sciences Institute (United States); Budavári, Tamás [Dept. of Applied Mathematics and Statistics, The Johns Hopkins University (United States); Mahoney, Michael W. [International Computer Science Institute (United States)

    2016-12-10

    We present a novel approach to studying the diversity of galaxies. It is based on a novel spectral graph technique, that of locally-biased semi-supervised eigenvectors. Our method introduces new coordinates that summarize an entire spectrum, similar to but going well beyond the widely used Principal Component Analysis (PCA). Unlike PCA, however, this technique does not assume that the Euclidean distance between galaxy spectra is a good global measure of similarity. Instead, we relax that condition to only the most similar spectra, and we show that doing so yields more reliable results for many astronomical questions of interest. The global variant of our approach can identify very finely numerous astronomical phenomena of interest. The locally-biased variants of our basic approach enable us to explore subtle trends around a set of chosen objects. The power of the method is demonstrated in the Sloan Digital Sky Survey Main Galaxy Sample, by illustrating that the derived spectral coordinates carry an unprecedented amount of information.

  8. Mapping the Similarities of Spectra: Global and Locally-biased Approaches to SDSS Galaxies

    Science.gov (United States)

    Lawlor, David; Budavári, Tamás; Mahoney, Michael W.

    2016-12-01

    We present a novel approach to studying the diversity of galaxies. It is based on a novel spectral graph technique, that of locally-biased semi-supervised eigenvectors. Our method introduces new coordinates that summarize an entire spectrum, similar to but going well beyond the widely used Principal Component Analysis (PCA). Unlike PCA, however, this technique does not assume that the Euclidean distance between galaxy spectra is a good global measure of similarity. Instead, we relax that condition to only the most similar spectra, and we show that doing so yields more reliable results for many astronomical questions of interest. The global variant of our approach can identify very finely numerous astronomical phenomena of interest. The locally-biased variants of our basic approach enable us to explore subtle trends around a set of chosen objects. The power of the method is demonstrated in the Sloan Digital Sky Survey Main Galaxy Sample, by illustrating that the derived spectral coordinates carry an unprecedented amount of information.

  9. MAPPING THE SIMILARITIES OF SPECTRA: GLOBAL AND LOCALLY-BIASED APPROACHES TO SDSS GALAXIES

    International Nuclear Information System (INIS)

    Lawlor, David; Budavári, Tamás; Mahoney, Michael W.

    2016-01-01

    We present a novel approach to studying the diversity of galaxies. It is based on a novel spectral graph technique, that of locally-biased semi-supervised eigenvectors. Our method introduces new coordinates that summarize an entire spectrum, similar to but going well beyond the widely used Principal Component Analysis (PCA). Unlike PCA, however, this technique does not assume that the Euclidean distance between galaxy spectra is a good global measure of similarity. Instead, we relax that condition to only the most similar spectra, and we show that doing so yields more reliable results for many astronomical questions of interest. The global variant of our approach can identify very finely numerous astronomical phenomena of interest. The locally-biased variants of our basic approach enable us to explore subtle trends around a set of chosen objects. The power of the method is demonstrated in the Sloan Digital Sky Survey Main Galaxy Sample, by illustrating that the derived spectral coordinates carry an unprecedented amount of information.
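
    The three records above describe locally-biased semi-supervised eigenvectors. As a rough stand-in for the global variant only, the sketch below builds a k-nearest-neighbour graph (trusting Euclidean distance only between the most similar spectra) and uses Laplacian eigenvectors as new coordinates; the data are synthetic, and the paper's locally-biased construction is not reproduced here.

    ```python
    import numpy as np
    from sklearn.neighbors import kneighbors_graph
    from scipy.sparse.csgraph import laplacian

    # spectra: rows are galaxies, columns are wavelength bins (synthetic here)
    rng = np.random.default_rng(1)
    spectra = rng.normal(size=(500, 100))

    # Trust Euclidean distance only between the most similar spectra:
    # a sparse k-nearest-neighbour graph replaces a dense global metric.
    W = kneighbors_graph(spectra, n_neighbors=10, mode="connectivity")
    W = 0.5 * (W + W.T)                       # symmetrize the graph

    L = laplacian(W, normed=True).toarray()
    vals, vecs = np.linalg.eigh(L)            # eigenvectors sorted by eigenvalue
    coords = vecs[:, 1:4]                     # first nontrivial ones as coordinates
    ```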

  10. A new modelling approach for zooplankton behaviour

    Science.gov (United States)

    Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.

    We have developed a new simulation technique to model zooplankton behaviour. The approach utilizes neither conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, similar to that of BEER [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non-"intelligent" models (random walk and correlated walk models) as well as with observed behaviour in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to those of live copepods.
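
    To make the baseline comparison concrete, here is a small sketch of the two non-"intelligent" reference models mentioned above: a random walk and a correlated (persistent-heading) walk. The step length, the von Mises turning-angle distribution, and its concentration parameter are illustrative choices, not taken from the paper.

    ```python
    import numpy as np

    def random_walk(n, step=1.0, rng=None):
        """Each move has an independent, uniformly random heading."""
        rng = rng or np.random.default_rng()
        angles = rng.uniform(0, 2 * np.pi, n)
        return np.cumsum([np.cos(angles), np.sin(angles)], axis=1) * step

    def correlated_walk(n, step=1.0, kappa=4.0, rng=None):
        """Each heading is the previous one plus a small turning angle,
        producing the directional persistence seen in copepod tracks."""
        rng = rng or np.random.default_rng()
        turns = rng.vonmises(0.0, kappa, n)     # turning angles concentrated near 0
        angles = np.cumsum(turns)
        return np.cumsum([np.cos(angles), np.sin(angles)], axis=1) * step

    rw = random_walk(1000)
    crw = correlated_walk(1000)   # compare e.g. net displacement of the two tracks
    ```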

  11. Non-frontal Model Based Approach to Forensic Face Recognition

    NARCIS (Netherlands)

    Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2012-01-01

    In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having a similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having a pose similar to the surveillance view.

  12. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to modeling tree crown development. These approaches are experimental (i.e. regressive), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The assumption common to all three is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects, which combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model against experimental data. Different stages of the above-mentioned approaches are described in the paper. Experimental data for spruce, a description of the computer modeling system, and a variant of the computer model are presented. (author). 9 refs, 4 figs
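
    A minimal sketch of the fractal idea, under the assumption of a simple binary, self-similar branching rule; the branching angles and shrink factor are placeholders, not the calibrated model of the report.

    ```python
    import math

    def grow(x, y, angle, length, depth, spread=0.5, shrink=0.7, segments=None):
        """Recursively generate branch segments of a self-similar tree crown."""
        if segments is None:
            segments = []
        if depth == 0 or length < 0.01:
            return segments
        x2 = x + length * math.cos(angle)
        y2 = y + length * math.sin(angle)
        segments.append(((x, y), (x2, y2)))
        # two daughter branches, scaled copies of the parent branch
        grow(x2, y2, angle + spread, length * shrink, depth - 1, spread, shrink, segments)
        grow(x2, y2, angle - spread, length * shrink, depth - 1, spread, shrink, segments)
        return segments

    crown = grow(0.0, 0.0, math.pi / 2, 1.0, depth=8)
    print(len(crown))   # 2^8 - 1 = 255 segments for a full binary branching
    ```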

  13. Perception of similarity: a model for social network dynamics

    International Nuclear Information System (INIS)

    Javarone, Marco Alberto; Armano, Giuliano

    2013-01-01

    Some properties of social networks (e.g., the mixing patterns and the community structure) appear deeply influenced by the individual perception of people. In this work we map behaviors by considering the similarity and popularity of people, also assuming that each person has his/her own perception and interpretation of similarity. Although investigated in different ways (depending on the specific scientific framework), from a computational perspective similarity is typically calculated as a distance measure. In accordance with this view, to represent social network dynamics we developed an agent-based model on top of a hyperbolic space on which individual distance measures are calculated. Simulations, performed in accordance with the proposed model, generate small-world networks that exhibit a community structure. We deem this model to be valuable for analyzing the relevant properties of real social networks. (paper)
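
    A hedged sketch of the kind of distance computation the abstract describes: agents placed in a hyperbolic plane (radial coordinate loosely playing the role of popularity, angular coordinate of similarity, following popularity-similarity network models) connect when their distance is small. The coordinates, threshold, and interpretation are assumptions, not the paper's calibrated model.

    ```python
    import numpy as np

    def hyperbolic_distance(r1, theta1, r2, theta2):
        """Distance in the native (polar) model of the hyperbolic plane."""
        dtheta = np.pi - abs(np.pi - abs(theta1 - theta2))   # wrapped angle difference
        x = (np.cosh(r1) * np.cosh(r2)
             - np.sinh(r1) * np.sinh(r2) * np.cos(dtheta))
        return np.arccosh(np.maximum(x, 1.0))

    rng = np.random.default_rng(2)
    n = 200
    r = rng.uniform(0, 5, n)               # radial coordinate ~ popularity
    theta = rng.uniform(0, 2 * np.pi, n)   # angular coordinate ~ similarity

    # connect agents whose perceived distance is small enough
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if hyperbolic_distance(r[i], theta[i], r[j], theta[j]) < 4.0]
    ```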

  14. Self-similar two-particle separation model

    DEFF Research Database (Denmark)

    Lüthi, Beat; Berg, Jacob; Ott, Søren

    2007-01-01

    We present a new stochastic model for relative two-particle separation in turbulence. Inspired by material line stretching, we suggest that a similar process also occurs beyond the viscous range, with time scaling according to the longitudinal second-order structure function S2(r), e.g., in the inertial range as ε^(−1/3) r^(2/3). Particle separation is modeled as a Gaussian process without invoking information on Eulerian acceleration statistics or on the precise shapes of Eulerian velocity distribution functions. The time scale is a function of S2(r) and thus of the Lagrangian evolving separation. The model predictions agree with numerical and experimental results for various initial particle separations. We present model results for fixed-time and fixed-scale statistics. We find that for the Richardson-Obukhov law, i.e., ⟨r²⟩ = g ε t³, to hold and to also be observed in experiments, high Reynolds numbers are required.

  15. Simulation and similarity using models to understand the world

    CERN Document Server

    Weisberg, Michael

    2013-01-01

    In the 1950s, John Reber convinced many Californians that the best way to solve the state's water shortage problem was to dam up the San Francisco Bay. Against massive political pressure, Reber's opponents persuaded lawmakers that doing so would lead to disaster. They did this not by empirical measurement alone, but also through the construction of a model. Simulation and Similarity explains why this was a good strategy while simultaneously providing an account of modeling and idealization in modern scientific practice. Michael Weisberg focuses on concrete, mathematical, and computational models in his consideration of the nature of models, the practice of modeling, and the nature of the relationship between models and real-world phenomena. In addition to a careful analysis of physical, computational, and mathematical models, Simulation and Similarity offers a novel account of the model/world relationship. Breaking with the dominant tradition, which favors the analysis of this relation through logical notions such as isomorphism.

  16. Idealness and similarity in goal-derived categories: a computational examination.

    Science.gov (United States)

    Voorspoels, Wouter; Storms, Gert; Vanpaemel, Wolf

    2013-02-01

    The finding that the typicality gradient in goal-derived categories is mainly driven by ideals rather than by exemplar similarity has stood uncontested for nearly three decades. Due to the rather rigid earlier implementations of similarity, a key question has remained, namely whether a more flexible approach to similarity would alter the conclusions. In the present study, we evaluated whether a similarity-based approach that allows for dimensional weighting could account for findings in goal-derived categories. To this end, we compared a computational model of exemplar similarity (the generalized context model; Nosofsky, Journal of Experimental Psychology: General, 115:39-57, 1986) and a computational model of ideal representation (the ideal-dimension model; Voorspoels, Vanpaemel, & Storms, Psychonomic Bulletin & Review, 18:1006-1014, 2011) in their accounts of exemplar typicality in ten goal-derived categories. In terms of both goodness-of-fit and generalizability, we found strong evidence for an ideal approach in nearly all categories. We conclude that focusing on a limited set of features is necessary but not sufficient to account for the observed typicality gradient. A second aspect of ideal representations, namely that extreme rather than common, central-tendency values drive typicality, seems to be crucial.
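
    For concreteness, a toy contrast of the two model families compared above: GCM-style typicality (summed, exponentially decaying similarity with dimensional weighting) versus ideal-point typicality. The feature values, weights, and the city-block distance choice are illustrative assumptions, not the fitted models of the study.

    ```python
    import numpy as np

    def gcm_typicality(items, weights, c=1.0):
        """Generalized-context-model style typicality: an exemplar is typical
        when it is similar (exponentially decaying with weighted distance)
        to the other members of the category."""
        n = len(items)
        diffs = np.abs(items[:, None, :] - items[None, :, :])  # pairwise |x_ik - x_jk|
        dist = (diffs * weights).sum(axis=2)                   # weighted city-block distance
        sim = np.exp(-c * dist)
        return (sim.sum(axis=1) - 1.0) / (n - 1)               # exclude self-similarity

    def ideal_typicality(items, weights, ideal):
        """Ideal-dimension style typicality: closeness to an extreme ideal point."""
        return -np.abs((items - ideal) * weights).sum(axis=1)

    items = np.array([[2.0, 5.0], [3.0, 4.0], [9.0, 1.0]])   # exemplars x features
    w = np.array([0.7, 0.3])                                  # dimensional weighting
    print(gcm_typicality(items, w), ideal_typicality(items, w, ideal=np.array([10.0, 0.0])))
    ```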

  17. Modeling Timbre Similarity of Short Music Clips.

    Science.gov (United States)

    Siedenburg, Kai; Müllensiefen, Daniel

    2017-01-01

    There is evidence from a number of recent studies that most listeners are able to extract information related to song identity, emotion, or genre from music excerpts with durations in the range of tenths of seconds. Because of these very short durations, timbre as a multifaceted auditory attribute appears a plausible candidate for the type of features that listeners make use of when processing short music excerpts. However, the importance of timbre in listening tasks that involve short excerpts has not yet been demonstrated empirically. Hence, the goal of this study was to develop a method that allows us to explore to what degree similarity judgments of short music clips can be modeled with low-level acoustic features related to timbre. We utilized the similarity data from two large samples of participants: Sample I was obtained via an online survey, used 16 clips of 400 ms length, and contained responses of 137,339 participants. Sample II was collected in a lab environment, used 16 clips of 800 ms length, and contained responses from 648 participants. Our model used two sets of audio features, which included commonly used timbre descriptors and the well-known Mel-frequency cepstral coefficients as well as their temporal derivatives. In order to predict pairwise similarities, the resulting distances between clips in terms of their audio features were used as predictor variables with partial least-squares regression. We found that a sparse selection of three to seven features from both descriptor sets, mainly encoding the coarse shape of the spectrum as well as spectrotemporal variability, best predicted similarities across the two sets of sounds. Notably, the inclusion of non-acoustic predictors of musical genre and record release date allowed much better generalization performance and explained up to 50% of shared variance (R²) between observations and model predictions. Overall, the results of this study empirically demonstrate that both acoustic features related to timbre and higher-level categorical information such as musical genre play a role in the perception of short music clips.
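
    A hedged sketch of the modeling pipeline described above, using librosa for MFCCs and their deltas and scikit-learn's PLS regression on pairwise feature distances. File names, similarity ratings, and the per-clip feature summary (temporal means) are placeholders, and the study's full descriptor set is not reproduced.

    ```python
    import numpy as np
    import librosa
    from sklearn.cross_decomposition import PLSRegression

    def clip_features(path, duration=0.8):
        """Coarse spectral-shape descriptors: MFCC means and their delta means."""
        y, sr = librosa.load(path, duration=duration)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        delta = librosa.feature.delta(mfcc)
        return np.concatenate([mfcc.mean(axis=1), delta.mean(axis=1)])

    # feats: one feature vector per clip (files below are placeholders);
    # y: observed pairwise similarity ratings from the listening experiments
    feats = [clip_features(f) for f in ["clip_a.wav", "clip_b.wav", "clip_c.wav"]]
    pairs = [(0, 1), (0, 2), (1, 2)]
    X = np.array([np.abs(feats[i] - feats[j]) for i, j in pairs])  # feature distances
    y = np.array([0.8, 0.2, 0.3])                                  # rated similarities

    pls = PLSRegression(n_components=2).fit(X, y)
    print(pls.predict(X).ravel())   # modeled similarity for each clip pair
    ```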

  18. Regulatory challenges and approaches to characterize nanomedicines and their follow-on similars.

    Science.gov (United States)

    Mühlebach, Stefan; Borchard, Gerrit; Yildiz, Selcan

    2015-03-01

    Nanomedicines are highly complex products and are the result of difficult-to-control manufacturing processes. Nonbiological complex drugs and their biological counterparts can comprise nanoparticles and therefore show nanomedicine characteristics. They consist of not fully known, nonhomomolecular structures and therefore cannot be characterized by physicochemical means alone. Moreover, intended copies of nanomedicines (follow-on similars) may have clinically meaningful differences, creating the regulatory challenge of how to grant a high degree of assurance for patients' benefit and safety. As an example, the current regulatory approach for marketing authorization of intended copies of nonbiological complex drugs appears inappropriate; likewise, a valid strategy incorporating the complexity of such systems remains undefined. To demonstrate sufficient similarity and comparability, a stepwise quality, nonclinical and clinical approach is necessary to obtain market authorization for follow-on products as therapeutic alternatives, substitution and/or interchangeable products. To fill the regulatory gap, harmonized and science-based standards are needed.

  19. Hierarchical modeling of systems with similar components: A framework for adaptive monitoring and control

    International Nuclear Information System (INIS)

    Memarzadeh, Milad; Pozzi, Matteo; Kolter, J. Zico

    2016-01-01

    System management includes the selection of maintenance actions depending on the available observations: when a system is made up of components known to be similar, data collected on one is also relevant for the management of the others. This is typically the case for wind farms, which are made up of similar turbines. Optimal management of wind farms is an important task due to the high cost of turbine operation and maintenance: in this context, we recently proposed a method for planning and learning at system level, called PLUS, built upon the Partially Observable Markov Decision Process (POMDP) framework, which treats transition and emission probabilities as random variables and is therefore suitable for including model uncertainty. PLUS models the components as independent or identical. In this paper, we extend that formulation, allowing for a weaker similarity among components. The proposed approach, called Multiple Uncertain POMDP (MU-POMDP), models the components as POMDPs and assumes the corresponding parameters to be dependent random variables. Through this framework, we can calibrate specific degradation and emission models for each component while, at the same time, processing observations at system level. We compare the performance of the proposed MU-POMDP with PLUS, and discuss its potential and computational complexity. - Highlights: • A computational framework is proposed for adaptive monitoring and control. • It adopts a scheme based on Markov Chain Monte Carlo for inference and learning. • Hierarchical Bayesian modeling is used to allow a system-level flow of information. • Results show potential of significant savings in management of wind farms.
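
    MU-POMDP itself involves MCMC over POMDP parameters; as a much-reduced sketch of the underlying idea of weak similarity (component-specific parameters tied through a shared prior so that data flows between components), here is a hierarchical Beta-Binomial comparison against fully independent estimates. The counts and prior values are invented.

    ```python
    import numpy as np

    # degradation events observed on each similar component (e.g. turbines)
    events = np.array([3, 5, 2, 4])      # observed degradation events
    trials = np.array([50, 60, 40, 55])  # inspections per component

    # Hierarchical Beta-Binomial sketch: component rates are distinct but
    # drawn from a shared prior, so information is pooled across components.
    alpha0, beta0 = 2.0, 20.0            # shared prior (would itself be learned)
    posterior_mean = (events + alpha0) / (trials + alpha0 + beta0)

    # Independent estimates for comparison: no information sharing at all.
    independent = events / trials
    print(posterior_mean, independent)
    ```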

  20. An Efficient Similarity Digests Database Lookup - A Logarithmic Divide & Conquer Approach

    Directory of Open Access Journals (Sweden)

    Frank Breitinger

    2014-09-01

    Investigating seized devices within digital forensics represents a challenging task due to the increasing amount of data. Common procedures utilize automated file identification, which reduces the amount of data an investigator has to examine manually. In recent years the research field of approximate matching has arisen to detect similar data. However, if n denotes the number of similarity digests in a database, then the lookup for a single similarity digest has complexity O(n). This paper presents a concept to extend existing approximate matching algorithms, which reduces the lookup complexity from O(n) to O(log(n)). Our proposed approach is based on the well-known divide and conquer paradigm and builds a Bloom filter-based tree data structure in order to enable an efficient lookup of similarity digests. Further, it is demonstrated that the presented technique is highly scalable, offering a trade-off between storage requirements and computational efficiency. We perform a theoretical assessment based on recently published results and reasonable magnitudes of input data, and show that the complexity reduction achieved by the proposed technique yields a 220-fold acceleration of look-up costs.
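
    A small sketch of the divide-and-conquer idea: a binary tree whose every node carries a Bloom filter over the digests beneath it, so a lookup descends only into matching subtrees and touches O(log n) nodes. The Bloom parameters and digest strings are illustrative, and this is not the paper's exact data structure.

    ```python
    import hashlib

    class Bloom:
        def __init__(self, m=1024, k=3):
            self.m, self.k, self.bits = m, k, 0
        def _positions(self, item):
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.m
        def add(self, item):
            for p in self._positions(item):
                self.bits |= 1 << p
        def might_contain(self, item):
            return all(self.bits >> p & 1 for p in self._positions(item))

    def build_tree(digests):
        """Each node holds a Bloom filter over all digests beneath it."""
        node = Bloom()
        for d in digests:
            node.add(d)
        if len(digests) == 1:
            return (node, digests[0], None, None)
        mid = len(digests) // 2
        return (node, None, build_tree(digests[:mid]), build_tree(digests[mid:]))

    def lookup(tree, item):
        """Descend only into subtrees whose filter may contain the item."""
        node, leaf, left, right = tree
        if not node.might_contain(item):
            return None
        if leaf is not None:
            return leaf if leaf == item else None   # guard against false positives
        return lookup(left, item) or lookup(right, item)

    tree = build_tree([f"digest-{i}" for i in range(16)])
    print(lookup(tree, "digest-7"))
    ```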

  1. Meeting your match: How attractiveness similarity affects approach behavior in mixed-sex dyads

    NARCIS (Netherlands)

    Straaten, I. van; Engels, R.C.M.E.; Finkenauer, C.; Holland, R.W.

    2009-01-01

    This experimental study investigated approach behavior toward opposite-sex others of similar versus dissimilar physical attractiveness. Furthermore, it tested the moderating effects of sex. Single participants interacted with confederates of high and low attractiveness. Observers rated their behavior in terms of relational investment.

  2. Multi-scale structural similarity index for motion detection

    Directory of Open Access Journals (Sweden)

    M. Abdel-Salam Nasr

    2017-07-01

    A recent approach to measuring image quality is the structural similarity index (SSI). This paper presents a novel algorithm based on the multi-scale structural similarity index for motion detection (MS-SSIM) in videos. The MS-SSIM approach is based on modeling image luminance, contrast and structure at multiple scales. The MS-SSIM approach has resulted in much better performance than the single-scale SSI approach, but at the cost of a relatively lower processing speed. The major advantages of the presented algorithm are both the higher detection accuracy and the quasi real-time processing speed.
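
    A hedged sketch of the multi-scale idea using scikit-image's single-scale SSIM averaged over dyadic downscalings; the published MS-SSIM combines per-scale terms with calibrated exponents rather than a plain mean, and the frames and threshold here are synthetic.

    ```python
    import numpy as np
    from skimage.metrics import structural_similarity
    from skimage.transform import rescale

    def ms_ssim_score(frame_a, frame_b, scales=3):
        """Average SSIM over several dyadic scales (simplified multi-scale SSIM)."""
        scores = []
        a, b = frame_a.astype(float), frame_b.astype(float)
        for _ in range(scales):
            scores.append(structural_similarity(a, b, data_range=255))
            a = rescale(a, 0.5, anti_aliasing=True)
            b = rescale(b, 0.5, anti_aliasing=True)
        return float(np.mean(scores))

    # motion is flagged when consecutive frames stop being structurally similar
    prev = np.zeros((240, 320)); curr = prev.copy(); curr[100:140, 150:200] = 255.0
    if ms_ssim_score(prev, curr) < 0.9:   # threshold is application-specific
        print("motion detected")
    ```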

  3. APPLICABILITY OF SIMILARITY CONDITIONS TO ANALOGUE MODELLING OF TECTONIC STRUCTURES

    Directory of Open Access Journals (Sweden)

    Mikhail A. Goncharov

    2010-01-01

    The publication is aimed at comparing the concepts of V.V. Belousov and M.V. Gzovsky, outstanding researchers who established the fundamentals of tectonophysics in Russia, specifically similarity conditions as applied to tectonophysical modeling. Quotations from their publications illustrate the differences in their views. In this respect, we can reckon V.V. Belousov a «realist», as he supported «the liberal point of view» [Methods of modelling…, 1988, p. 21–22], whereas M.V. Gzovsky can be regarded as an «idealist», as he believed that similarity conditions should be mandatorily applied to ensure correctness of physical modeling of tectonic deformations and structures [Gzovsky, 1975, pp. 88 and 94]. Objectives of the present publication are (1) to be another reminder about the desirability of compliance with similarity conditions in experimental tectonics; (2) to point out difficulties in ensuring such compliance; (3) to give examples which bring out the fact that similarity conditions are often met per se, i.e. automatically observed; and (4) to show that modeling can be simplified in some cases without compromising quantitative estimations of parameters of structure formation. (1) Physical modelling of tectonic deformations and structures should be conducted, if possible, in compliance with conditions of geometric and physical similarity between experimental models and the corresponding natural objects. In any case, a researcher should have a clear vision of the conditions applicable to each particular experiment. (2) Application of similarity conditions is often challenging due to unavoidable difficulties caused by the following: (a) imperfection of experimental equipment and technologies (Figs. 1 to 3); (b) uncertainties in estimating parameters of formation of natural structures, including the main ones: structure size (Fig. 4), time of formation (Fig. 5), and deformation properties of the medium wherein such structures are formed, including, first of all, viscosity (Fig. 6).
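
    To make the similarity-condition arithmetic concrete, here is a toy scaling calculation of the standard kind used in analogue modelling: the stress ratio follows from the density, gravity and length ratios, and the time ratio from viscosity and stress. All numbers are illustrative, not taken from the paper.

    ```python
    # Standard similarity scaling for analogue experiments (model/nature ratios).
    L_ratio = 1e-6            # 1 cm in the model ~ 10 km in nature
    rho_ratio = 1500 / 2700   # model material density / rock density
    g_ratio = 1.0             # normal-gravity experiment

    sigma_ratio = rho_ratio * g_ratio * L_ratio   # stress: sigma* = rho* g* L*
    eta_ratio = 1e4 / 1e21                        # model/nature viscosity
    t_ratio = eta_ratio / sigma_ratio             # time: t* = eta* / sigma*

    seconds_per_Myr = 3.15e13
    print(f"1 Myr in nature ~ {t_ratio * seconds_per_Myr:.0f} s in the model")
    ```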

  4. Meeting your match: how attractiveness similarity affects approach behavior in mixed-sex dyads.

    Science.gov (United States)

    van Straaten, Ischa; Engels, Rutger C M E; Finkenauer, Catrin; Holland, Rob W

    2009-06-01

    This experimental study investigated approach behavior toward opposite-sex others of similar versus dissimilar physical attractiveness. Furthermore, it tested the moderating effects of sex. Single participants interacted with confederates of high and low attractiveness. Observers rated their behavior in terms of relational investment (i.e., behavioral efforts related to the improvement of interaction fluency, communication of positive interpersonal affect, and positive self-presentation). As expected, men displayed more relational investment behavior if their own physical attractiveness was similar to that of the confederate. For women, no effects of attractiveness similarity on relational investment behavior were found. Results are discussed in the light of positive assortative mating, preferences for physically attractive mates, and sex differences in attraction-related interpersonal behaviors.

  5. A Markovian approach for modeling packet traffic with long range dependence

    DEFF Research Database (Denmark)

    Andersen, Allan T.; Nielsen, Bo Friis

    1998-01-01

    We illustrate that a superposition of four two-state Markov modulated Poisson processes (MMPPs) suffices to model second-order self-similar behavior over several time scales. Our modeling approach allows us to fit additional descriptors while maintaining the second-order behavior.
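
    A minimal sketch of the construction described above: a discrete-time simulation of two-state MMPPs whose switching rates are spread over several time scales, then superposed. The arrival rates, switching probabilities, and the specific parameter spread are illustrative.

    ```python
    import numpy as np

    def mmpp_two_state(T, rates=(1.0, 20.0), switch=(0.05, 0.5), dt=0.01, rng=None):
        """Discrete-time simulation of a two-state Markov modulated Poisson process."""
        rng = rng or np.random.default_rng()
        n = int(T / dt)
        counts = np.zeros(n)
        state = 0
        for i in range(n):
            if rng.random() < switch[state] * dt:   # Markov state transition
                state = 1 - state
            counts[i] = rng.poisson(rates[state] * dt)
        return counts

    # Superpose four MMPPs with switching rates spread over several time scales,
    # approximating second-order self-similar (long-range dependent) traffic.
    rng = np.random.default_rng(3)
    traffic = sum(mmpp_two_state(100.0, switch=(s, 10 * s), rng=rng)
                  for s in (0.001, 0.01, 0.1, 1.0))
    ```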

  6. The predictive model on the user reaction time using the information similarity

    International Nuclear Information System (INIS)

    Lee, Sung Jin; Heo, Gyun Young; Chang, Soon Heung

    2005-01-01

    Human performance is frequently degraded because people forget. Memory is one of the brain processes that are important when trying to understand how people process information. Although a large number of studies have examined human performance, little is known about the effect of similarity on it. The purpose of this paper is to propose and validate a quantitative, predictive model of human response time in user interfaces, built around the concept of similarity. However, it is not easy to explain human performance with similarity or information amount alone. We are confronted by two difficulties: constructing a quantitative model of human response time that incorporates similarity, and validating the proposed model experimentally. We built the quantitative model on Hick's law and the law of practice. In addition, we validated the model under various experimental conditions by measuring participants' response times in a computer-based display environment. Experimental results reveal that human performance improves with the similarity of the user interface. We believe the proposed model is useful in the user interface design and evaluation phases.
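
    The abstract names Hick's law as one building block; below is a toy illustration of how a similarity term might shrink the effective choice set within Hick's law. The functional form and constants are assumptions made for illustration, not the paper's fitted model.

    ```python
    import math

    def predicted_rt(n_alternatives, similarity, a=0.2, b=0.15):
        """Hick's-law response time, reduced when the interface resembles
        one the user already knows (illustrative functional form only)."""
        effective_n = 1 + n_alternatives * (1.0 - similarity)  # similarity shrinks search set
        return a + b * math.log2(effective_n + 1)

    for sim in (0.0, 0.5, 0.9):
        print(sim, round(predicted_rt(8, sim), 3), "s")
    ```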

  7. Numerical study of similarity in prototype and model pumped turbines

    International Nuclear Information System (INIS)

    Li, Z J; Wang, Z W; Bi, H L

    2014-01-01

    A similarity study of prototype and model pumped turbines is performed by numerical simulation, and the partial-discharge case is analysed in detail. It is found that in the RSI (rotor-stator interaction) region, where the flow is convectively accelerated with minor flow separation, a high level of similarity in flow patterns and pressure fluctuation appears, with the relative pressure fluctuation amplitude of the model turbine slightly higher than that of the prototype turbine. In the runner, where the flow is convectively accelerated with severe separation, similarity fades substantially due to the different topology of flow separation and vortex formation brought about by the distinct Reynolds numbers of the two turbines. In the draft tube, where the flow is diffusively decelerated, similarity weakens owing to differences in vortex rope formation, again an effect of Reynolds number. It is noted that the pressure fluctuation amplitude and characteristic frequency of the model turbine are larger than those of the prototype turbine. The differences in pressure fluctuation characteristics are discussed theoretically through the dimensionless Navier-Stokes equations. The above conclusions are all based on simulations that do not account for penstock response and resonance.

  8. Inter Genre Similarity Modelling For Automatic Music Genre Classification

    OpenAIRE

    Bagci, Ulas; Erzin, Engin

    2009-01-01

    Music genre classification is an essential tool for music information retrieval systems and has found critical applications in various media platforms. Two important problems in automatic music genre classification are feature extraction and classifier design. This paper investigates inter-genre similarity modelling (IGS) to improve the performance of automatic music genre classification. Inter-genre similarity information is extracted over the mis-classified feature population.

  9. Brazilian N2 laser similar to imported models

    International Nuclear Information System (INIS)

    Santos, P.A.M. dos; Tavares Junior, A.D.; Silva Reis, H. da; Tagliaferri, A.A.; Massone, C.A.

    1981-09-01

    The development of a high-power N2 laser, similar to imported models but built entirely from Brazilian materials, is described. The prototype shows a pulse repetition rate that varies from 1 to 50 pulses per second and has a peak power of 500 kW. (Author)

  10. Genome-Wide Expression Profiling of Five Mouse Models Identifies Similarities and Differences with Human Psoriasis

    Science.gov (United States)

    Swindell, William R.; Johnston, Andrew; Carbajal, Steve; Han, Gangwen; Wohn, Christian; Lu, Jun; Xing, Xianying; Nair, Rajan P.; Voorhees, John J.; Elder, James T.; Wang, Xiao-Jing; Sano, Shigetoshi; Prens, Errol P.; DiGiovanni, John; Pittelkow, Mark R.; Ward, Nicole L.; Gudjonsson, Johann E.

    2011-01-01

    Development of a suitable mouse model would facilitate the investigation of pathomechanisms underlying human psoriasis and would also assist in development of therapeutic treatments. However, while many psoriasis mouse models have been proposed, no single model recapitulates all features of the human disease, and standardized validation criteria for psoriasis mouse models have not been widely applied. In this study, whole-genome transcriptional profiling is used to compare gene expression patterns manifested by human psoriatic skin lesions with those that occur in five psoriasis mouse models (K5-Tie2, imiquimod, K14-AREG, K5-Stat3C and K5-TGFbeta1). While the cutaneous gene expression profiles associated with each mouse phenotype exhibited statistically significant similarity to the expression profile of psoriasis in humans, each model displayed distinctive sets of similarities and differences in comparison to human psoriasis. For all five models, correspondence to the human disease was strong with respect to genes involved in epidermal development and keratinization. Immune and inflammation-associated gene expression, in contrast, was more variable between models as compared to the human disease. These findings support the value of all five models as research tools, each with identifiable areas of convergence to and divergence from the human disease. Additionally, the approach used in this paper provides an objective and quantitative method for evaluation of proposed mouse models of psoriasis, which can be strategically applied in future studies to score strengths of mouse phenotypes relative to specific aspects of human psoriasis. PMID:21483750

  11. The continuous similarity model of bulk soil-water evaporation

    Science.gov (United States)

    Clapp, R. B.

    1983-01-01

    The continuous similarity model of evaporation is described. In it, evaporation is conceptualized as a two-stage process. For an initially moist soil, evaporation is first climate limited, but later it becomes soil limited. During the latter stage, the evaporation rate is termed evaporability, and mathematically it is inversely proportional to the evaporation deficit. A functional approximation of the moisture distribution within the soil column is also included in the model. The model was tested using data from four experiments conducted near Phoenix, Arizona, and there was excellent agreement between the simulated and observed evaporation. The model also predicted the time of transition to the soil-limited stage reasonably well. For one of the experiments, a third stage of evaporation, when vapor diffusion predominates, was observed. The occurrence of this stage was related to the decrease in moisture at the surface of the soil. The continuous similarity model does not account for vapor flow. The results show that climate, through the potential evaporation rate, has a strong influence on the time of transition to the soil-limited stage. After this transition, however, bulk evaporation is independent of climate until the effects of vapor flow within the soil predominate.
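
    One possible reading of the two-stage rule, sketched below: evaporation proceeds at the potential (climate-limited) rate until a soil-limited "evaporability", taken here as inversely proportional to the cumulative evaporation, becomes smaller. The constant k and this definition of the deficit are assumptions, not the paper's formulation.

    ```python
    import numpy as np

    def bulk_evaporation(potential_rate, k, days):
        """Two-stage sketch: climate-limited at the potential rate, then
        soil-limited with rate inversely proportional to the cumulative
        evaporation deficit (the 'evaporability' stage)."""
        cumulative, series = 0.0, []
        for _ in range(days):
            # e = k / deficit; the max() guard avoids division by zero early on
            soil_limited = k / max(cumulative, k / potential_rate)
            e = min(potential_rate, soil_limited)
            cumulative += e
            series.append(e)
        return np.array(series)

    e = bulk_evaporation(potential_rate=8.0, k=40.0, days=30)  # mm/day, illustrative
    ```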

  12. PubMed-supported clinical term weighting approach for improving inter-patient similarity measure in diagnosis prediction.

    Science.gov (United States)

    Chan, Lawrence Wc; Liu, Ying; Chan, Tao; Law, Helen Kw; Wong, S C Cesar; Yeung, Andy Ph; Lo, K F; Yeung, S W; Kwok, K Y; Chan, William Yl; Lau, Thomas Yh; Shyu, Chi-Ren

    2015-06-02

    Similarity-based retrieval of Electronic Health Records (EHRs) from large clinical information systems provides physicians with evidence support when making diagnoses or referring examinations for suspected cases. Clinical terms in EHRs represent high-level conceptual information, and a similarity measure established on these terms reflects the chance of inter-patient disease co-occurrence. The assumption that clinical terms are equally relevant to a disease is unrealistic and reduces prediction accuracy. Here we propose a term weighting approach supported by the PubMed search engine to address this issue. We collected and studied 112 abdominal computed tomography imaging examination reports from four hospitals in Hong Kong. Clinical terms, namely the image findings related to hepatocellular carcinoma (HCC), were extracted from the reports. Through two systematic PubMed search methods, generic and specific term weightings were established by estimating the conditional probabilities of clinical terms given HCC. Each report was characterized by an ontological feature vector, giving 6216 vector pairs in total. We optimized the modified direction cosine (mDC) with respect to a regularization constant embedded into the feature vector. Equal, generic and specific term weighting approaches were applied to measure the similarity of each pair, and their performances for predicting inter-patient co-occurrence of HCC diagnoses were compared using Receiver Operating Characteristic (ROC) analysis. The areas under the curves (AUROCs) of similarity scores based on equal, generic and specific term weighting approaches were 0.735, 0.728 and 0.743, respectively. Our findings suggest that the optimized similarity measure with specific term weighting applied to EHRs can significantly improve the accuracy of predicting inter-patient co-occurrence of diagnoses when compared with equal and generic term weighting approaches.
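
    A hedged sketch of weighted similarity between term vectors in the spirit of the approach above: binary report vectors, PubMed-style conditional-probability weights, and a weighted cosine. The paper's modified direction cosine additionally embeds a regularization constant, which is omitted here; the terms and weight values are invented.

    ```python
    import numpy as np

    def weighted_cosine(u, v, w):
        """Cosine similarity between binary clinical-term vectors, with each
        term weighted by its PubMed-derived relevance to the diagnosis."""
        wu, wv = w * u, w * v
        denom = np.linalg.norm(wu) * np.linalg.norm(wv)
        return float(wu @ wv / denom) if denom else 0.0

    terms = ["arterial enhancement", "washout", "cirrhosis", "ascites"]
    weights = np.array([0.9, 0.85, 0.6, 0.3])    # e.g. P(term | HCC) estimates
    patient_a = np.array([1, 1, 1, 0])           # findings present in each report
    patient_b = np.array([1, 1, 0, 1])
    print(weighted_cosine(patient_a, patient_b, weights))
    ```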

  13. Contextual Factors for Finding Similar Experts

    DEFF Research Database (Denmark)

    Hofmann, Katja; Balog, Krisztian; Bogers, Toine

    2010-01-01

    Contextual factors that shape how people locate expertise, as identified in expertise-seeking models, are rarely taken into account. In this article, we extend content-based expert-finding approaches with contextual factors that have been found to influence human expert finding. We focus on a task of science communicators in a knowledge-intensive environment: the task of finding similar experts, given an example expert. Our approach combines expertise-seeking and retrieval research. First, we conduct a user study to identify contextual factors that may play a role in the studied task and environment. Then, we design expert retrieval models to capture these factors. We combine these with content-based retrieval models and evaluate them in a retrieval experiment. Our main finding is that while content-based features are the most important, human participants also take contextual factors into account, such as media experience and organizational structure. We develop two principled ways of modeling the identified contextual factors.

  14. Biomarker- and similarity coefficient-based approaches to bacterial mixture characterization using matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS).

    Science.gov (United States)

    Zhang, Lin; Smart, Sonja; Sandrin, Todd R

    2015-11-05

    MALDI-TOF MS profiling has been shown to be a rapid and reliable method to characterize pure cultures of bacteria. Currently, there is keen interest in using this technique to identify bacteria in mixtures. Promising results have been reported with two- or three-isolate model systems using biomarker-based approaches. In this work, we applied MALDI-TOF MS-based methods to a more complex model mixture containing six bacteria. We employed: 1) a biomarker-based approach that has previously been shown to be useful in identification of individual bacteria in pure cultures and simple mixtures and 2) a similarity coefficient-based approach that is routinely and nearly exclusively applied to identification of individual bacteria in pure cultures. Both strategies were developed and evaluated using blind-coded mixtures. With regard to the biomarker-based approach, results showed that most peaks in mixture spectra could be assigned to those found in spectra of each component bacterium; however, peaks shared by two isolates as well as peaks that could not be assigned to any individual component isolate were observed. For two-isolate blind-coded samples, bacteria were correctly identified using both similarity coefficient- and biomarker-based strategies, while for blind-coded samples containing more than two isolates, bacteria were more effectively identified using a biomarker-based strategy.
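
    A minimal sketch of the biomarker-based strategy as described: mixture peaks are assigned to whichever reference isolates have a library peak within a mass tolerance, with unassignable peaks reported separately. The peak lists, tolerance, and species names are illustrative placeholders.

    ```python
    import numpy as np

    def assign_peaks(mixture_peaks, libraries, tol=2.0):
        """Assign each m/z peak in a mixture spectrum to the reference isolates
        whose library contains a peak within +/- tol (Da)."""
        hits = {name: 0 for name in libraries}
        unassigned = []
        for mz in mixture_peaks:
            owners = [n for n, peaks in libraries.items()
                      if np.min(np.abs(np.asarray(peaks) - mz)) <= tol]
            for n in owners:
                hits[n] += 1
            if not owners:
                unassigned.append(mz)
        return hits, unassigned

    libs = {"E. coli": [4365, 5381, 7274], "B. subtilis": [3400, 6800, 7332]}
    mix = [4366, 5380, 6801, 7273, 9100]
    print(assign_peaks(mix, libs))   # peak 9100 stays unassigned, as in mixtures
    ```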

  15. A behavioral similarity measure between labeled Petri nets based on principal transition sequences

    NARCIS (Netherlands)

    Wang, J.; He, T.; Wen, L.; Wu, N.; Hofstede, ter A.H.M.; Su, J.; Meersman, R.; Dillon, T.S.; Herrero, P.

    2010-01-01

    Being able to determine the degree of similarity between process models is important for the management, reuse, and analysis of business process models. In this paper we propose a novel method to determine the degree of similarity between process models, which exploits their semantics. Our approach is based on principal transition sequences.

  16. METHODOLOGICAL APPROACHES FOR MODELING THE RURAL SETTLEMENT DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Gorbenkova Elena Vladimirovna

    2017-10-01

    Subject: the paper describes research results on validation of a rural settlement development model. The basic methods and approaches for solving the problem of assessing the efficiency of urban and rural settlement development are considered. Research objectives: determination of methodological approaches to modeling and creating a model for the development of rural settlements. Materials and methods: domestic and foreign experience in modeling the territorial development of urban and rural settlements and settlement structures was generalized. The motivation for using the Pentagon-model for solving similar problems was demonstrated. Based on a systematic analysis of existing development models of urban and rural settlements, as well as the authors' method for assessing the level of agro-town development, the systems/factors that are necessary for the sustainable development of a rural settlement are identified. Results: we created a rural development model which consists of five major systems that include critical factors essential for achieving sustainable development of a settlement system: the ecological system, economic system, administrative system, anthropogenic (physical) system, and social system (supra-structure). The methodological approaches for creating an evaluation model of rural settlement development were revealed; the basic motivating factors that provide the interrelations of systems were determined; and the critical factors for each subsystem were identified and substantiated. Such an approach is justified by the composition of tasks for territorial planning at the local and state administration levels. The feasibility of applying the basic Pentagon-model, which was successfully used for solving analogous problems of sustainable development, was shown. Conclusions: the resulting model can be used for identifying and substantiating the critical factors for sustainable rural development and can also become the basis of further territorial planning.

  17. Modeling healthcare authorization and claim submissions using the openEHR dual-model approach

    Science.gov (United States)

    2011-01-01

    Background: The TISS standard is a set of mandatory forms and electronic messages for healthcare authorization and claim submissions among healthcare plans and providers in Brazil. It is not based on formal models as the new generation of health informatics standards suggests. The objective of this paper is to model the TISS in terms of the openEHR archetype-based approach and integrate it into a patient-centered EHR architecture. Methods: Three approaches were adopted to model TISS. In the first approach, a set of archetypes was designed using ENTRY subclasses. In the second one, a set of archetypes was designed using exclusively ADMIN_ENTRY and CLUSTERs as their root classes. In the third approach, the openEHR ADMIN_ENTRY is extended with classes designed for authorization and claim submissions, and an ISM_TRANSITION attribute is added to the COMPOSITION class. Another set of archetypes was designed based on this model. For all three approaches, templates were designed to represent the TISS forms. Results: The archetypes based on the openEHR RM (Reference Model) can represent all TISS data structures. The extended model adds subclasses and an attribute to the COMPOSITION class to represent information on authorization and claim submissions. The archetypes based on all three approaches have similar structures, although rooted in different classes. The extended openEHR RM model is more semantically aligned with the concepts involved in a claim submission, but may disrupt interoperability with other systems and the current tools must be adapted to deal with it. Conclusions: Modeling the TISS standard by means of the openEHR approach makes it aligned with ISO recommendations and provides a solid foundation on which the TISS can evolve. Although there are few administrative archetypes available, the openEHR RM is expressive enough to represent the TISS standard. This paper focuses on the TISS but its results may be extended to other billing processes. A complete

  18. Towards Personalized Medicine: Leveraging Patient Similarity and Drug Similarity Analytics

    Science.gov (United States)

    Zhang, Ping; Wang, Fei; Hu, Jianying; Sorrentino, Robert

    2014-01-01

    The rapid adoption of electronic health records (EHR) provides a comprehensive source for exploratory and predictive analytics to support clinical decision-making. In this paper, we investigate how to utilize EHR to tailor treatments to individual patients based on their likelihood to respond to a therapy. We construct a heterogeneous graph which includes two domains (patients and drugs) and encodes three relationships (patient similarity, drug similarity, and patient-drug prior associations). We describe a novel approach for performing a label propagation procedure to spread the label information representing the effectiveness of different drugs for different patients over this heterogeneous graph. The proposed method has been applied to a real-world EHR dataset to help identify personalized treatments for hypercholesterolemia. The experimental results demonstrate the effectiveness of the approach and suggest that the combination of appropriate patient similarity and drug similarity analytics could lead to actionable insights for personalized medicine. Particularly, by leveraging drug similarity in combination with patient similarity, our method could perform well even on new or rarely used drugs for which there are few records of known past performance. PMID:25717413
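
    A small sketch of label propagation over a heterogeneous patient-drug graph, in the spirit of the approach above: known drug effectiveness is spread along patient-patient, drug-drug, and patient-drug edges. The graph, edge weights, damping factor, and iteration count are all illustrative.

    ```python
    import numpy as np

    def propagate(S, Y, alpha=0.8, iters=50):
        """Label propagation F <- alpha * S @ F + (1 - alpha) * Y on a
        row-normalized similarity graph S, seeded with known outcomes Y."""
        F = Y.copy()
        for _ in range(iters):
            F = alpha * S @ F + (1 - alpha) * Y
        return F

    # toy heterogeneous graph: 3 patients + 2 drugs; similarities + prior links
    A = np.array([[0, .8, .1, 1, 0],    # patient-patient and patient-drug edges
                  [.8, 0, .2, 0, 0],
                  [.1, .2, 0, 0, 1],
                  [1, 0, 0, 0, .5],     # drug-drug similarity
                  [0, 0, 1, .5, 0]], float)
    S = A / A.sum(axis=1, keepdims=True)
    Y = np.array([[1], [0], [0], [0], [0]], float)  # drug link known effective
    print(propagate(S, Y).ravel())   # propagated effectiveness scores
    ```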

  19. The empirical versus DSM-oriented approach of the child behavior checklist: Similarities and dissimilarities

    NARCIS (Netherlands)

    Wolff, M.S. de; Vogels, A.G.C.; Reijneveld, S.A.

    2014-01-01

    The DSM-oriented approach of the Child Behavior Checklist (CBCL) is a relatively new classification of problem behavior in children and adolescents. Given the clinical and scientific relevance of the CBCL, this study examines similarities and dissimilarities between the empirical and the DSM-oriented approach.

  20. The Empirical Versus DSM-Oriented Approach of the Child Behavior Checklist Similarities and Dissimilarities

    NARCIS (Netherlands)

    de Wolff, Marianne S.; Vogels, Anton G. C.; Reijneveld, Sijmen A.

    2014-01-01

    The DSM-oriented approach of the Child Behavior Checklist (CBCL) is a relatively new classification of problem behavior in children and adolescents. Given the clinical and scientific relevance of the CBCL, this study examines similarities and dissimilarities between the empirical and the DSM-oriented approach.

  1. Spatio-Temporal Evaluation and Comparison of the MM5 Model Using a Similarity Algorithm

    Directory of Open Access Journals (Sweden)

    N. Siabi

    2016-02-01

    Introduction: Temporal and spatial change of meteorological and environmental variables is very important. These changes can be predicted by numerical prediction models over time and at different locations, and can be provided as spatial zoning maps using interpolation methods such as geostatistics (16, 6). But such maps can only be compared visually, qualitatively, and univariately, and only for a limited number of maps (15). To resolve this problem the similarity algorithm is used, a method for the simultaneous comparison of a large number of data (18). Numerical prediction models such as MM5 have been used in different studies (10, 22, 23), but little research has been done to compare the spatio-temporal similarity of the models with real data quantitatively. The purpose of this paper is to integrate geostatistical techniques with a similarity algorithm to compare the spatial and temporal MM5 model predictions with real data. Materials and Methods: The study area is the north-east of Iran, spanning 55 to 61 degrees of longitude and 30 to 38 degrees of latitude. Actual monthly and annual temperature and precipitation data for the period 1990-2010 were received from the Meteorological Agency and the Department of Energy. MM5 model data, with a spatial resolution of 0.5 × 0.5 degrees, were downloaded from the NASA website (5). GS+ and ArcGIS software were used to produce the map of each variable. We used the multivariate methods co-kriging and kriging with an external drift, applying topography and height as a secondary variable via a digital elevation model (6, 12, 14). Then the standardization and similarity algorithms (9, 11) were applied, by programming in MATLAB, to each map grid point. The spatial and temporal similarities between the data collections and the model results were expressed as F values. These values lie between 0 and 0.5, where a value below 0.2 indicates good similarity and a value near 0.5 shows very poor similarity. The results were plotted on maps in MATLAB.

  2. Vector-model-supported approach in prostate plan optimization

    International Nuclear Information System (INIS)

    Liu, Eva Sau Fan; Wu, Vincent Wing Cheung; Harris, Benjamin; Lehman, Margot; Pryor, David; Chan, Lawrence Wing Chi

    2017-01-01

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base for retrieving similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, consisting of 30 cases each of S&S IMRT, 1-arc VMAT, and 2-arc VMAT plans, including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in planning time and iterations with vector-model-supported optimization, by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and fewer iterations.

  3. Vector-model-supported approach in prostate plan optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Eva Sau Fan [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Wu, Vincent Wing Cheung [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Harris, Benjamin [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Lehman, Margot; Pryor, David [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); School of Medicine, University of Queensland (Australia); Chan, Lawrence Wing Chi, E-mail: wing.chi.chan@polyu.edu.hk [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong)

    2017-07-01

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base for retrieving similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, consisting of 30 cases each of S&S IMRT, 1-arc VMAT, and 2-arc VMAT plans, including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in planning time and iterations with vector-model-supported optimization, by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and fewer iterations.
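
    A hedged sketch of the retrieval step shared by the two records above: previously treated cases are represented as feature vectors, and the nearest case (cosine distance) supplies the starting planning parameters. The features and stored "plans" are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    # rows: previously treated prostate cases, described by structural and
    # physiologic features extracted from DICOM (values are synthetic here)
    rng = np.random.default_rng(4)
    reference = rng.normal(size=(100, 12))
    plans = [f"planning-parameters-{i}" for i in range(100)]  # stored per case

    nn = NearestNeighbors(n_neighbors=1, metric="cosine").fit(reference)
    test_case = rng.normal(size=(1, 12))
    _, idx = nn.kneighbors(test_case)
    starting_plan = plans[idx[0, 0]]  # reuse as the optimization starting point
    ```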

  4. Morphological similarities between DBM and a microeconomic model of sprawl

    Science.gov (United States)

    Caruso, Geoffrey; Vuidel, Gilles; Cavailhès, Jean; Frankhauser, Pierre; Peeters, Dominique; Thomas, Isabelle

    2011-03-01

    We present a model that simulates the growth of a metropolitan area on a 2D lattice. The model is dynamic and based on microeconomics. Households show preferences for nearby open spaces and neighbourhood density. They compete on the land market. They travel along a road network to access the CBD. A planner ensures the connectedness and maintenance of the road network. The spatial pattern of houses, green spaces and road network self-organises, emerging from the agents' individualistic decisions. We perform several simulations and vary residential preferences. Our results show morphologies and transition phases that are similar to those of Dielectric Breakdown Models (DBM). Such similarities were observed earlier by other authors, but we show here that they can be deduced from the functioning of the land market and thus explicitly connected to urban economic theory.

  5. Mathematical approach for the assessment of similarity factor using a new scheme for calculating weight.

    Science.gov (United States)

    Gohel, M C; Sarvaiya, K G; Shah, A R; Brahmbhatt, B K

    2009-03-01

    The objective of the present work was to propose a method for calculating the weight in the Moore and Flanner equation. The percentage coefficient of variation in the reference and test formulations at each time point was used to calculate the weight. Literature-reported data are used to demonstrate the applicability of the method. The advantages and applications of the new approach are described. The results show a drop in the value of the similarity factor compared with the approach proposed in earlier work. Scientists who need high accuracy in the calculation may use this approach.
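
    For reference, the Moore-Flanner similarity factor f2 with a per-time-point weight inserted, plus one possible %CV-based weighting (w_t = 1 recovers the unweighted form). The exact weighting scheme proposed in the paper may differ, so treat the normalization below as an assumption.

    ```python
    import numpy as np

    def weighted_f2(R, T, w):
        """Similarity factor f2 with per-time-point weights w:
        f2 = 50 * log10(100 / sqrt(1 + (1/n) * sum w_t (R_t - T_t)^2))."""
        msd = np.sum(w * (R - T) ** 2) / len(R)   # weighted mean squared difference
        return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

    R = np.array([20, 45, 70, 85], float)   # % dissolved, reference
    T = np.array([18, 40, 68, 84], float)   # % dissolved, test
    cv = np.array([5.0, 4.0, 3.0, 2.0])     # %CV at each time point (illustrative)
    w = (1.0 / cv) / np.mean(1.0 / cv)      # one possible CV-based weighting, normalized
    print(weighted_f2(R, T, w), weighted_f2(R, T, np.ones(4)))
    ```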

  6. Detecting atypical examples of known domain types by sequence similarity searching: the SBASE domain library approach.

    Science.gov (United States)

    Dhir, Somdutta; Pacurar, Mircea; Franklin, Dino; Gáspári, Zoltán; Kertész-Farkas, Attila; Kocsor, András; Eisenhaber, Frank; Pongor, Sándor

    2010-11-01

    SBASE is a project initiated to detect known domain types and to predict domain architectures using sequence similarity searching (Simon et al., Protein Seq Data Anal 5:39-42, 1992; Pongor et al., Nucleic Acids Res 21:3111-3115, 1992). The current approach uses a curated collection of domain sequences - the SBASE domain library - and standard similarity search algorithms, followed by postprocessing based on simple statistics of the domain similarity network (http://hydra.icgeb.trieste.it/sbase/). It is especially useful in detecting rare, atypical examples of known domain types, which are sometimes missed even by more sophisticated methodologies. This approach does not require multiple alignment or machine learning techniques, and can be a useful complement to other domain detection methodologies. This article gives an overview of the project history as well as of the concepts and principles developed within the project.

  7. Fuzzy Similarity and Fuzzy Inclusion Measures in Polyline Matching: A Case Study of Potential Streams Identification for Archaeological Modelling in GIS

    Science.gov (United States)

    Ďuračiová, Renata; Rášová, Alexandra; Lieskovský, Tibor

    2017-12-01

    When combining spatial data from various sources, it is often important to determine the similarity or identity of spatial objects. Besides differences in geometry, representations of spatial objects are inevitably more or less uncertain. Fuzzy set theory can be used both to model the uncertainty of spatial objects and to determine the identity, similarity, and inclusion of two sets as fuzzy identity, fuzzy similarity, and fuzzy inclusion. In this paper, we propose to use fuzzy measures to determine the similarity or identity of two uncertain spatial object representations in geographic information systems. Labelling the spatial objects by the degree of their similarity or inclusion measure makes the process of their identification more efficient and reduces the need for manual control. This leads to a simpler process of updating spatial datasets from external data sources. We use this approach to obtain an accurate and correct representation of historical streams, derived from a contemporary digital elevation model, i.e. we identify the segments that are similar to the streams depicted on historical maps.

  8. Fuzzy Similarity and Fuzzy Inclusion Measures in Polyline Matching: A Case Study of Potential Streams Identification for Archaeological Modelling in GIS

    Directory of Open Access Journals (Sweden)

    Ďuračiová Renata

    2017-12-01

    Full Text Available When combining spatial data from various sources, it is often important to determine the similarity or identity of spatial objects. Besides differences in geometry, representations of spatial objects are inevitably more or less uncertain. Fuzzy set theory can be used both to model the uncertainty of spatial objects and to determine the identity, similarity, and inclusion of two sets as fuzzy identity, fuzzy similarity, and fuzzy inclusion. In this paper, we propose using fuzzy measures to determine the similarity or identity of two uncertain spatial object representations in geographic information systems. Labelling spatial objects by the degree of their similarity or inclusion measure makes the process of their identification more efficient and reduces the need for manual control. This leads to a simpler process of updating spatial datasets from external data sources. We use this approach to obtain an accurate and correct representation of historical streams derived from a contemporary digital elevation model, i.e. we identify the segments that are similar to the streams depicted on historical maps.

  9. Meeting your match: How attractiveness similarity affects approach behavior in mixed-sex dyads

    OpenAIRE

    van Straaten, I.; Engels, R.C.M.E.; Finkenauer, C.; Holland, R.W.

    2009-01-01

    This experimental study investigated approach behavior toward opposite-sex others of similar versus dissimilar physical attractiveness. Furthermore, it tested the moderating effects of sex. Single participants interacted with confederates of high and low attractiveness. Observers rated their behavior in terms of relational investment (i.e., behavioral efforts related to the improvement of interaction fluency, communication of positive interpersonal affect, and positive self-presentation). As ...

  10. A new k-epsilon model consistent with Monin-Obukhov similarity theory

    DEFF Research Database (Denmark)

    van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.

    2017-01-01

    A new k-ε model is introduced that is consistent with Monin–Obukhov similarity theory (MOST). The proposed k-ε model is compared with another k-ε model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-ε model is not consistent with ...

  11. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Science.gov (United States)

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
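
    The conversion model itself is ordinary linear regression between the two congener sums; a minimal sketch with made-up numbers (not data from the study):

        import numpy as np

        # Fit a linear map from the partial (e.g. 119-congener) sum to the full
        # 209-congener sum on samples analyzed by both methods, then apply it
        # to legacy data measured only with the partial method.
        sum119 = np.array([12.1, 45.3, 88.0, 150.2, 220.5])  # ng/g, partial sum
        sum209 = np.array([13.0, 48.9, 94.1, 161.7, 236.0])  # ng/g, full sum

        slope, intercept = np.polyfit(sum119, sum209, 1)     # ordinary least squares
        converted = slope * np.array([60.0]) + intercept     # convert a legacy value
        print(f"Sigma209 ~ {slope:.3f} * Sigma119 + {intercept:.3f};", converted)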

  12. The semantic similarity ensemble

    Directory of Open Access Journals (Sweden)

    Andrea Ballatore

    2013-12-01

    Full Text Available Computational measures of semantic similarity between geographic terms provide valuable support across geographic information retrieval, data mining, and information integration. To date, a wide variety of approaches to geo-semantic similarity have been devised. A judgment of similarity is not intrinsically right or wrong, but obtains a certain degree of cognitive plausibility, depending on how closely it mimics human behavior. Selecting the most appropriate measure for a specific task is therefore a significant challenge. To address this issue, we draw an analogy between computational similarity measures and soliciting domain expert opinions, which incorporate a subjective set of beliefs, perceptions, hypotheses, and epistemic biases. Following this analogy, we define the semantic similarity ensemble (SSE) as a composition of different similarity measures, acting as a panel of experts that has to reach a decision on the semantic similarity of a set of geographic terms. The approach is evaluated against human judgments, and the results indicate that an SSE performs better than the average of its parts. Although the best member tends to outperform the ensemble, every ensemble outperforms the average performance of its members. Hence, in contexts where the best measure is unknown, the ensemble provides a more cognitively plausible approach.
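
    A minimal sketch of the ensemble idea, assuming a simple combination rule (per-measure rescaling to [0, 1] followed by a weighted mean); the paper's aggregation may differ.

        import numpy as np

        def ensemble_similarity(scores, weights=None):
            # scores: array of shape (n_measures, n_pairs); each row is one
            # "expert" scoring the same term pairs. Rescale each measure so no
            # single expert dominates, then average.
            s = np.asarray(scores, float)
            lo, hi = s.min(axis=1, keepdims=True), s.max(axis=1, keepdims=True)
            s = (s - lo) / np.where(hi > lo, hi - lo, 1.0)
            return np.average(s, axis=0, weights=weights)

        measures = [[0.9, 0.2, 0.5],   # e.g. a path-based measure
                    [0.8, 0.1, 0.7],   # e.g. an information-content measure
                    [0.7, 0.3, 0.6]]   # e.g. a vector-space measure
        print(ensemble_similarity(measures))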

  13. A Similarity-Based Approach for Audiovisual Document Classification Using Temporal Relation Analysis

    Directory of Open Access Journals (Sweden)

    Ferrane Isabelle

    2011-01-01

    Full Text Available Abstract We propose a novel approach for video classification that is based on the analysis of the temporal relationships between the basic events in audiovisual documents. Starting from basic segmentation results, we define a new representation method called the Temporal Relation Matrix (TRM). Each document is then described by a set of TRMs, the analysis of which makes higher-level events stand out. This representation was first designed to analyze any audiovisual document in order to find events that may well characterize its content and structure. The aim of this work is to use this representation to compute a similarity measure between two documents. Approaches for audiovisual document classification are presented and discussed. Experiments were performed on a set of 242 video documents, and the results show the efficiency of our proposals.
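
    As an illustration of the representation, the hedged sketch below builds a TRM-like structure by counting simple temporal relations (before, during, overlaps) between every ordered pair of labelled segments; the paper's actual relation set and matrix layout are not specified here.

        from collections import defaultdict

        def temporal_relation(a, b):
            # a, b: (label, start, end) segments.
            if a[2] <= b[1]:
                return "before"
            if b[1] <= a[1] and a[2] <= b[2]:
                return "during"
            return "overlaps"

        def build_trm(events):
            trm = defaultdict(lambda: defaultdict(int))
            for ea in events:
                for eb in events:
                    if ea == eb:
                        continue
                    trm[(ea[0], eb[0])][temporal_relation(ea, eb)] += 1
            return {k: dict(v) for k, v in trm.items()}

        segments = [("speech", 0, 5), ("music", 4, 9), ("applause", 9, 12),
                    ("speech", 10, 12)]
        print(build_trm(segments))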

  14. Modeling healthcare authorization and claim submissions using the openEHR dual-model approach

    Directory of Open Access Journals (Sweden)

    Freire Sergio M

    2011-10-01

    Full Text Available Abstract Background The TISS standard is a set of mandatory forms and electronic messages for healthcare authorization and claim submissions among healthcare plans and providers in Brazil. It is not based on formal models as the new generation of health informatics standards suggests. The objective of this paper is to model the TISS in terms of the openEHR archetype-based approach and integrate it into a patient-centered EHR architecture. Methods Three approaches were adopted to model TISS. In the first approach, a set of archetypes was designed using ENTRY subclasses. In the second one, a set of archetypes was designed using exclusively ADMIN_ENTRY and CLUSTERs as their root classes. In the third approach, the openEHR ADMIN_ENTRY is extended with classes designed for authorization and claim submissions, and an ISM_TRANSITION attribute is added to the COMPOSITION class. Another set of archetypes was designed based on this model. For all three approaches, templates were designed to represent the TISS forms. Results The archetypes based on the openEHR RM (Reference Model can represent all TISS data structures. The extended model adds subclasses and an attribute to the COMPOSITION class to represent information on authorization and claim submissions. The archetypes based on all three approaches have similar structures, although rooted in different classes. The extended openEHR RM model is more semantically aligned with the concepts involved in a claim submission, but may disrupt interoperability with other systems and the current tools must be adapted to deal with it. Conclusions Modeling the TISS standard by means of the openEHR approach makes it aligned with ISO recommendations and provides a solid foundation on which the TISS can evolve. Although there are few administrative archetypes available, the openEHR RM is expressive enough to represent the TISS standard. This paper focuses on the TISS but its results may be extended to other billing

  15. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    Science.gov (United States)

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
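
    To make the positive-semidefiniteness requirement concrete, the sketch below forms a linear kernel on random, purely illustrative genotype vectors and checks that the resulting similarity matrix is PSD.

        import numpy as np

        rng = np.random.default_rng(0)
        G = rng.integers(0, 3, size=(10, 100)).astype(float)  # 10 subjects x 100 SNPs
        G -= G.mean(axis=0)                                   # center each marker

        K = G @ G.T / G.shape[1]   # linear (genetic-relationship-style) kernel
        eigvals = np.linalg.eigvalsh(K)
        print("similarity matrix PSD:", bool(eigvals.min() > -1e-10))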

  16. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation

    Science.gov (United States)

    Quan, Lulin; Yang, Zhixin

    2010-05-01

    To address the issues in the area of design customization, this paper describes the specification and application of constrained surface deformation and reports an experimental performance comparison of three prevailing similarity assessment algorithms in the constrained surface deformation domain. Constrained surface deformation is a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design, via measuring the difference level between the deformed new design and the initial sample model and indicating whether the difference level is within the limitation. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape histogram based method, the skeleton based method, and the U system moment based method. We analyze their basic functions and implementation methodologies in detail, and conduct a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as an industrial example for the experiments. The results show that the shape histogram based method achieved the best performance in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, which emphasizes the role of constraints during the assessment. Limited initial experimental results demonstrate that our algorithm outperforms the other three algorithms. A clear direction for future development is also drawn at the end of the paper.

  17. New Genome Similarity Measures based on Conserved Gene Adjacencies.

    Science.gov (United States)

    Doerr, Daniel; Kowada, Luis Antonio B; Araujo, Eloi; Deshpande, Shachi; Dantas, Simone; Moret, Bernard M E; Stoye, Jens

    2017-06-01

    Many important questions in molecular biology, evolution, and biomedicine can be addressed by comparative genomic approaches. One of the basic tasks when comparing genomes is the definition of measures of similarity (or dissimilarity) between two genomes, for example, to elucidate the phylogenetic relationships between species. The power of different genome comparison methods varies with the underlying formal model of a genome. The simplest models impose the strong restriction that each genome under study must contain the same genes, each in exactly one copy. More realistic models allow several copies of a gene in a genome. One speaks of gene families, and comparative genomic methods that allow this kind of input are called gene family-based. The most powerful, but also most complex, models avoid this preprocessing of the input data and instead integrate the family assignment within the comparative analysis. Such methods are called gene family-free. In this article, we study an intermediate approach between family-based and family-free genomic similarity measures. Introducing this simpler model, called gene connections, we focus on the combinatorial aspects of gene family-free genome comparison. While in most cases the computational costs are the same as in the general family-free case, we also find an instance where the gene connections model has lower complexity. Within the gene connections model, we define three variants of genomic similarity measures that have different expressive powers. We give polynomial-time algorithms for two of them, while we show NP-hardness for the third, most powerful one. We also generalize the measures and algorithms to make them more robust against recent local disruptions in gene order. Our theoretical findings are supported by experimental results, proving the applicability and performance of our newly defined similarity measures.

  18. Phoneme Similarity and Confusability

    Science.gov (United States)

    Bailey, T.M.; Hahn, U.

    2005-01-01

    Similarity between component speech sounds influences language processing in numerous ways. Explanation and detailed prediction of linguistic performance consequently requires an understanding of these basic similarities. The research reported in this paper contrasts two broad classes of approach to the issue of phoneme similarity-theoretically…

  19. Information loss method to measure node similarity in networks

    Science.gov (United States)

    Li, Yongli; Luo, Peng; Wu, Chong

    2014-09-01

    Similarity measurement for network nodes has received increasing attention in the field of statistical physics. In this paper, we propose an entropy-based information loss method to measure node similarity. The model is built on the idea that treating two more similar nodes as identical causes less information loss. The proposed method has relatively low algorithmic complexity, making it less time-consuming and more efficient for large-scale real-world networks. To demonstrate its availability and accuracy, the new approach was compared with other selected approaches on two artificial examples and on synthetic networks. Furthermore, the proposed method is successfully applied to predict network evolution and to predict the attributes of unknown nodes in two application examples.
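
    The paper's exact formula is not reproduced in the abstract; the hedged sketch below illustrates the information-loss idea by describing each node through its normalized adjacency row and taking 1 minus the Jensen-Shannon divergence (the information lost by merging the two descriptions) as the similarity.

        import numpy as np
        import networkx as nx

        def js_divergence(p, q):
            # Jensen-Shannon divergence with log base 2, bounded in [0, 1].
            m = 0.5 * (p + q)
            def kl(a, b):
                mask = a > 0
                return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
            return 0.5 * kl(p, m) + 0.5 * kl(q, m)

        def node_similarity(graph, u, v):
            nodes = list(graph.nodes)
            a = np.array([1.0 if graph.has_edge(u, n) else 0.0 for n in nodes])
            b = np.array([1.0 if graph.has_edge(v, n) else 0.0 for n in nodes])
            p, q = a / a.sum(), b / b.sum()
            return 1.0 - js_divergence(p, q)  # less loss => more similar

        g = nx.karate_club_graph()
        print(node_similarity(g, 0, 1), node_similarity(g, 0, 33))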

  20. A self-similar magnetohydrodynamic model for ball lightnings

    International Nuclear Information System (INIS)

    Tsui, K. H.

    2006-01-01

    Ball lightning is modeled by magnetohydrodynamic (MHD) equations in two-dimensional spherical geometry with azimuthal symmetry. Dynamic evolutions in the radial direction are described by the self-similar evolution function y(t). The plasma pressure, mass density, and magnetic fields are solved in terms of the radial label η. This model gives spherical MHD plasmoids with an axisymmetric force-free magnetic field and spherically symmetric plasma pressure and mass density, which self-consistently determine the polytropic index γ. The spatially oscillating nature of the radial and meridional field structures indicates embedded regions of closed field lines. These regions are named secondary plasmoids, whereas the overall self-similar spherical structure is named the primary plasmoid. According to this model, the time evolution function allows the primary plasmoid to expand outward in two modes. The corresponding ejection of the embedded secondary plasmoids results in ball lightning, offering an answer as to how they come into being. The first is an accelerated expanding mode. This mode appears to fit plasmoids ejected from thundercloud tops with acceleration toward the ionosphere, as seen in high-altitude atmospheric observations of sprites and blue jets. It also appears to account for midair high-speed ball lightning overtaking airplanes, and for ground-level high-speed energetic ball lightning. The second is a decelerated expanding mode, which appears to be compatible with slowly moving ball lightning seen near ground level. The inverse of this second mode corresponds to an accelerated inward collapse, which could bring ball lightning to an end, sometimes with a cracking sound.

  1. A low-cost approach to electronic excitation energies based on the driven similarity renormalization group

    Science.gov (United States)

    Li, Chenyang; Verma, Prakash; Hannon, Kevin P.; Evangelista, Francesco A.

    2017-08-01

    We propose an economical state-specific approach to evaluate electronic excitation energies based on the driven similarity renormalization group truncated to second order (DSRG-PT2). Starting from a closed-shell Hartree-Fock wave function, a model space is constructed that includes all single or single and double excitations within a given set of active orbitals. The resulting VCIS-DSRG-PT2 and VCISD-DSRG-PT2 methods are introduced and benchmarked on a set of 28 organic molecules [M. Schreiber et al., J. Chem. Phys. 128, 134110 (2008)]. Taking CC3 results as reference values, mean absolute deviations of 0.32 and 0.22 eV are observed for VCIS-DSRG-PT2 and VCISD-DSRG-PT2 excitation energies, respectively. Overall, VCIS-DSRG-PT2 yields results with accuracy comparable to those from time-dependent density functional theory using the B3LYP functional, while VCISD-DSRG-PT2 gives excitation energies comparable to those from equation-of-motion coupled cluster with singles and doubles.

  2. A Self-Organizing Map-Based Approach to Generating Reduced-Size, Statistically Similar Climate Datasets

    Science.gov (United States)

    Cabell, R.; Delle Monache, L.; Alessandrini, S.; Rodriguez, L.

    2015-12-01

    Climate-based studies require large amounts of data in order to produce accurate and reliable results. Many of these studies have used 30-plus year data sets in order to produce stable and high-quality results, and as a result, many such data sets are available, generally in the form of global reanalyses. While the analysis of these data leads to high-fidelity results, their processing can be very computationally expensive. This computational burden prevents the utilization of these data sets for certain applications, e.g., when a rapid response is needed in crisis management and disaster planning scenarios resulting from a release of toxic material into the atmosphere. We have developed a methodology to reduce large climate datasets to more manageable sizes while retaining statistically similar results when used to produce ensembles of possible outcomes. We do this by employing a Self-Organizing Map (SOM) algorithm to analyze general patterns of meteorological fields over a regional domain of interest and produce a small set of "typical days" with which to generate the model ensemble. The SOM algorithm takes as input a set of vectors and generates a 2D map of representative vectors deemed most similar to the input set and to each other. Input predictors are selected that are correlated with the model output, which in our case is an Atmospheric Transport and Dispersion (T&D) model that is highly dependent on surface winds and boundary-layer depth. To choose a subset of "typical days," each input day is assigned to its closest SOM map node vector and then ranked by distance. Each node vector is treated as a distribution and days are sampled from them by percentile. Using a 30-node SOM, with sampling at every 20th percentile, we have been able to reduce 30 years of Climate Forecast System Reanalysis (CFSR) data for the month of October to 150 "typical days." To estimate the skill of this approach, the "Measure of Effectiveness" (MOE) metric is used to compare area and overlap
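
    A minimal hand-rolled SOM sketch of the "typical days" selection (30-node map, sampling every 20th percentile of distance within each node, matching the abstract's 30 x 5 = 150 days); the data sizes and random fields are illustrative only, not CFSR data.

        import numpy as np

        rng = np.random.default_rng(1)
        days = rng.normal(size=(900, 50))        # 900 days x 50 flattened features

        rows, cols, iters = 5, 6, 5000           # 30-node map, as in the abstract
        grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
        W = rng.normal(size=(rows * cols, days.shape[1]))

        for t in range(iters):
            x = days[rng.integers(len(days))]
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
            lr = 0.5 * np.exp(-t / iters)                 # decaying learning rate
            sigma = 2.0 * np.exp(-t / iters)              # decaying neighborhood
            h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)                # pull neighbors toward x

        # Assign each day to its node, rank by distance, sample every 20th percentile.
        assign = np.argmin(((days[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
        typical = []
        for node in range(rows * cols):
            members = np.where(assign == node)[0]
            if members.size:
                d = np.linalg.norm(days[members] - W[node], axis=1)
                order = members[np.argsort(d)]
                typical.extend(order[(len(order) - 1) * np.arange(0, 100, 20) // 100])
        print(len(set(typical)), "typical days selected")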

  3. Levy Stable Processes. From Stationary to Self-Similar Dynamics and Back. An Application to Finance

    International Nuclear Information System (INIS)

    Burnecki, K.; Weron, A.

    2004-01-01

    We employ an ergodic theory argument to demonstrate the foundations of ubiquity of Levy stable self-similar processes in physics and present a class of models for anomalous and nonextensive diffusion. A relationship between stationary and self-similar models is clarified. The presented stochastic integral description of all Levy stable processes could provide new insights into the mechanism underlying a range of self-similar natural phenomena. Finally, this effect is illustrated by self-similar approach to financial modelling. (author)

  4. Models for discrete-time self-similar vector processes with application to network traffic

    Science.gov (United States)

    Lee, Seungsin; Rao, Raghuveer M.; Narasimha, Rajesh

    2003-07-01

    The paper defines self-similarity for vector processes by employing the discrete-time continuous-dilation operation which has successfully been used previously by the authors to define 1-D discrete-time stochastic self-similar processes. To define self-similarity of vector processes, it is required to consider the cross-correlation functions between different 1-D processes as well as the autocorrelation function of each constituent 1-D process in it. System models to synthesize self-similar vector processes are constructed based on the definition. With these systems, it is possible to generate self-similar vector processes from white noise inputs. An important aspect of the proposed models is that they can be used to synthesize various types of self-similar vector processes by choosing proper parameters. Additionally, the paper presents evidence of vector self-similarity in two-channel wireless LAN data and applies the aforementioned systems to simulate the corresponding network traffic traces.

  5. New similarity of triangular fuzzy number and its application.

    Science.gov (United States)

    Zhang, Xixiang; Ma, Weimin; Chen, Liping

    2014-01-01

    The similarity of triangular fuzzy numbers is an important metric for their application. Several approaches exist to measure the similarity of triangular fuzzy numbers; however, some of them tend to produce values that are too large. To make the similarity well distributed, a new method, SIAM (Shape's Indifferent Area and Midpoint), is put forward, which takes the shape's indifferent area and the midpoints of two triangular fuzzy numbers into consideration. Comparison with other similarity measurements shows the effectiveness of the proposed method. It is then applied to collaborative filtering recommendation to measure users' similarity. A collaborative filtering case is used to illustrate users' similarity based on the cloud model and on triangular fuzzy numbers; the results indicate that users' similarity based on triangular fuzzy numbers achieves better discrimination. Finally, a simulated collaborative filtering recommendation system is developed which uses the cloud model and triangular fuzzy numbers to express users' comprehensive evaluations of items, and the results show that the accuracy of collaborative filtering recommendation based on triangular fuzzy numbers is higher.
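
    The exact SIAM formula is not given in the abstract; the sketch below combines, with assumed equal weights, an area-overlap term and a midpoint-distance term for two triangular fuzzy numbers (a, b, c), taking the peak b as the midpoint.

        import numpy as np

        def tfn_membership(x, a, b, c):
            # Triangular membership function for the fuzzy number (a, b, c).
            return np.where(x <= b,
                            np.clip((x - a) / max(b - a, 1e-12), 0, 1),
                            np.clip((c - x) / max(c - b, 1e-12), 0, 1))

        def tfn_similarity(t1, t2):
            lo, hi = min(t1[0], t2[0]), max(t1[2], t2[2])
            grid = np.linspace(lo, hi, 2001)
            m1, m2 = tfn_membership(grid, *t1), tfn_membership(grid, *t2)
            inter = np.trapz(np.minimum(m1, m2), grid)   # shared (non-indifferent) area
            union = np.trapz(np.maximum(m1, m2), grid)
            area_part = inter / union if union > 0 else 1.0
            span = hi - lo
            mid_part = 1.0 - abs(t1[1] - t2[1]) / span if span > 0 else 1.0
            return 0.5 * (area_part + mid_part)          # equal weights, by assumption

        print(tfn_similarity((1, 2, 3), (1.5, 2.5, 3.5)))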

  6. A generalized approach for historical mock-up acquisition and data modelling: Towards historically enriched 3D city models

    Science.gov (United States)

    Hervy, B.; Billen, R.; Laroche, F.; Carré, C.; Servières, M.; Van Ruymbeke, M.; Tourre, V.; Delfosse, V.; Kerouanton, J.-L.

    2012-10-01

    Museums are filled with hidden secrets. One of those secrets lies behind historical mock-ups, whose significance goes far beyond a simple representation of a city. We face the challenge of designing, storing and showing knowledge related to these mock-ups in order to explain their historical value. Over the last few years, several mock-up digitalisation projects have been realised. Two of them, Nantes 1900 and Virtual Leodium, propose innovative approaches that present a lot of similarities. This paper presents a framework to go one step further by analysing their data modelling processes and extracting what could be a generalized approach to build a numerical mock-up and its associated knowledge database. Geometry modelling and knowledge modelling influence each other and are conducted in a parallel process. Our generalized approach describes a global overview of what a data modelling process can be. Our next goal is obviously to apply this global approach to other historical mock-ups, but we are also considering applying it to other 3D objects that need to embed semantic data, thereby approaching historically enriched 3D city models.

  7. Experimental Study of Dowel Bar Alternatives Based on Similarity Model Test

    Directory of Open Access Journals (Sweden)

    Chichun Hu

    2017-01-01

    Full Text Available In this study, a small-scale accelerated loading test based on similarity theory and an Accelerated Pavement Analyzer was developed to evaluate dowel bars with different materials and cross-sections. A jointed concrete specimen consisting of one dowel was designed as a scaled model for the test, and each specimen was subjected to 864 thousand loading cycles. Deflections between jointed slabs were measured with dial indicators, and strains of the dowel bars were monitored with strain gauges. The load transfer efficiency, differential deflection, and dowel-concrete bearing stress for each case were calculated from these measurements. The test results indicated that the effect of the dowel modulus on load transfer efficiency can be characterized by the similarity model test developed in the study. Moreover, the round steel dowel was found to perform similarly to a larger FRP dowel, and elliptical dowels can be preferentially considered in practice.

  8. Understanding similarity of groundwater systems with empirical copulas

    Science.gov (United States)

    Haaf, Ezra; Kumar, Rohini; Samaniego, Luis; Barthel, Roland

    2016-04-01

    Within the classification framework for groundwater systems that aims at identifying similarity of hydrogeological systems and transferring information from a well-observed to an ungauged system (Haaf and Barthel, 2015; Haaf and Barthel, 2016), we propose a copula-based method for describing groundwater-system similarity. Copulas are an emerging method in the hydrological sciences that make it possible to model the dependence structure of two groundwater-level time series independently of the effects of their marginal distributions. This study builds on Samaniego et al. (2010), which described an approach for calculating dissimilarity measures from bivariate empirical copula densities of streamflow time series; streamflow is subsequently predicted in ungauged basins by transferring properties from similar catchments. The proposed approach is innovative because copula-based similarity has not yet been applied to groundwater systems. Here we estimate the pairwise dependence structure of 600 wells in Southern Germany using 10 years of weekly groundwater-level observations. Based on these empirical copulas, dissimilarity measures are estimated, such as the copula's lower- and upper-corner cumulated probability and copula-based Spearman's rank correlation, as proposed by Samaniego et al. (2010). For the characterization of groundwater systems, copula-based metrics are compared with dissimilarities obtained from precipitation signals corresponding to the presumed area of influence of each groundwater well. This promising approach provides a new tool for advancing similarity-based classification of groundwater system dynamics. Haaf, E., Barthel, R., 2015. Methods for assessing hydrogeological similarity and for classification of groundwater systems on the regional scale, EGU General Assembly 2015, Vienna, Austria. Haaf, E., Barthel, R., 2016. An approach for classification of hydrogeological systems at the regional scale based on groundwater hydrographs EGU General Assembly
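
    A minimal sketch of the copula-based descriptors named above: rank-transform two level series to pseudo-observations, then read off the lower/upper-corner probabilities of the empirical copula and Spearman's rank correlation (the corner threshold is an assumption).

        import numpy as np
        from scipy.stats import spearmanr, rankdata

        def empirical_copula_stats(x, y, corner=0.2):
            u = rankdata(x) / (len(x) + 1.0)   # pseudo-observations in (0, 1)
            v = rankdata(y) / (len(y) + 1.0)
            lower = np.mean((u <= corner) & (v <= corner))          # lower-corner mass
            upper = np.mean((u >= 1 - corner) & (v >= 1 - corner))  # upper-corner mass
            rho, _ = spearmanr(x, y)
            return lower, upper, rho

        rng = np.random.default_rng(2)
        x = rng.normal(size=520)               # ~10 years of weekly levels (toy data)
        y = 0.7 * x + 0.3 * rng.normal(size=520)
        print(empirical_copula_stats(x, y))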

  9. MAC/FAC: A Model of Similarity-Based Retrieval

    Science.gov (United States)

    1994-10-01

    MAC/FAC: A Model of Similarity-Based Retrieval. Kenneth D. Forbus, Dedre Gentner, Keith Law. Technical Report #59, The Institute for the Learning Sciences, Northwestern University, October 1994. (Established in 1989 with the support of Andersen Consulting.)

  10. Similarity Assessment of Land Surface Model Outputs in the North American Land Data Assimilation System

    Science.gov (United States)

    Kumar, Sujay V.; Wang, Shugong; Mocko, David M.; Peters-Lidard, Christa D.; Xia, Youlong

    2017-11-01

    Multimodel ensembles are often used to produce ensemble mean estimates that tend to have increased simulation skill over any individual model output. If the multimodel outputs are too similar, an individual LSM adds little additional information to the multimodel ensemble, whereas if the models are too dissimilar, it may be indicative of systematic errors in their formulations or configurations. The article presents a formal similarity assessment of the North American Land Data Assimilation System (NLDAS) multimodel ensemble outputs to assess their utility to the ensemble, using a confirmatory factor analysis. Outputs from four NLDAS Phase 2 models currently running in operations at NOAA/NCEP and four new/upgraded models that are under consideration for the next phase of NLDAS are employed in this study. The results show that the runoff estimates from the LSMs were the most dissimilar, whereas the models showed greater similarity for root-zone soil moisture, snow water equivalent, and terrestrial water storage. Generally, the NLDAS operational models showed weaker association with the common factor of the ensemble and the newer versions of the LSMs showed stronger association with the common factor, with model similarity increasing at longer time scales. Trade-offs between the similarity metrics and accuracy measures indicated that the NLDAS operational models span a larger range of the similarity-accuracy space than the new LSMs. The results of the article indicate that simultaneous consideration of model similarity and accuracy at the relevant time scales is necessary in the development of multimodel ensembles.

  11. Similarity transformed coupled cluster response (ST-CCR) theory--a time-dependent similarity transformed equation-of-motion coupled cluster (STEOM-CC) approach.

    Science.gov (United States)

    Landau, Arie

    2013-07-07

    This paper presents a new method for calculating spectroscopic properties in the framework of response theory utilizing a sequence of similarity transformations (STs). The STs are performed using the coupled cluster (CC) and Fock-space coupled cluster operators. The linear and quadratic response functions of the new similarity transformed CC response (ST-CCR) method are derived. The poles of the linear response yield excitation-energy (EE) expressions identical to the ones in the similarity transformed equation-of-motion coupled cluster (STEOM-CC) approach. ST-CCR and STEOM-CC complement each other, in analogy to the complementarity of CC response (CCR) and equation-of-motion coupled cluster (EOM-CC). ST-CCR/STEOM-CC and CCR/EOM-CC yield size-extensive and size-intensive EEs, respectively. Other electronic properties, e.g., transition dipole strengths, are also size-extensive within ST-CCR, in contrast to STEOM-CC. Moreover, analysis suggests that in comparison with CCR, the ST-CCR expressions may be confined to a smaller subspace; however, the precise scope of the truncation can only be determined numerically. In addition, a reformulation of the time-independent STEOM-CC using the same parameterization as in ST-CCR, as well as an efficient truncation scheme, is presented. The shown convergence of the time-dependent and time-independent expressions displays the completeness of the presented formalism.

  12. Improved personalized recommendation based on a similarity network

    Science.gov (United States)

    Wang, Ximeng; Liu, Yun; Xiong, Fei

    2016-08-01

    A recommender system helps individual users find preferred items rapidly and has attracted extensive attention in recent years. Many successful recommendation algorithms are designed on bipartite networks, such as network-based inference and heat conduction. However, most of these algorithms allocate resources uniformly. This is not reasonable, because uniform allocation reflects neither users' choice preferences nor the influence between users, and thus leads to non-personalized recommendation results. We propose a personalized recommendation approach that combines a similarity function with the bipartite network to generate a similarity network that improves the resource-allocation process. Our model introduces user influence into the recommender system, on the premise that user influence can make the resource-allocation process more reasonable. We use four different metrics to evaluate our algorithms on three benchmark data sets. Experimental results show that the improved recommendation on a similarity network can obtain better accuracy and diversity than some competing approaches.
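
    A hedged sketch of the idea: standard network-based inference spreads resource uniformly over the bipartite network; here the user-to-item redistribution step is reweighted by user-user cosine similarity, so more similar users count more (the paper's exact similarity function may differ).

        import numpy as np

        A = np.array([[1, 1, 0, 0],        # users x items (implicit feedback)
                      [1, 0, 1, 0],
                      [0, 1, 1, 1]], float)

        def recommend_scores(A, target):
            k_item = A.sum(axis=0)                       # item degrees
            k_user = A.sum(axis=1)                       # user degrees
            norms = np.linalg.norm(A, axis=1)
            sim = (A @ A.T) / np.outer(norms, norms)     # cosine user similarity

            r_user = (A / k_item) @ A[target]            # items -> users spreading
            weight = sim[target] * r_user / k_user       # similarity-weighted, not uniform
            scores = weight @ A                          # users -> items spreading
            scores[A[target] > 0] = -np.inf              # don't re-recommend owned items
            return scores

        print(recommend_scores(A, target=0))             # candidate items for user 0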

  13. Approach to analysis of inter-regional similarity of investment activity support measures in legislation of regions (on the example of Krasnoyarsk region

    Directory of Open Access Journals (Sweden)

    Valentina F. Lapo

    2017-01-01

    revealed concurrence of dynamics in the use of some stimulation methods in Krasnoyarsk region and in the other regions of the Russian Federation for 2005-2016. Among them are measures of subsidizing, concessionary terms for the use and granting of land plots, concession agreements on property and the creation of industrial parks, state-private partnership, and others. We identified groups of regions showing trends toward harmonization or divergence in the use of particular kinds of stimulation. Thus, the offered approach and set of measuring instruments make it possible to study inter-regional similarity of legal documents. They allow answering the question of the degree of similarity of the systems of investment activity stimulation adopted in regions; estimating the joint dynamics of changes in stimulation systems; determining the degree of concurrence on separate support measures; and analyzing the similarity of the current condition and of processes of legislative change. The approach permits investigating directions of harmonization or divergence of regional approaches to investment activity stimulation. The obtained results can form a base for further economic-statistical and econometric research on the efficiency of methods for stimulating investment activity. They will allow structuring the objects of research and allocating more homogeneous groups of regions by stimulation method. The proximity coefficient matrix will be especially useful in spatial econometric models. The offered approach and parameters can be applied to study other provisions of regional legislation.

  14. Model-observer similarity, error modeling and social learning in rhesus macaques.

    Directory of Open Access Journals (Sweden)

    Elisabetta Monfardini

    Full Text Available Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended on the contrary to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, as preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.

  15. Software sensors based on the grey-box modelling approach

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.; Strube, Rune

    1996-01-01

    In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on-line measurements ... a grey-box model for the specific dynamics is identified. Similarly, an on-line software sensor for detecting the occurrence of backwater phenomena can be developed by comparing the dynamics of a flow measurement with a nearby level measurement. For treatment plants it is found that grey-box models applied to on-line measurements ... With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey...

  16. Analysis of metal forming processes by using physical modeling and new plastic similarity condition

    International Nuclear Information System (INIS)

    Gronostajski, Z.; Hawryluk, M.

    2007-01-01

    In recent years many advances have been made in numerical methods for linear and non-linear problems. However, their success depends very much on the correctness of the problem formulation and the availability of the input data. The validity of theoretical results can be verified by experiments using real or soft materials. An essential reduction of the time and cost of the experiment can be obtained by using soft materials, which behave in a way analogous to that of real metal during deformation. The advantages of using soft materials are closely connected with a flow stress 500 to 1000 times lower than that of real materials. The accuracy of physical modeling depends on the similarity conditions between the physical model and the real process. The most important similarity conditions are material similarity in the range of plastic and elastic deformation, and geometrical, frictional and thermal similarities. A new, original plastic similarity condition for the physical modeling of metal forming processes is proposed in the paper. It is based on a mathematical description of the similarity of the flow stress curves of soft materials and real ones.

  17. Mapping behavioral landscapes for animal movement: a finite mixture modeling approach

    Science.gov (United States)

    Tracey, Jeff A.; Zhu, Jun; Boydston, Erin E.; Lyren, Lisa M.; Fisher, Robert N.; Crooks, Kevin R.

    2013-01-01

    Because of its role in many ecological processes, movement of animals in response to landscape features is an important subject in ecology and conservation biology. In this paper, we develop models of animal movement in relation to objects or fields in a landscape. We take a finite mixture modeling approach in which the component densities are conceptually related to different choices for movement in response to a landscape feature, and the mixing proportions are related to the probability of selecting each response as a function of one or more covariates. We combine particle swarm optimization and an Expectation-Maximization (EM) algorithm to obtain maximum likelihood estimates of the model parameters. We use this approach to analyze data for movement of three bobcats in relation to urban areas in southern California, USA. A behavioral interpretation of the models revealed similarities and differences in bobcat movement response to urbanization. All three bobcats avoided urbanization by moving either parallel to urban boundaries or toward less urban areas as the proportion of urban land cover in the surrounding area increased. However, one bobcat, a male with a dispersal-like large-scale movement pattern, avoided urbanization at lower densities and responded strictly by moving parallel to the urban edge. The other two bobcats, which were both residents and occupied similar geographic areas, avoided urban areas using a combination of movements parallel to the urban edge and movement toward areas of less urbanization. However, the resident female appeared to exhibit greater repulsion at lower levels of urbanization than the resident male, consistent with empirical observations of bobcats in southern California. Using the parameterized finite mixture models, we mapped behavioral states to geographic space, creating a representation of a behavioral landscape. This approach can provide guidance for conservation planning based on analysis of animal movement data using
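
    The full movement model is beyond a short example, but the finite-mixture machinery can be illustrated with EM for a two-component 1D Gaussian mixture, standing in for the movement-response components; the paper additionally seeds the likelihood search with particle swarm optimization, which is omitted here.

        import numpy as np

        rng = np.random.default_rng(5)
        x = np.concatenate([rng.normal(0.0, 0.3, 300),   # "keep heading" response
                            rng.normal(2.5, 0.5, 200)])  # "turn away" response

        pi, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
        for _ in range(100):
            # E-step: responsibilities of each component for each observation.
            dens = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
            r = pi * dens
            r /= r.sum(axis=1, keepdims=True)
            # M-step: update mixing proportions, means, standard deviations.
            nk = r.sum(axis=0)
            pi = nk / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

        print(np.round(pi, 2), np.round(mu, 2), np.round(sd, 2))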

  18. A novel approach for runoff modelling in ungauged catchments by Catchment Morphing

    Science.gov (United States)

    Zhang, J.; Han, D.

    2017-12-01

    Runoff prediction in ungauged catchments has been one of the major challenges of the past decades. However, due to the tremendous heterogeneity of hydrological catchments, obstacles exist in deducing model parameters for ungauged catchments from gauged ones. We propose a novel approach to predict ungauged runoff with Catchment Morphing (CM) using a fully distributed model. CM is defined as changing the catchment characteristics (area and slope here) of a baseline model built with a gauged catchment in order to model the ungauged ones. The advantages of CM are: (a) less demand for similarity between the baseline catchment and the ungauged catchment, (b) less demand for available data, and (c) potential applicability to varied catchments. A case study on seven catchments in the UK has been used to demonstrate the proposed scheme. To comprehensively examine the CM approach, distributed rainfall inputs are utilised in the model, and fractal landscapes are used to morph the land surface from the baseline model to the target model. The preliminary results demonstrate the feasibility of the approach, which is promising for runoff simulation in ungauged catchments. Clearly, more work beyond this pilot study is needed to explore and develop this new approach further to maturity by the hydrological community.

  19. Similarity-Based Unification: A Multi-Adjoint Approach

    Czech Academy of Sciences Publication Activity Database

    Medina, J.; Ojeda-Aciego, M.; Vojtáš, Peter

    2004-01-01

    Roč. 146, č. 1 (2004), s. 43-62 ISSN 0165-0114 Source of funding: V - iné verejné zdroje Keywords : similarity * fuzzy unification Subject RIV: BA - General Mathematics Impact factor: 0.734, year: 2004

  20. A grammar-based semantic similarity algorithm for natural language sentences.

    Science.gov (United States)

    Lee, Ming Che; Chang, Jia Wei; Hsieh, Tung Cheng

    2014-01-01

    This paper presents a grammar- and semantic-corpus-based similarity algorithm for natural language sentences. Natural language, as opposed to an "artificial language" such as a computer programming language, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even ontology-based approaches that extend to include concept similarity comparison instead of co-occurring terms/words, may fail to determine a good match when there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of a corpus-based ontology and grammatical rules to overcome these problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm yields a significant performance improvement on sentences/short texts with arbitrary syntax and structure.
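
    The grammatical-rule component is omitted here, but the corpus/ontology ingredient can be sketched with WordNet path similarity via NLTK (requires the WordNet data, nltk.download("wordnet")); the aggregation rule below, best-match averaging, is an assumption rather than the paper's exact formula.

        from nltk.corpus import wordnet as wn

        def word_sim(w1, w2):
            # Best WordNet path similarity over all synset pairs; 0 if none.
            best = 0.0
            for s1 in wn.synsets(w1):
                for s2 in wn.synsets(w2):
                    sim = s1.path_similarity(s2)
                    if sim and sim > best:
                        best = sim
            return best

        def sentence_similarity(sent1, sent2):
            t1, t2 = sent1.lower().split(), sent2.lower().split()
            # Average, over words of one sentence, of the best match in the
            # other; symmetrize the two directions.
            def directed(a, b):
                return sum(max(word_sim(w, v) for v in b) for w in a) / len(a)
            return 0.5 * (directed(t1, t2) + directed(t2, t1))

        print(sentence_similarity("a dog chased the cat", "the hound pursued a kitten"))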

  1. MULTI-LEVEL SAMPLING APPROACH FOR CONTINOUS LOSS DETECTION USING ITERATIVE WINDOW AND STATISTICAL MODEL

    OpenAIRE

    Mohd Fo'ad Rohani; Mohd Aizaini Maarof; Ali Selamat; Houssain Kettani

    2010-01-01

    This paper proposes a Multi-Level Sampling (MLS) approach for continuous Loss of Self-Similarity (LoSS) detection using an iterative window. The method defines LoSS based on the Second-Order Self-Similarity (SOSS) statistical model. The Optimization Method (OM) is used to estimate the self-similarity parameter, since it is fast and more accurate than other estimation methods known in the literature. A probability of LoSS detection is introduced to measure continuous LoSS detection performance...
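
    The OM estimator itself is not specified in the abstract; as a stand-in, the sketch below estimates the Hurst parameter H with the classical aggregate-variance method inside iterative windows and flags LoSS when H falls below an illustrative threshold.

        import numpy as np

        def hurst_aggregate_variance(x, scales=(1, 2, 4, 8, 16, 32)):
            x = np.asarray(x, float)
            variances = []
            for m in scales:
                n = len(x) // m
                agg = x[:n * m].reshape(n, m).mean(axis=1)  # non-overlapping block means
                variances.append(agg.var())
            # var(X^(m)) ~ m^(2H-2), so the slope of log-var vs log-m is 2H - 2.
            slope, _ = np.polyfit(np.log(scales), np.log(variances), 1)
            return 1.0 + slope / 2.0

        rng = np.random.default_rng(3)
        traffic = rng.poisson(100, size=4096).astype(float)  # short-range-dependent toy data
        window = 1024
        for start in range(0, len(traffic) - window + 1, window):  # iterative windows
            h = hurst_aggregate_variance(traffic[start:start + window])
            print(f"window {start:5d}: H = {h:.2f}  LoSS: {h < 0.6}")  # threshold is illustrative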

  2. Collaborative Filtering Recommendation Based on Trust Model with Fused Similar Factor

    Directory of Open Access Journals (Sweden)

    Ye Li

    2017-01-01

    Full Text Available Recommender systems are beneficial to e-commerce sites, providing customers with product information and recommendations; recommendation systems are currently widely used in many fields. In an era of information explosion, the key challenge for a recommender system is to extract valid information from a tremendous amount of data and produce high-quality recommendations. However, when facing large amounts of information, the traditional collaborative filtering algorithm usually suffers from a high degree of sparsity, which ultimately leads to low-accuracy recommendations. To tackle this issue, we propose a novel algorithm named Collaborative Filtering Recommendation Based on Trust Model with Fused Similar Factor, which is based on the trust model and is combined with user similarity. The novel algorithm takes into account the degree of interest overlap between two users and outperforms recommendation based on the trust model alone in terms of precision, recall, diversity, and coverage. Additionally, the proposed model can effectively improve the efficiency of the collaborative filtering algorithm and achieve high performance.

  3. Processes of Similarity Judgment

    Science.gov (United States)

    Larkey, Levi B.; Markman, Arthur B.

    2005-01-01

    Similarity underlies fundamental cognitive capabilities such as memory, categorization, decision making, problem solving, and reasoning. Although recent approaches to similarity appreciate the structure of mental representations, they differ in the processes posited to operate over these representations. We present an experiment that…

  4. Unveiling Music Structure Via PLSA Similarity Fusion

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Meng, Anders; Petersen, Kaare Brandt

    2007-01-01

    Nowadays there is an increasing interest in developing methods for building music recommendation systems. In order to get a satisfactory performance from such a system, one needs to incorporate as much information about song similarity as possible; however, how to do so is not obvious. In this p... ... observed similarities can be satisfactorily explained using the latent semantics. Additionally, this approach significantly simplifies the song retrieval phase, leading to a more practical system implementation. The suitability of the PLSA model for representing music structure is studied in a simplified...

  5. Model-free aftershock forecasts constructed from similar sequences in the past

    Science.gov (United States)

    van der Elst, N.; Page, M. T.

    2017-12-01

    The basic premise behind aftershock forecasting is that sequences in the future will be similar to those in the past. Forecast models typically use empirically tuned parametric distributions to approximate past sequences, and project those distributions into the future to make a forecast. While parametric models do a good job of describing average outcomes, they are not explicitly designed to capture the full range of variability between sequences, and can suffer from over-tuning of the parameters. In particular, parametric forecasts may produce a high rate of "surprises" - sequences that land outside the forecast range. Here we present a non-parametric forecast method that cuts out the parametric "middleman" between training data and forecast. The method is based on finding past sequences that are similar to the target sequence, and evaluating their outcomes. We quantify similarity as the Poisson probability that the observed event count in a past sequence reflects the same underlying intensity as the observed event count in the target sequence. Event counts are defined in terms of differential magnitude relative to the mainshock. The forecast is then constructed from the distribution of past sequences outcomes, weighted by their similarity. We compare the similarity forecast with the Reasenberg and Jones (RJ95) method, for a set of 2807 global aftershock sequences of M≥6 mainshocks. We implement a sequence-specific RJ95 forecast using a global average prior and Bayesian updating, but do not propagate epistemic uncertainty. The RJ95 forecast is somewhat more precise than the similarity forecast: 90% of observed sequences fall within a factor of two of the median RJ95 forecast value, whereas the fraction is 85% for the similarity forecast. However, the surprise rate is much higher for the RJ95 forecast; 10% of observed sequences fall in the upper 2.5% of the (Poissonian) forecast range. The surprise rate is less than 3% for the similarity forecast. The similarity
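
    A hedged sketch of the similarity weighting under a simplified reading: weight each past sequence by the Poisson probability of its early count given the target's count as the rate (the original may instead integrate over the underlying rate). Counts and outcomes are made up for illustration.

        import numpy as np
        from scipy.stats import poisson

        target_count = 7                               # target sequence: early event count
        past_counts = np.array([2, 6, 7, 9, 25])       # early counts of archived sequences
        past_outcomes = np.array([3, 11, 14, 16, 80])  # later counts of those sequences

        weights = poisson.pmf(past_counts, mu=target_count)
        weights /= weights.sum()

        # Forecast = weighted empirical distribution of past outcomes; report
        # the weighted median and 95th percentile.
        order = np.argsort(past_outcomes)
        cdf = np.cumsum(weights[order])
        median = past_outcomes[order][np.searchsorted(cdf, 0.5)]
        p95 = past_outcomes[order][np.searchsorted(cdf, 0.95)]
        print(dict(weights=np.round(weights, 3), median=median, p95=p95))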

  6. A Multi-Model Stereo Similarity Function Based on Monogenic Signal Analysis in Poisson Scale Space

    Directory of Open Access Journals (Sweden)

    Jinjun Li

    2011-01-01

    Full Text Available A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD) is proposed to match interest points and estimate the disparity map for stereo images. Local multi-model monogenic image features include the local orientation and instantaneous phase of the gray monogenic signal, the local color phase of the color monogenic signal, and local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which is the extension of the analytic signal to gray-level images using the Dirac operator and Laplace equation, consists of the local amplitude, local orientation, and instantaneous phase of a 2D image signal. The color monogenic signal is the extension of the monogenic signal to color images based on Clifford algebras. The local color phase can be estimated by computing the geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experimental results on synthetic and natural stereo images demonstrate the performance of the proposed approach.

  7. Advanced Models and Algorithms for Self-Similar IP Network Traffic Simulation and Performance Analysis

    Science.gov (United States)

    Radev, Dimitar; Lokshina, Izabella

    2010-11-01

    The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequential generators and fixed-length sequence generators, and efficient algorithms that are used to simulate the self-similar behavior of IP network traffic data, are developed and applied. Numerical examples are provided, and simulation results are obtained and analyzed.

  8. Approaches to long-term conditions management and care for older people: similarities or differences?

    Science.gov (United States)

    Tullett, Michael; Neno, Rebecca

    2008-03-01

    In the past few years, there has been an increased emphasis both on the care of older people and on the management of long-term conditions within the United Kingdom. Currently, the Department of Health and the Scottish Executive identify and manage these two areas as separate entities. The aim of this article is to examine the current approaches to both of these areas of care and to identify commonalities and articulate differences. The population across the world, and particularly within the United Kingdom, is ageing at an unprecedented rate. The number of people with long-term conditions has also risen sharply in recent years. As such, nurses need to be engaged at a strategic level in the design of robust and appropriate services for this increasing population group. A comprehensive literature review on long-term conditions and the care of older people was undertaken in an attempt to identify commonalities and differences in strategic and organizational approaches. A policy analysis was conducted to support the paper and establish links that may inform local service development. We propose service development based on identified needs rather than organizational boundaries, after establishing clear links between health and social care for those with long-term conditions and for the ageing population. Nurse managers need to be aware of the similarities and differences in political and theoretical approaches to the care of older people and the management of long-term conditions. By adopting this view, creativity in service redesign and provision can be fostered and nurtured, as well as achieving a renewed focus on partnership working across organizational boundaries. With the current renewed political focus on health and social care, there is an opportunity in the UK to redefine the structure of care. This paper proposes similarities between caring for older people and for those with long-term conditions, and it is proposed that these encapsulate the wider

  9. Training of tonal similarity ratings in non-musicians: a "rapid learning" approach.

    Science.gov (United States)

    Oechslin, Mathias S; Läge, Damian; Vitouch, Oliver

    2012-01-01

    Although cognitive music psychology has a long tradition of expert-novice comparisons, experimental training studies are rare. Studies on the learning progress of trained novices in hearing harmonic relationships are still largely lacking. This paper presents a simple training concept using the example of tone/triad similarity ratings, demonstrating the gradual progress of non-musicians compared to musical experts: In a feedback-based "rapid learning" paradigm, participants had to decide for single tones and chords whether paired sounds matched each other well. Before and after the training sessions, they provided similarity judgments for a complete set of sound pairs. From these similarity matrices, individual relational sound maps, intended to display mental representations, were calculated by means of non-metric multidimensional scaling (NMDS), and were compared to an expert model through procrustean transformation. Approximately half of the novices showed substantial learning success, with some participants even reaching the level of professional musicians. Results speak for a fundamental ability to quickly train an understanding of harmony, show inter-individual differences in learning success, and demonstrate the suitability of the scaling method used for learning research in music and other domains. Results are discussed in the context of the "giftedness" debate.
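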
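
    The analysis pipeline described here (NMDS of each similarity matrix, then Procrustes alignment against an expert map) can be approximated with off-the-shelf tools. A minimal sketch with random stand-in matrices; the data, dimensionality, and interpretation of the disparity are all hypothetical:

    ```python
    import numpy as np
    from sklearn.manifold import MDS
    from scipy.spatial import procrustes

    def sound_map(similarity, n_dims=2, seed=0):
        """Non-metric MDS: embed sounds so that distances preserve the
        rank order of the dissimilarities (1 - similarity)."""
        nmds = MDS(n_components=n_dims, metric=False,
                   dissimilarity="precomputed", random_state=seed)
        return nmds.fit_transform(1.0 - similarity)

    def random_similarity(n, rng):
        """Hypothetical stand-in for one participant's rating matrix."""
        m = rng.random((n, n))
        s = (m + m.T) / 2.0
        np.fill_diagonal(s, 1.0)
        return s

    rng = np.random.default_rng(1)
    novice_map = sound_map(random_similarity(12, rng))
    expert_map = sound_map(random_similarity(12, rng))

    # Procrustes removes rotation, scale, and translation before comparing
    # maps; the residual disparity indexes how expert-like the novice map is.
    _, _, disparity = procrustes(expert_map, novice_map)
    print(f"disparity after alignment: {disparity:.3f}")
    ```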

  10. Training of tonal similarity ratings in non-musicians: a rapid learning approach

    Directory of Open Access Journals (Sweden)

    Mathias S Oechslin

    2012-05-01

    Full Text Available Although music psychology has a long tradition of expert-novice comparisons, experimental training studies are rare. Studies on the learning progress of trained novices in hearing harmonic relationships are still largely lacking. This paper presents a simple training concept using the example of tone/triad similarity ratings, demonstrating the gradual progress of non-musicians compared to musical experts: In a feedback-based rapid learning paradigm, participants had to decide for single tones and chords whether paired sounds matched each other well. Before and after the training sessions, they provided similarity judgments for a complete set of sound pairs. From these similarity matrices, individual relational sound maps, aiming to map the mental representations, were calculated by means of non-metric multidimensional scaling (NMDS), which were compared to an expert model through procrustean transformation. Approximately half of the novices showed substantial learning success, with some participants even reaching the level of professional musicians. Results speak for a fundamental ability to quickly train an understanding of harmony, show inter-individual differences in learning success, and demonstrate the suitability of the scaling method used for music psychological research. Results are discussed in the context of the giftedness debate.

  11. A Grammar-Based Semantic Similarity Algorithm for Natural Language Sentences

    Science.gov (United States)

    Chang, Jia Wei; Hsieh, Tung Cheng

    2014-01-01

    This paper presents a grammar and semantic corpus based similarity algorithm for natural language sentences. Natural language, in opposition to “artificial language”, such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even the ontology-based approaches that extend to include concept similarity comparison instead of cooccurrence terms/words, may not always determine the perfect matching while there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of corpus-based ontology and grammatical rules to overcome the addressed problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm has a significant performance improvement in sentences/short-texts with arbitrary syntax and structure. PMID:24982952

  12. A Grammar-Based Semantic Similarity Algorithm for Natural Language Sentences

    Directory of Open Access Journals (Sweden)

    Ming Che Lee

    2014-01-01

    Full Text Available This paper presents a grammar and semantic corpus based similarity algorithm for natural language sentences. Natural language, in opposition to “artificial language”, such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even the ontology-based approaches that extend to include concept similarity comparison instead of cooccurrence terms/words, may not always determine the perfect matching while there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of corpus-based ontology and grammatical rules to overcome the addressed problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm has a significant performance improvement in sentences/short-texts with arbitrary syntax and structure.

  13. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    Science.gov (United States)

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.

  14. Benchmarking whole-building energy performance with multi-criteria technique for order preference by similarity to ideal solution using a selective objective-weighting approach

    International Nuclear Information System (INIS)

    Wang, Endong

    2015-01-01

    Highlights: • A TOPSIS based multi-criteria whole-building energy benchmarking is developed. • A selective objective-weighting procedure is used for a cost-accuracy tradeoff. • Results from a real case validated the benefits of the presented approach. - Abstract: This paper develops a robust multi-criteria Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) based building energy efficiency benchmarking approach. The approach is explicitly selective to address multicollinearity trap due to the subjectivity in selecting energy variables by considering cost-accuracy trade-off. It objectively weights the relative importance of individual pertinent efficiency measuring criteria using either multiple linear regression or principal component analysis contingent on meta data quality. Through this approach, building energy performance is comprehensively evaluated and optimized. Simultaneously, the significant challenges associated with conventional single-criterion benchmarking models can be avoided. Together with a clustering algorithm on a three-year panel dataset, the benchmarking case of 324 single-family dwellings demonstrated an improved robustness of the presented multi-criteria benchmarking approach over the conventional single-criterion ones
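
    For reference, the core TOPSIS ranking step is compact. A minimal sketch with hypothetical buildings, criteria, and weights; the paper derives the weights from multiple linear regression or principal component analysis, which is not reproduced here:

    ```python
    import numpy as np

    def topsis(X, weights, benefit):
        """Rank alternatives (rows of X) against criteria (columns).
        benefit[j] is True if larger values of criterion j are better."""
        X = np.asarray(X, dtype=float)
        # Vector-normalise each criterion, then apply the weights.
        V = weights * X / np.linalg.norm(X, axis=0)
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        return d_neg / (d_pos + d_neg)   # closeness to the ideal, in (0, 1)

    # Hypothetical example: 4 dwellings x 3 efficiency criteria
    # (energy use intensity, CO2 per m2, comfort score).
    X = [[120, 35, 0.8], [95, 28, 0.7], [150, 40, 0.9], [110, 30, 0.6]]
    weights = np.array([0.5, 0.3, 0.2])       # e.g. from regression or PCA
    benefit = np.array([False, False, True])  # lower is better for first two
    print(topsis(X, weights, benefit))        # higher score = better benchmark
    ```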

  15. Merging Digital Surface Models Implementing Bayesian Approaches

    Science.gov (United States)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian Approach is deemed preferable when the data obtained from the sensors are limited and additional measurements are difficult or costly to obtain; the problem of the lack of data can then be solved by introducing a priori estimations. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs and some characteristics, such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
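
    The per-cell core of such a merge can be sketched as conjugate Gaussian fusion: each DSM contributes a height and a variance, and the posterior precision is the sum of the prior and data precisions. This is a simplification of the paper's model (the entropy-based roof-smoothness prior is reduced to an optional Gaussian prior); all arrays are hypothetical:

    ```python
    import numpy as np

    def fuse_dsms(heights, variances, prior_mean=None, prior_var=None):
        """Per-cell Gaussian fusion of co-registered DSMs.
        heights, variances: lists of 2-D arrays, one pair per source DSM."""
        precision = np.zeros_like(heights[0])
        weighted = np.zeros_like(heights[0])
        if prior_mean is not None:
            precision += 1.0 / prior_var
            weighted += prior_mean / prior_var
        for z, v in zip(heights, variances):
            precision += 1.0 / v
            weighted += z / v
        return weighted / precision, 1.0 / precision  # posterior mean, variance

    # Hypothetical 3x3 tiles from two sensors with different noise levels.
    z1 = np.full((3, 3), 50.0); v1 = np.full((3, 3), 0.25)  # e.g. WorldView-1
    z2 = np.full((3, 3), 50.6); v2 = np.full((3, 3), 1.00)  # e.g. Pleiades
    merged, merged_var = fuse_dsms([z1, z2], [v1, v2])
    ```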

  16. MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES

    Directory of Open Access Journals (Sweden)

    H. Sadeq

    2016-06-01

    Full Text Available In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades. It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult to obtain many measurements or it would be very costly, thus the problem of the lack of data can be solved by introducing a priori estimations of data. To infer the prior data, it is assumed that the roofs of the buildings are specified as smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West-End of Glasgow containing different kinds of buildings, such as flat roofed and hipped roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was successfully able to improve the quality of the DSMs and improving some characteristics such as the roof surfaces, which consequently led to better representations. In addition to that, the developed model has been compared with the well established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied on DSMs that were derived from satellite imagery, it can be applied to any other sourced DSMs.

  17. Similarly shaped letters evoke similar colors in grapheme-color synesthesia.

    Science.gov (United States)

    Brang, David; Rouw, Romke; Ramachandran, V S; Coulson, Seana

    2011-04-01

    Grapheme-color synesthesia is a neurological condition in which viewing numbers or letters (graphemes) results in the concurrent sensation of color. While the anatomical substrates underlying this experience are well understood, little research to date has investigated factors influencing the particular colors associated with particular graphemes or how synesthesia occurs developmentally. A recent suggestion of such an interaction has been proposed in the cascaded cross-tuning (CCT) model of synesthesia, which posits that in synesthetes connections between grapheme regions and color area V4 participate in a competitive activation process, with synesthetic colors arising during the component-stage of grapheme processing. This model more directly suggests that graphemes sharing similar component features (lines, curves, etc.) should accordingly activate more similar synesthetic colors. To test this proposal, we created and regressed synesthetic color-similarity matrices for each of 52 synesthetes against a letter-confusability matrix, an unbiased measure of visual similarity among graphemes. Results of synesthetes' grapheme-color correspondences indeed revealed that more similarly shaped graphemes corresponded with more similar synesthetic colors, with stronger effects observed in individuals with more intense synesthetic experiences (projector synesthetes). These results support the CCT model of synesthesia, implicate early perceptual mechanisms as driving factors in the elicitation of synesthetic hues, and further highlight the relationship between conceptual and perceptual factors in this phenomenon. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Similarity Measure of Graphs

    Directory of Open Access Journals (Sweden)

    Amine Labriji

    2017-07-01

    Full Text Available Identifying the similarity of graphs is considered a highly recommended research field in the semantic Web, artificial intelligence, shape recognition, and information retrieval. One of the fundamental problems of graph databases is finding the graphs most similar to a graph query. Existing approaches dealing with this problem are usually based on the nodes and arcs of the two graphs, regardless of parental semantic links. For instance, a common connection is not identified as being part of the similarity of two graphs in cases such as two graphs without common concepts, a similarity measure based on the union of two graphs, one based on the notion of the maximum common sub-graph (SCM), or the graph edit distance. This leads to an inadequate situation in the context of information retrieval. To overcome this problem, we suggest a new measure of similarity between graphs, based on the similarity measure of Wu and Palmer. We show that this new measure satisfies the properties of a similarity measure and we apply it to examples. The results show that our measure runs faster than existing approaches. In addition, we compared the relevance of the similarity values obtained; it appears that this new graph measure is advantageous and offers a contribution to solving the problem mentioned above.
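
    The Wu and Palmer measure on which the proposed graph similarity builds can be stated in a few lines: twice the depth of the lowest common subsumer, divided by the sum of the depths of the two concepts. A minimal sketch on a toy taxonomy; the paper's extension to whole graphs is not reproduced:

    ```python
    def wu_palmer(depth, parent, a, b):
        """Wu & Palmer: 2*depth(LCS) / (depth(a) + depth(b)).
        depth: node -> depth (root has depth 1); parent: node -> parent."""
        def ancestors(n):
            chain = [n]
            while n in parent:
                n = parent[n]
                chain.append(n)
            return chain
        anc_a = ancestors(a)          # ordered from a up to the root
        anc_b = set(ancestors(b))
        lcs = next(n for n in anc_a if n in anc_b)  # deepest common ancestor
        return 2.0 * depth[lcs] / (depth[a] + depth[b])

    # Tiny hypothetical concept tree.
    parent = {"cat": "mammal", "dog": "mammal", "mammal": "animal",
              "trout": "fish", "fish": "animal"}
    depth = {"animal": 1, "mammal": 2, "fish": 2,
             "cat": 3, "dog": 3, "trout": 3}
    print(wu_palmer(depth, parent, "cat", "dog"))    # 2*2/(3+3) = 0.667
    print(wu_palmer(depth, parent, "cat", "trout"))  # 2*1/(3+3) = 0.333
    ```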

  19. A similarity score-based two-phase heuristic approach to solve the dynamic cellular facility layout for manufacturing systems

    Science.gov (United States)

    Kumar, Ravi; Singh, Surya Prakash

    2017-11-01

    The dynamic cellular facility layout problem (DCFLP) is a well-known NP-hard problem. It has been estimated that the efficient design of DCFLP reduces the manufacturing cost of products by maintaining the minimum material flow among all machines in all cells, as the material flow contributes around 10-30% of the total product cost. However, being NP-hard, the DCFLP is very difficult to solve optimally in reasonable time. Therefore, this article proposes a novel similarity score-based two-phase heuristic approach to solve the DCFLP, considering multiple products manufactured over multiple time periods in the layout. In the first phase of the proposed heuristic, machine-cell clusters are created based on similarity scores between machines. These are provided as input to the second phase to minimize inter/intracell material handling costs and rearrangement costs over the entire planning period. The solution methodology of the proposed approach is demonstrated. To show the efficiency of the two-phase heuristic approach, 21 instances are generated and solved using the optimization software package LINGO. The results show that the proposed approach can optimally solve the DCFLP in reasonable time.
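
    The phase-1 idea (group machines with high mutual similarity scores into the same cell) can be illustrated with a Jaccard-style score on a machine-part incidence matrix. A rough sketch; the paper's actual scoring and assignment rules are more involved, and the threshold is hypothetical:

    ```python
    import numpy as np

    def machine_similarity(incidence):
        """Jaccard-style similarity between machines i and j:
        parts visiting both / parts visiting at least one."""
        m = incidence.shape[0]
        S = np.eye(m)
        for i in range(m):
            for j in range(i + 1, m):
                both = np.sum(incidence[i] & incidence[j])
                either = np.sum(incidence[i] | incidence[j])
                S[i, j] = S[j, i] = both / either if either else 0.0
        return S

    def greedy_cells(S, threshold=0.5):
        """Greedily merge machines whose similarity to any member of an
        existing cell exceeds the threshold (single-linkage flavour)."""
        cells = []
        for i in range(len(S)):
            for cell in cells:
                if any(S[i, j] >= threshold for j in cell):
                    cell.append(i)
                    break
            else:
                cells.append([i])
        return cells

    # Hypothetical machine x part incidence (1 = part visits machine).
    inc = np.array([[1, 1, 0, 0], [1, 1, 1, 0],
                    [0, 0, 1, 1], [0, 0, 1, 1]], dtype=bool)
    print(greedy_cells(machine_similarity(inc)))   # e.g. [[0, 1], [2, 3]]
    ```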

  20. Hierarchical Matching of Traffic Information Services Using Semantic Similarity

    Directory of Open Access Journals (Sweden)

    Zongtao Duan

    2018-01-01

    Full Text Available Service matching aims to find information similar to a given query, which has numerous applications in web search. Although existing methods yield promising results, they are not applicable to transportation. In this paper, we propose a multilevel matching method based on semantic technology, towards efficiently searching the traffic information requested. Our approach is divided into two stages: service clustering, which prunes candidate services that are not promising, and functional matching. The similarity at function level between services is computed by grouping the connections between the services into inheritance and noninheritance relationships. We also developed a three-layer framework with a semantic similarity measure that requires less time and space than existing methods, since the scale of candidate services is significantly smaller than the whole transportation network. The OWL_TC4 based service set was used to verify the proposed approach. The accuracy of offline service clustering reached 93.80%, and it reduced the response time to 651 ms when the total number of candidate services was 1000. Moreover, given the different thresholds for the semantic similarity measure, the proposed mixed matching model did better in terms of recall and precision (i.e., up to 72.7% and 80%, respectively, for more than 1000 services) than the models based on information theory and taxonomic distance. These experimental results confirmed the effectiveness and validity of service matching for responding quickly and accurately to user queries.

  1. Deep Convolutional Neural Networks Outperform Feature-Based But Not Categorical Models in Explaining Object Similarity Judgments

    Science.gov (United States)

    Jozwik, Kamila M.; Kriegeskorte, Nikolaus; Storrs, Katherine R.; Mur, Marieke

    2017-01-01

    Recent advances in Deep convolutional Neural Networks (DNNs) have enabled unprecedentedly accurate computational models of brain representations, and present an exciting opportunity to model diverse cognitive functions. State-of-the-art DNNs achieve human-level performance on object categorisation, but it is unclear how well they capture human behavior on complex cognitive tasks. Recent reports suggest that DNNs can explain significant variance in one such task, judging object similarity. Here, we extend these findings by replicating them for a rich set of object images, comparing performance across layers within two DNNs of different depths, and examining how the DNNs’ performance compares to that of non-computational “conceptual” models. Human observers performed similarity judgments for a set of 92 images of real-world objects. Representations of the same images were obtained in each of the layers of two DNNs of different depths (8-layer AlexNet and 16-layer VGG-16). To create conceptual models, other human observers generated visual-feature labels (e.g., “eye”) and category labels (e.g., “animal”) for the same image set. Feature labels were divided into parts, colors, textures and contours, while category labels were divided into subordinate, basic, and superordinate categories. We fitted models derived from the features, categories, and from each layer of each DNN to the similarity judgments, using representational similarity analysis to evaluate model performance. In both DNNs, similarity within the last layer explains most of the explainable variance in human similarity judgments. The last layer outperforms almost all feature-based models. Late and mid-level layers outperform some but not all feature-based models. Importantly, categorical models predict similarity judgments significantly better than any DNN layer. Our results provide further evidence for commonalities between DNNs and brain representations. Models derived from visual features
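
    The model-evaluation step used here, representational similarity analysis, reduces to correlating dissimilarity matrices. A minimal sketch with random stand-ins for the layer activations and the human judgments; real RSA pipelines add noise ceilings and significance testing:

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm(activations):
        """Representational dissimilarity matrix, as a condensed vector:
        correlation distance between response patterns of all image pairs."""
        return pdist(activations, metric="correlation")

    # Hypothetical data: 92 images, a DNN layer with 4096 units, and human
    # dissimilarity judgments arranged as a condensed 92*91/2 vector.
    rng = np.random.default_rng(0)
    layer_acts = rng.standard_normal((92, 4096))
    human_rdm = rng.random(92 * 91 // 2)

    # RSA score: rank correlation between the model RDM and the human RDM.
    rho, _ = spearmanr(rdm(layer_acts), human_rdm)
    print(f"RSA (Spearman rho): {rho:.3f}")
    ```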

  2. Self-similar measures in multi-sector endogenous growth models

    International Nuclear Information System (INIS)

    La Torre, Davide; Marsiglio, Simone; Mendivil, Franklin; Privileggi, Fabio

    2015-01-01

    We analyze two types of stochastic discrete time multi-sector endogenous growth models, namely a basic Uzawa–Lucas (1965, 1988) model and an extended three-sector version as in La Torre and Marsiglio (2010). As in the case of sustained growth the optimal dynamics of the state variables are not stationary, we focus on the dynamics of the capital ratio variables, and we show that, through appropriate log-transformations, they can be converted into affine iterated function systems converging to an invariant distribution supported on some (possibly fractal) compact set. This proves that also the steady state of endogenous growth models—i.e., the stochastic balanced growth path equilibrium—might have a fractal nature. We also provide some sufficient conditions under which the associated self-similar measures turn out to be either singular or absolutely continuous (for the three-sector model we only consider the singularity).

  3. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  4. A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2016-06-01

    Full Text Available This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
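
    The quantities defined in the abstract map directly onto nearest-neighbour queries. A sketch with scipy's k-d tree; the sampling of the model surface and the exact distance weighting are simplified assumptions:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def dist_cloud_to_model(cloud_pts, model_pts):
        """DistCM: average distance from each cloud point to its nearest
        sampled model point (the expensive direction in the paper)."""
        d, _ = cKDTree(model_pts).query(cloud_pts)
        return d.mean()

    def dist_model_to_cloud(model_pts, cloud_pts, weights=None):
        """DistMC: weighted mean distance from points sampled on the model
        to their nearest cloud point."""
        d, _ = cKDTree(cloud_pts).query(model_pts)
        return np.average(d, weights=weights)

    def sim_mc(model_area, model_pts, cloud_pts, weights=None):
        """SimMC sketch: ratio of model surface area to DistMC, so larger
        means more similar; the paper's weighting may differ."""
        return model_area / dist_model_to_cloud(model_pts, cloud_pts, weights)

    # Hypothetical unit-square model sampled on a grid vs a noisy scan of it.
    rng = np.random.default_rng(3)
    g = np.linspace(0.0, 1.0, 20)
    model_pts = np.array([(x, y, 0.0) for x in g for y in g])
    cloud_pts = model_pts + rng.normal(0.0, 0.01, model_pts.shape)
    print(sim_mc(1.0, model_pts, cloud_pts))
    ```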

  5. Estimating the surface layer refractive index structure constant over snow and sea ice using Monin-Obukhov similarity theory with a mesoscale atmospheric model.

    Science.gov (United States)

    Qing, Chun; Wu, Xiaoqing; Huang, Honghua; Tian, Qiguo; Zhu, Wenyue; Rao, Ruizhong; Li, Xuebin

    2016-09-05

    Since systematic direct measurements of the refractive index structure constant (Cn2) for many climates and seasons are not available, an indirect approach is developed in which Cn2 is estimated from mesoscale atmospheric model outputs. In previous work, we presented an approach in which a state-of-the-art mesoscale atmospheric model, the Weather Research and Forecasting (WRF) model, is coupled with Monin-Obukhov Similarity (MOS) theory to estimate surface layer Cn2 over the ocean. This paper focuses on surface layer Cn2 over snow and sea ice, extending the WRF-based estimation of surface layer Cn2 to ground-based optical application requirements. This approach is validated against the corresponding 9-day Cn2 data from a field campaign of the 30th Chinese National Antarctic Research Expedition (CHINARE). We employ several statistical operators to assess how this approach performs. In addition, we present an independent analysis of the approach's performance using contingency tables, which provide supplementary key information with respect to the statistical operators. These methods make our analysis more robust and confirm the excellent performance of this approach. Reasonably good agreement in trend and magnitude is found between estimated values and measurements overall, and the estimated Cn2 values are even better than those obtained by this approach over the ocean surface layer. The encouraging performance of this approach supports concrete practical implementation in ground-based optical applications over snow and sea ice.

  6. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  7. LDA-Based Unified Topic Modeling for Similar TV User Grouping and TV Program Recommendation.

    Science.gov (United States)

    Pyo, Shinjee; Kim, Eunhui; Kim, Munchurl

    2015-08-01

    Social TV is a social media service via TV and social networks through which TV users exchange their experiences about TV programs that they are viewing. For social TV service, two technical aspects are envisioned: grouping of similar TV users to create social TV communities and recommending TV programs based on group and personal interests for personalizing TV. In this paper, we propose a unified topic model based on grouping of similar TV users and recommending TV programs as a social TV service. The proposed unified topic model employs two latent Dirichlet allocation (LDA) models. One is a topic model of TV users, and the other is a topic model of the description words for viewed TV programs. The two LDA models are then integrated via a topic proportion parameter for TV programs, which enforces the grouping of similar TV users and associated description words for watched TV programs at the same time in a unified topic modeling framework. The unified model identifies the semantic relation between TV user groups and TV program description word groups so that more meaningful TV program recommendations can be made. The unified topic model also overcomes an item ramp-up problem such that new TV programs can be reliably recommended to TV users. Furthermore, from the topic model of TV users, TV users with similar tastes can be grouped as topics, which can then be recommended as social TV communities. To verify our proposed method of unified topic-modeling-based TV user grouping and TV program recommendation for social TV services, in our experiments, we used real TV viewing history data and electronic program guide data from a seven-month period collected by a TV poll agency. The experimental results show that the proposed unified topic model yields an average 81.4% precision for 50 topics in TV program recommendation and its performance is an average of 6.5% higher than that of the topic model of TV users only. For TV user prediction with new TV programs, the average
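
    A single-LDA slice of the proposed unified model is easy to sketch: treat each user's viewing history as a document and programmes as words, then group users by their dominant topic. The coupling of the two LDA models through a shared topic proportion parameter is not reproduced here, and all data are hypothetical:

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Hypothetical viewing histories: each "document" is the list of
    # programme IDs one user watched; in the paper a second LDA on programme
    # descriptions is coupled to this one through shared topic proportions.
    histories = ["news news drama cooking", "sports sports news",
                 "drama drama movie", "cooking cooking travel",
                 "sports movie"]

    counts = CountVectorizer().fit_transform(histories)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

    # theta[u, k]: how strongly user u belongs to taste group k; grouping
    # users by dominant topic yields candidate social-TV communities.
    theta = lda.transform(counts)
    print(theta.argmax(axis=1))
    ```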

  8. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    Science.gov (United States)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and
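
    The essence of the distribution-function approach can be shown with a closed-form example. A sketch assuming an exponential within-day intensity distribution and a constant infiltration capacity; the paper's cdf family and process descriptions are richer than this:

    ```python
    import numpy as np

    def daily_infiltration_excess(daily_rain_mm, wet_fraction, infil_cap_mm_h):
        """Instead of resolving sub-daily time steps, assume rainfall
        intensity within the wet part of the day is exponentially
        distributed and integrate the infiltration excess max(i - fc, 0)
        analytically over that distribution."""
        wet_hours = 24.0 * wet_fraction
        m = daily_rain_mm / wet_hours              # mean intensity (mm/h)
        # For an exponential cdf, E[max(i - fc, 0)] = m * exp(-fc / m).
        return m * np.exp(-infil_cap_mm_h / m) * wet_hours  # runoff (mm)

    # 40 mm falling over 20% of the day vs a 5 mm/h infiltration capacity:
    print(daily_infiltration_excess(40.0, 0.2, 5.0))   # ~22 mm of excess
    ```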

  9. Analysis and Modeling of Time-Correlated Characteristics of Rainfall-Runoff Similarity in the Upstream Red River Basin

    Directory of Open Access Journals (Sweden)

    Xiuli Sang

    2012-01-01

    Full Text Available We constructed a similarity model (based on the Euclidean distance between rainfall and runoff) to study the time-correlated characteristics of similar rainfall-runoff patterns in the upstream Red River Basin and presented a detailed evaluation of the time correlation of rainfall-runoff similarity. The rainfall-runoff similarity was used to determine the optimum similarity. The results showed that the time-correlated model was capable of predicting the rainfall-runoff similarity in the upstream Red River Basin in a satisfactory way. Both noised and denoised time series, the latter obtained by thresholding the wavelet coefficients, were applied to verify the accuracy of the model. The corresponding optimum similarity sets, obtained as the solution conditions of the model equations, showed an interesting and stable trend. On the whole, the annual mean similarity presented a gradually rising trend, quantitatively reflecting the comprehensive influence of climate change and human activities on rainfall-runoff similarity.

  10. Predictive modeling of human perception subjectivity: feasibility study of mammographic lesion similarity

    Science.gov (United States)

    Xu, Songhua; Hudson, Kathleen; Bradley, Yong; Daley, Brian J.; Frederick-Dyer, Katherine; Tourassi, Georgia

    2012-02-01

    The majority of clinical content-based image retrieval (CBIR) studies disregard human perception subjectivity, aiming to duplicate the consensus expert assessment of the visual similarity on example cases. The purpose of our study is twofold: (i) discern better the extent of human perception subjectivity when assessing the visual similarity of two images with similar semantic content, and (ii) explore the feasibility of personalized predictive modeling of visual similarity. We conducted a human observer study in which five observers of various expertise were shown ninety-nine triplets of mammographic masses with similar BI-RADS descriptors and were asked to select the two masses with the highest visual relevance. Pairwise agreement ranged between poor and fair among the five observers, as assessed by the kappa statistic. The observers' self-consistency rate was remarkably low, based on repeated questions where either the orientation or the presentation order of a mass was changed. Various machine learning algorithms were explored to determine whether they can predict each observer's personalized selection using textural features. Many algorithms performed with accuracy that exceeded each observer's self-consistency rate, as determined using a cross-validation scheme. This accuracy was statistically significantly higher than would be expected by chance alone (two-tailed p-value ranged between 0.001 and 0.01 for all five personalized models). The study confirmed that human perception subjectivity should be taken into account when developing CBIR-based medical applications.

  11. The importance of interlinguistic similarity and stable bilingualism when two languages compete

    International Nuclear Information System (INIS)

    Mira, J; Seoane, L F; Nieto, J J

    2011-01-01

    One approach for analyzing the dynamics of two languages in competition is to fit historical data for the number of speakers of each with a mathematical model in which the parameters are interpreted as the similarity between those languages and their relative status. Within this approach, on the basis of a detailed analysis and extensive calculations, we show the outcomes that can emerge for given values of these parameters. In contrast to previous results, it is possible that in the long term both languages may coexist and survive. This happens only where there is a stable bilingual group, and this is possible only if the competing languages are sufficiently similar, in which case its occurrence is favoured by both similarity and status symmetry.
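
    The model referred to here is the Abrams-Strogatz language-competition model extended with a bilingual group. A sketch of that system as it is commonly written (monolingual fractions x and y, bilinguals b = 1 - x - y, similarity k, status s, volatility a), integrated with scipy; the parameter values are illustrative only:

    ```python
    from scipy.integrate import solve_ivp

    def language_model(t, u, s=0.6, k=0.8, a=1.31, c=1.0):
        """Two competing languages X, Y with a bilingual group B.
        Attraction toward X scales with the fraction able to speak X,
        i.e. 1 - y; similarity k routes part of the flow into bilingualism."""
        x, y = u
        b = 1.0 - x - y
        to_x = c * (1 - k) * s * (1 - y) ** a           # y or b -> x
        to_y = c * (1 - k) * (1 - s) * (1 - x) ** a     # x or b -> y
        to_b_from_y = c * k * s * (1 - y) ** a          # y -> b
        to_b_from_x = c * k * (1 - s) * (1 - x) ** a    # x -> b
        dx = (y + b) * to_x - x * (to_y + to_b_from_x)
        dy = (x + b) * to_y - y * (to_x + to_b_from_y)
        return [dx, dy]

    sol = solve_ivp(language_model, (0.0, 400.0), [0.45, 0.45])
    x_end, y_end = sol.y[:, -1]
    print(f"x={x_end:.2f}, y={y_end:.2f}, b={1 - x_end - y_end:.2f}")
    # With high similarity k, all three groups can stabilise at nonzero size.
    ```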

  12. The importance of interlinguistic similarity and stable bilingualism when two languages compete

    Energy Technology Data Exchange (ETDEWEB)

    Mira, J; Seoane, L F [Departamento de Fisica Aplicada, Universidade de Santiago de Compostela, 15782 Santiago de Compostela (Spain); Nieto, J J, E-mail: jorge.mira@usc.es [Departamento de Analise Matematica and Instituto de Matematicas, Universidade de Santiago de Compostela, 15782 Santiago de Compostela (Spain)

    2011-03-15

    One approach for analyzing the dynamics of two languages in competition is to fit historical data for the number of speakers of each with a mathematical model in which the parameters are interpreted as the similarity between those languages and their relative status. Within this approach, on the basis of a detailed analysis and extensive calculations, we show the outcomes that can emerge for given values of these parameters. In contrast to previous results, it is possible that in the long term both languages may coexist and survive. This happens only where there is a stable bilingual group, and this is possible only if the competing languages are sufficiently similar, in which case its occurrence is favoured by both similarity and status symmetry.

  13. The importance of interlinguistic similarity and stable bilingualism when two languages compete

    Science.gov (United States)

    Mira, J.; Seoane, L. F.; Nieto, J. J.

    2011-03-01

    One approach for analyzing the dynamics of two languages in competition is to fit historical data for the number of speakers of each with a mathematical model in which the parameters are interpreted as the similarity between those languages and their relative status. Within this approach, on the basis of a detailed analysis and extensive calculations, we show the outcomes that can emerge for given values of these parameters. In contrast to previous results, it is possible that in the long term both languages may coexist and survive. This happens only where there is a stable bilingual group, and this is possible only if the competing languages are sufficiently similar, in which case its occurrence is favoured by both similarity and status symmetry.

  14. Assessing semantic similarity of texts - Methods and algorithms

    Science.gov (United States)

    Rozeva, Anna; Zerkova, Silvia

    2017-12-01

    Assessing the semantic similarity of texts is an important part of different text-related applications like educational systems, information retrieval, text summarization, etc. This task is performed by sophisticated analysis, which implements text-mining techniques. Text mining involves several pre-processing steps, which provide for obtaining structured representative model of the documents in a corpus by means of extracting and selecting the features, characterizing their content. Generally the model is vector-based and enables further analysis with knowledge discovery approaches. Algorithms and measures are used for assessing texts at syntactical and semantic level. An important text-mining method and similarity measure is latent semantic analysis (LSA). It provides for reducing the dimensionality of the document vector space and better capturing the text semantics. The mathematical background of LSA for deriving the meaning of the words in a given text by exploring their co-occurrence is examined. The algorithm for obtaining the vector representation of words and their corresponding latent concepts in a reduced multidimensional space as well as similarity calculation are presented.
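
    The LSA pipeline described (term-document matrix, truncated SVD, similarity in the latent concept space) is a few lines with scikit-learn. A minimal sketch on a hypothetical three-document corpus:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    docs = ["the cat sat on the mat",           # hypothetical mini-corpus
            "a cat lay on a rug",
            "stocks fell on the trading floor"]

    # LSA = truncated SVD of the (here TF-IDF weighted) term-document
    # matrix: co-occurring terms collapse onto shared latent concepts.
    X = TfidfVectorizer().fit_transform(docs)
    Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

    # Docs 0 and 1 should end up close despite sharing few surface words.
    print(cosine_similarity(Z))
    ```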

  15. [New approaches in pharmacology: numerical modelling and simulation].

    Science.gov (United States)

    Boissel, Jean-Pierre; Cucherat, Michel; Nony, Patrice; Dronne, Marie-Aimée; Kassaï, Behrouz; Chabaud, Sylvie

    2005-01-01

    The complexity of pathophysiological mechanisms is beyond the capabilities of traditional approaches. Many of the decision-making problems in public health, such as initiating mass screening, are complex. Progress in genomics and proteomics, and the resulting extraordinary increase in knowledge with regard to interactions between gene expression, the environment and behaviour, the customisation of risk factors and the need to combine therapies that individually have minimal though well documented efficacy, has led doctors to raise new questions: how to optimise choice and the application of therapeutic strategies at the individual rather than the group level, while taking into account all the available evidence? This is essentially a problem of complexity with dimensions similar to the previous ones: multiple parameters with nonlinear relationships between them, varying time scales that cannot be ignored etc. Numerical modelling and simulation (in silico investigations) have the potential to meet these challenges. Such approaches are considered in drug innovation and development. They require a multidisciplinary approach, and this will involve modification of the way research in pharmacology is conducted.

  16. Hierarchical Model for the Similarity Measurement of a Complex Holed-Region Entity Scene

    Directory of Open Access Journals (Sweden)

    Zhanlong Chen

    2017-11-01

    Full Text Available Complex multi-holed-region entity scenes (i.e., sets of random regions with holes) are common in spatial database systems, spatial query languages, and the Geographic Information System (GIS). A multi-holed-region (a region with an arbitrary number of holes) is an abstraction of the real world that primarily represents geographic objects that have more than one interior boundary, such as areas that contain several lakes or lakes that contain islands. When the similarity of two complex holed-region entity scenes is measured, the number of regions in the scenes and the number of holes in the regions are usually different between the two scenes, which complicates the matching relationships of holed-regions and holes. The aim of this research is to develop several holed-region similarity metrics and propose a hierarchical model to measure comprehensively the similarity between two complex holed-region entity scenes. The procedure first divides a complex entity scene into three layers: a complex scene, a micro-spatial-scene, and a simple entity (hole). The relationships between the adjacent layers are considered to be sets of relationships, and each level of similarity measurements is nested with the adjacent one. Next, entity matching is performed from top to bottom, while the similarity results are calculated from local to global. In addition, we utilize position graphs to describe the distribution of the holed-regions and subsequently describe the directions between the holes using a feature matrix. A case study that uses the Great Lakes in North America in 1986 and 2015 as experimental data illustrates the entire similarity measurement process between two complex holed-region entity scenes. The experimental results show that the hierarchical model accounts for the relationships of the different layers in the entire complex holed-region entity scene. The model can effectively calculate the similarity of complex holed-region entity scenes, even if the

  17. A novel approach to multihazard modeling and simulation.

    Science.gov (United States)

    Smith, Silas W; Portelli, Ian; Narzisi, Giuseppe; Nelson, Lewis S; Menges, Fabian; Rekow, E Dianne; Mincer, Joshua S; Mishra, Bhubaneswar; Goldfrank, Lewis R

    2009-06-01

    To develop and apply a novel modeling approach to support medical and public health disaster planning and response using a sarin release scenario in a metropolitan environment. An agent-based disaster simulation model was developed incorporating the principles of dose response, surge response, and psychosocial characteristics superimposed on topographically accurate geographic information system architecture. The modeling scenarios involved passive and active releases of sarin in multiple transportation hubs in a metropolitan city. Parameters evaluated included emergency medical services, hospital surge capacity (including implementation of disaster plan), and behavioral and psychosocial characteristics of the victims. In passive sarin release scenarios of 5 to 15 L, mortality increased nonlinearly from 0.13% to 8.69%, reaching 55.4% with active dispersion, reflecting higher initial doses. Cumulative mortality rates from releases in 1 to 3 major transportation hubs similarly increased nonlinearly as a function of dose and systemic stress. The increase in mortality rate was most pronounced in the 80% to 100% emergency department occupancy range, analogous to the previously observed queuing phenomenon. Effective implementation of hospital disaster plans decreased mortality and injury severity. Decreasing ambulance response time and increasing available responding units reduced mortality among potentially salvageable patients. Adverse psychosocial characteristics (excess worry and low compliance) increased demands on health care resources. Transfer to alternative urban sites was possible. An agent-based modeling approach provides a mechanism to assess complex individual and systemwide effects in rare events.

  18. Detecting earthquakes over a seismic network using single-station similarity measures

    Science.gov (United States)

    Bergen, Karianne J.; Beroza, Gregory C.

    2018-06-01

    New blind waveform-similarity-based detection methods, such as Fingerprint and Similarity Thresholding (FAST), have shown promise for detecting weak signals in long-duration, continuous waveform data. While blind detectors are capable of identifying similar or repeating waveforms without templates, they can also be susceptible to false detections due to local correlated noise. In this work, we present a set of three new methods that allow us to extend single-station similarity-based detection over a seismic network; event-pair extraction, pairwise pseudo-association, and event resolution complete a post-processing pipeline that combines single-station similarity measures (e.g. FAST sparse similarity matrix) from each station in a network into a list of candidate events. The core technique, pairwise pseudo-association, leverages the pairwise structure of event detections in its network detection model, which allows it to identify events observed at multiple stations in the network without modeling the expected moveout. Though our approach is general, we apply it to extend FAST over a sparse seismic network. We demonstrate that our network-based extension of FAST is both sensitive and maintains a low false detection rate. As a test case, we apply our approach to 2 weeks of continuous waveform data from five stations during the foreshock sequence prior to the 2014 Mw 8.2 Iquique earthquake. Our method identifies nearly five times as many events as the local seismicity catalogue (including 95 per cent of the catalogue events), and less than 1 per cent of these candidate events are false detections.

  19. Traditional and robust vector selection methods for use with similarity based models

    International Nuclear Information System (INIS)

    Hines, J. W.; Garvey, D. R.

    2006-01-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
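
    Of the methods listed, min-max selection is the simplest to state, and it pairs naturally with an AAKR prediction step. A rough sketch under simplified assumptions (Gaussian kernel, Euclidean distance, unscaled signals); the paper's robust method and the other four selection schemes are not reproduced:

    ```python
    import numpy as np

    def min_max_select(X):
        """Min-max vector selection: keep every training vector that holds
        the minimum or maximum of at least one signal, so the memory
        matrix spans the observed operating range."""
        idx = set(np.argmin(X, axis=0)) | set(np.argmax(X, axis=0))
        return X[sorted(idx)]

    def aakr_predict(memory, query, bandwidth=1.0):
        """Auto-associative kernel regression: the corrected estimate is a
        Gaussian-kernel weighted average of the memory vectors."""
        d2 = np.sum((memory - query) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        return w @ memory / w.sum()

    # Hypothetical plant data: 200 observations of 3 correlated sensors.
    rng = np.random.default_rng(7)
    t = rng.random(200)
    X = np.column_stack([t, 2 * t, t ** 2]) + rng.normal(0, 0.01, (200, 3))

    memory = min_max_select(X)          # prototype vectors only
    print(aakr_predict(memory, X[0]))   # denoised estimate of observation 0
    ```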

  20. Measuring transferring similarity via local information

    Science.gov (United States)

    Yin, Likang; Deng, Yong

    2018-05-01

    Recommender systems have developed along with web science, and how to measure the similarity between users is crucial for collaborative filtering recommendation. Many efficient models have been proposed (e.g., the Pearson coefficient) to measure the direct correlation. However, the direct correlation measures are greatly affected by the sparsity of the dataset. In other words, the direct correlation measures would present an inauthentic similarity if two users have very few commonly selected objects. Transferring similarity overcomes this drawback by considering their common neighbors (i.e., the intermediates). Yet, the transferring similarity also has its drawback since it can only provide an interval of similarity. To break these limitations, we propose the Belief Transferring Similarity (BTS) model. The contributions of the BTS model are: (1) it addresses the issue of dataset sparsity by considering high-order similarity; (2) it transforms an uncertain interval to a certain state based on fuzzy systems theory; (3) it is able to combine the transferring similarity of different intermediates using an information fusion method. Finally, we compare the BTS model with nine different link prediction methods in nine different networks, and we also illustrate the convergence property and efficiency of the BTS model.

  1. Global energy modeling - A biophysical approach

    Energy Technology Data Exchange (ETDEWEB)

    Dale, Michael

    2010-09-15

    This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.

  2. On the scale similarity in large eddy simulation. A proposal of a new model

    International Nuclear Information System (INIS)

    Pasero, E.; Cannata, G.; Gallerano, F.

    2004-01-01

    Among the most common LES models present in the literature are the eddy viscosity-type models. In these models the subgrid scale (SGS) stress tensor is related to the resolved strain rate tensor through a scalar eddy viscosity coefficient. These models are affected by three fundamental drawbacks: they are purely dissipative, i.e. they cannot account for backscatter; they assume that the principal axes of the resolved strain rate tensor and SGS stress tensor are aligned; and they assume that a local balance exists between the SGS turbulent kinetic energy production and its dissipation. Scale similarity models (SSM) were created to overcome the drawbacks of eddy viscosity-type models. SSM models, such as that of Bardina et al. and that of Liu et al., assume that scales adjacent in wave number space present similar hydrodynamic features. This similarity makes it possible to effectively relate the unresolved scales, represented by the modified Cross tensor and the modified Reynolds tensor, to the smallest resolved scales, represented by the modified Leonard tensor or by a term obtained through multiple filtering operations at different scales. The models of Bardina et al. and Liu et al. are affected, however, by a fundamental drawback: they are not dissipative enough, i.e. they are not able to ensure a sufficient energy drain from the resolved scales of motion to the unresolved ones. In this paper it is shown that such a drawback is due to the fact that these models do not take into account the smallest unresolved scales, where most of the dissipation of turbulent SGS energy takes place. A new scale similarity LES model that is able to grant an adequate drain of energy from the resolved scales to the unresolved ones is presented. The SGS stress tensor is aligned with the modified Leonard tensor. The coefficient of proportionality is expressed in terms of the trace of the modified Leonard tensor and in terms of the SGS kinetic energy (computed by solving its balance equation). The

  3. How similar are nut-cracking and stone-flaking? A functional approach to percussive technology.

    Science.gov (United States)

    Bril, Blandine; Parry, Ross; Dietrich, Gilles

    2015-11-19

    Various authors have suggested similarities between tool use in early hominins and chimpanzees. This has been particularly evident in studies of nut-cracking which is considered to be the most complex skill exhibited by wild apes, and has also been interpreted as a precursor of more complex stone-flaking abilities. It has been argued that there is no major qualitative difference between what the chimpanzee does when he cracks a nut and what early hominins did when they detached a flake from a core. In this paper, similarities and differences between skills involved in stone-flaking and nut-cracking are explored through an experimental protocol with human subjects performing both tasks. We suggest that a 'functional' approach to percussive action, based on the distinction between functional parameters that characterize each task and parameters that characterize the agent's actions and movements, is a fruitful method for understanding those constraints which need to be mastered to perform each task successfully, and subsequently, the nature of skill involved in both tasks. © 2015 The Author(s).

  4. Decision support models for solid waste management: Review and game-theoretic approaches

    International Nuclear Information System (INIS)

    Karmperis, Athanasios C.; Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios

    2013-01-01

    Highlights: ► The mainly used decision support frameworks for solid waste management are reviewed. ► The LCA, CBA and MCDM models are presented and their strengths, weaknesses, similarities and possible combinations are analyzed. ► The game-theoretic approach in a solid waste management context is presented. ► The waste management bargaining game is introduced as a specific decision support framework. ► Cooperative and non-cooperative game-theoretic approaches to decision support for solid waste management are discussed. - Abstract: This paper surveys decision support models that are commonly used in the solid waste management area. Most models are mainly developed within three decision support frameworks, which are the life-cycle assessment, the cost–benefit analysis and the multi-criteria decision-making. These frameworks are reviewed and their strengths and weaknesses as well as their critical issues are analyzed, while their possible combinations and extensions are also discussed. Furthermore, the paper presents how cooperative and non-cooperative game-theoretic approaches can be used for the purpose of modeling and analyzing decision-making in situations with multiple stakeholders. Specifically, since a waste management model is sustainable when considering not only environmental and economic but also social aspects, the waste management bargaining game is introduced as a specific decision support framework in which future models can be developed

  5. Decision support models for solid waste management: Review and game-theoretic approaches

    Energy Technology Data Exchange (ETDEWEB)

    Karmperis, Athanasios C., E-mail: athkarmp@mail.ntua.gr [Sector of Industrial Management and Operational Research, School of Mechanical Engineering, National Technical University of Athens, Iroon Polytechniou 9, 15780 Athens (Greece); Army Corps of Engineers, Hellenic Army General Staff, Ministry of Defence (Greece); Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios [Sector of Industrial Management and Operational Research, School of Mechanical Engineering, National Technical University of Athens, Iroon Polytechniou 9, 15780 Athens (Greece)

    2013-05-15

    Highlights: ► The mainly used decision support frameworks for solid waste management are reviewed. ► The LCA, CBA and MCDM models are presented and their strengths, weaknesses, similarities and possible combinations are analyzed. ► The game-theoretic approach in a solid waste management context is presented. ► The waste management bargaining game is introduced as a specific decision support framework. ► Cooperative and non-cooperative game-theoretic approaches to decision support for solid waste management are discussed. - Abstract: This paper surveys decision support models that are commonly used in the solid waste management area. Most models are mainly developed within three decision support frameworks, which are the life-cycle assessment, the cost–benefit analysis and the multi-criteria decision-making. These frameworks are reviewed and their strengths and weaknesses as well as their critical issues are analyzed, while their possible combinations and extensions are also discussed. Furthermore, the paper presents how cooperative and non-cooperative game-theoretic approaches can be used for the purpose of modeling and analyzing decision-making in situations with multiple stakeholders. Specifically, since a waste management model is sustainable when considering not only environmental and economic but also social aspects, the waste management bargaining game is introduced as a specific decision support framework in which future models can be developed.
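
    As a minimal illustration of the bargaining-game idea surveyed here, the sketch below computes the Nash bargaining split of a cooperative cost saving between two stakeholders by maximizing the Nash product over a grid. The saving S and the disagreement payoffs are hypothetical numbers, not values from the paper.

    import numpy as np

    # Hypothetical setup: two stakeholders split a cooperative cost saving S.
    S = 100.0           # total saving from a joint waste-treatment facility
    d1, d2 = 10.0, 5.0  # disagreement payoffs (each party acting alone)

    x = np.linspace(0.0, S, 10001)  # share allocated to player 1
    nash_product = np.maximum(x - d1, 0) * np.maximum(S - x - d2, 0)
    x_star = x[np.argmax(nash_product)]
    # Analytic solution splits the surplus equally: x* = (S + d1 - d2) / 2
    print(f"player 1 gets {x_star:.1f}, player 2 gets {S - x_star:.1f}")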

  6. An application of superpositions of two-state Markovian sources to the modelling of self-similar behaviour

    DEFF Research Database (Denmark)

    Andersen, Allan T.; Nielsen, Bo Friis

    1997-01-01

    We present a modelling framework and a fitting method for modelling second-order self-similar behaviour with the Markovian arrival process (MAP). The fitting method is based on fitting to the autocorrelation function of counts of a second-order self-similar process. It is shown that with this fittin...
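
    The fitting target mentioned in the abstract, the autocorrelation of counts of an exactly second-order self-similar process, has a standard closed form in the Hurst parameter H. The sketch below evaluates that curve (a MAP's parameters would then be tuned to reproduce it); H = 0.8 is chosen purely for illustration.

    import numpy as np

    def fgn_autocorrelation(k, H):
        # Autocorrelation of an exactly second-order self-similar process
        # (fractional Gaussian noise) with Hurst parameter H.
        k = np.asarray(k, dtype=float)
        return 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                      + np.abs(k - 1) ** (2 * H))

    lags = np.arange(1, 101)
    target = fgn_autocorrelation(lags, H=0.8)  # curve the MAP is fitted against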

  7. A Similarity Analysis of Audio Signal to Develop a Human Activity Recognition Using Similarity Networks

    Directory of Open Access Journals (Sweden)

    Alejandra García-Hernández

    2017-11-01

    Full Text Available Human Activity Recognition (HAR) is one of the main subjects of study in the areas of computer vision and machine learning due to the great benefits that can be achieved. Examples of the study areas are: health prevention, security and surveillance, automotive research, and many others. The proposed approaches are carried out using machine learning techniques and present good results. However, it is difficult to observe how the descriptors of human activities are grouped. A better understanding of the behavior of descriptors is therefore important for improving the ability to recognize human activities. This paper proposes a novel approach for HAR based on acoustic data and similarity networks. In this approach, we were able to characterize the sound of the activities and identify those activities by looking for similarity in the sound pattern. We evaluated the similarity of the sounds considering mainly two features: the sound location and the materials that were used. As a result, the materials are a better reference for classifying the human activities than the location.
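
    A minimal sketch of the similarity-network construction described here: pairwise cosine similarities between audio feature vectors are thresholded into an adjacency matrix. The random features, the 0.6 threshold and the MFCC-style dimensionality are placeholders, not the paper's actual pipeline.

    import numpy as np

    def cosine_similarity_matrix(X):
        # Pairwise cosine similarity between rows of a feature matrix X
        # (one row per audio segment, e.g. averaged MFCC coefficients).
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        Xn = X / np.clip(norms, 1e-12, None)
        return Xn @ Xn.T

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 13))  # placeholder features for 20 sound clips
    S = cosine_similarity_matrix(X)
    A = (S > 0.6) & ~np.eye(len(X), dtype=bool)  # similarity-network adjacency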

  8. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    Science.gov (United States)

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.

  9. Fast Schemes for Computing Similarities between Gaussian HMMs and Their Applications in Texture Image Classification

    Directory of Open Access Journals (Sweden)

    Chen Ling

    2005-01-01

    Full Text Available An appropriate definition and efficient computation of similarity (or distance) measures between two stochastic models are of theoretical and practical interest. In this work, a similarity measure, that is, a modified "generalized probability product kernel," of Gaussian hidden Markov models is introduced. Two efficient schemes for computing this similarity measure are presented. The first scheme adopts a forward procedure analogous to the approach commonly used in probability evaluation of observation sequences on HMMs. The second scheme is based on the specially defined similarity transition matrix of two Gaussian hidden Markov models. Two scaling procedures are also proposed to solve the out-of-precision problem in the implementation. The effectiveness of the proposed methods has been evaluated on simulated observations with predefined model parameters, and on natural texture images. Promising experimental results have been observed.
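
    For intuition on probability product kernels, the sketch below evaluates the closed-form Bhattacharyya coefficient (the product kernel with exponent 1/2) between two multivariate Gaussians, the building block that HMM-level similarity schemes extend. This is standard textbook math, not the paper's forward-procedure or transition-matrix scheme.

    import numpy as np

    def gaussian_bhattacharyya(mu1, S1, mu2, S2):
        # Bhattacharyya coefficient between two multivariate Gaussians:
        # BC = exp(-D_B), with the usual closed-form Bhattacharyya distance D_B.
        S = 0.5 * (S1 + S2)
        d = mu1 - mu2
        D = (0.125 * d @ np.linalg.solve(S, d)
             + 0.5 * np.log(np.linalg.det(S)
                            / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2))))
        return np.exp(-D)  # 1.0 for identical Gaussians, -> 0 as they separate

    mu = np.zeros(2); S1 = np.eye(2)
    print(gaussian_bhattacharyya(mu, S1, mu + 1.0, 2.0 * S1))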

  10. A Multi-Model Approach for System Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad; Bækgaard, Mikkel Ask Buur

    2007-01-01

    A multi-model approach for system diagnosis is presented in this paper. The relation with fault diagnosis as well as performance validation is considered. The approach is based on testing a number of pre-described models and finding which one fits best. It is based on an active approach, i.e. an auxiliary input to the system is applied. The multi-model approach is applied to a wind turbine system.

  11. Link-Based Similarity Measures Using Reachability Vectors

    Directory of Open Access Journals (Sweden)

    Seok-Ho Yoon

    2014-01-01

    Full Text Available We present a novel approach for computing link-based similarities among objects accurately by utilizing the link information pertaining to the objects involved. We discuss the problems with previous link-based similarity measures and propose a novel approach for computing link-based similarities that does not suffer from these problems. In the proposed approach, each target object is represented by a vector. Each element of the vector corresponds to all the objects in the given data, and the value of each element denotes the weight for the corresponding object. As for this weight value, we propose to utilize the probability of reaching the specific object from the target object, computed using the “Random Walk with Restart” strategy. Then, we define the similarity between two objects as the cosine similarity of the two vectors. In this paper, we provide examples to show that our approach does not suffer from the aforementioned problems. We also evaluate the performance of the proposed methods in comparison with existing link-based measures, qualitatively and quantitatively, with respect to two kinds of data sets, scientific papers and Web documents. Our experimental results indicate that the proposed methods significantly outperform the existing measures.
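
    A minimal sketch of the proposed measure under stated assumptions: each object's reachability vector is obtained by iterating the Random Walk with Restart recursion on a column-normalised adjacency matrix, and similarity is the cosine of two such vectors. The restart probability and iteration count are illustrative choices.

    import numpy as np

    def rwr_vector(W, seed, restart=0.15, n_iter=100):
        # Reachability vector of `seed`: stationary probabilities of a
        # Random Walk with Restart on column-normalised adjacency W.
        n = W.shape[0]
        P = W / np.clip(W.sum(axis=0, keepdims=True), 1e-12, None)
        e = np.zeros(n); e[seed] = 1.0
        r = e.copy()
        for _ in range(n_iter):
            r = (1 - restart) * P @ r + restart * e
        return r

    def link_similarity(W, a, b):
        ra, rb = rwr_vector(W, a), rwr_vector(W, b)
        return ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb))

    # Toy graph: a ring of 5 nodes.
    W = np.roll(np.eye(5), 1, axis=0) + np.roll(np.eye(5), -1, axis=0)
    print(link_similarity(W, 0, 2))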

  12. Walking on a user similarity network towards personalized recommendations.

    Directory of Open Access Journals (Sweden)

    Mingxin Gan

    Full Text Available Personalized recommender systems have been receiving more and more attention in addressing the serious problem of information overload accompanying the rapid evolution of the world-wide-web. Although traditional collaborative filtering approaches based on similarities between users have achieved remarkable success, it has been shown that the existence of popular objects may adversely influence the correct scoring of candidate objects, which leads to unreasonable recommendation results. Meanwhile, recent advances have demonstrated that approaches based on diffusion and random walk processes exhibit superior performance over collaborative filtering methods in both the recommendation accuracy and diversity. Building on these results, we adopt three strategies (power-law adjustment, nearest neighbor, and threshold filtration) to adjust a user similarity network from user similarity scores calculated on historical data, and then propose a random walk with restart model on the constructed network to achieve personalized recommendations. We perform cross-validation experiments on two real data sets (MovieLens and Netflix) and compare the performance of our method against the existing state-of-the-art methods. Results show that our method outperforms existing methods in not only recommendation accuracy and diversity, but also retrieval performance.

  13. Walking on a user similarity network towards personalized recommendations.

    Science.gov (United States)

    Gan, Mingxin

    2014-01-01

    Personalized recommender systems have been receiving more and more attention in addressing the serious problem of information overload accompanying the rapid evolution of the world-wide-web. Although traditional collaborative filtering approaches based on similarities between users have achieved remarkable success, it has been shown that the existence of popular objects may adversely influence the correct scoring of candidate objects, which leads to unreasonable recommendation results. Meanwhile, recent advances have demonstrated that approaches based on diffusion and random walk processes exhibit superior performance over collaborative filtering methods in both the recommendation accuracy and diversity. Building on these results, we adopt three strategies (power-law adjustment, nearest neighbor, and threshold filtration) to adjust a user similarity network from user similarity scores calculated on historical data, and then propose a random walk with restart model on the constructed network to achieve personalized recommendations. We perform cross-validation experiments on two real data sets (MovieLens and Netflix) and compare the performance of our method against the existing state-of-the-art methods. Results show that our method outperforms existing methods in not only recommendation accuracy and diversity, but also retrieval performance.

  14. Training of Tonal Similarity Ratings in Non-Musicians: A “Rapid Learning” Approach

    Science.gov (United States)

    Oechslin, Mathias S.; Läge, Damian; Vitouch, Oliver

    2012-01-01

    Although cognitive music psychology has a long tradition of expert–novice comparisons, experimental training studies are rare. Studies on the learning progress of trained novices in hearing harmonic relationships are still largely lacking. This paper presents a simple training concept using the example of tone/triad similarity ratings, demonstrating the gradual progress of non-musicians compared to musical experts: In a feedback-based “rapid learning” paradigm, participants had to decide for single tones and chords whether paired sounds matched each other well. Before and after the training sessions, they provided similarity judgments for a complete set of sound pairs. From these similarity matrices, individual relational sound maps, intended to display mental representations, were calculated by means of non-metric multidimensional scaling (NMDS), and were compared to an expert model through procrustean transformation. Approximately half of the novices showed substantial learning success, with some participants even reaching the level of professional musicians. Results speak for a fundamental ability to quickly train an understanding of harmony, show inter-individual differences in learning success, and demonstrate the suitability of the scaling method used for learning research in music and other domains. Results are discussed in the context of the “giftedness” debate. PMID:22629252
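
    The analysis pipeline described here (similarity matrices, NMDS maps, Procrustes comparison to an expert configuration) can be sketched with standard tooling. The sketch below uses sklearn's non-metric MDS and scipy's procrustes on synthetic dissimilarity matrices, which stand in for the real rating data.

    import numpy as np
    from sklearn.manifold import MDS
    from scipy.spatial import procrustes

    # Hypothetical symmetric dissimilarity matrices for the same sound pairs.
    rng = np.random.default_rng(1)
    D = rng.random((12, 12)); D_subject = (D + D.T) / 2
    np.fill_diagonal(D_subject, 0)
    D_expert = np.abs(D_subject + rng.normal(scale=0.05, size=D_subject.shape))
    D_expert = (D_expert + D_expert.T) / 2; np.fill_diagonal(D_expert, 0)

    nmds = MDS(n_components=2, metric=False, dissimilarity='precomputed',
               random_state=0)
    map_subject = nmds.fit_transform(D_subject)  # relational "sound map"
    map_expert = nmds.fit_transform(D_expert)

    # Procrustes superimposition: disparity measures distance to the expert map.
    _, _, disparity = procrustes(map_expert, map_subject)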

  15. A New Trajectory Similarity Measure for GPS Data

    KAUST Repository

    Ismail, Anas; Vigneron, Antoine E.

    2016-01-01

    We present a new algorithm for measuring the similarity between trajectories, and in particular between GPS traces. We call this new similarity measure the Merge Distance (MD). Our approach is robust against subsampling and supersampling. We perform experiments to compare this new similarity measure with the two main approaches that have been used so far: Dynamic Time Warping (DTW) and the Euclidean distance. © 2015 ACM.
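
    For reference, one of the two baselines the Merge Distance is compared against, dynamic time warping, has a compact dynamic-programming form. The sketch below is a textbook implementation for sequences of 2-D points (no windowing or speed-ups), not the authors' code.

    import numpy as np

    def dtw_distance(a, b):
        # Classic DTW distance between two point sequences; each cell stores
        # the cheapest cumulative alignment cost ending at that pair.
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    rng = np.random.default_rng(5)
    traj_a = rng.random((50, 2))
    traj_b = traj_a[::2] + 0.01  # subsampled, slightly shifted copy
    print(dtw_distance(traj_a, traj_b))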

  16. A New Trajectory Similarity Measure for GPS Data

    KAUST Repository

    Ismail, Anas

    2016-08-08

    We present a new algorithm for measuring the similarity between trajectories, and in particular between GPS traces. We call this new similarity measure the Merge Distance (MD). Our approach is robust against subsampling and supersampling. We perform experiments to compare this new similarity measure with the two main approaches that have been used so far: Dynamic Time Warping (DTW) and the Euclidean distance. © 2015 ACM.

  17. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  18. A fuzzy logic approach to modeling the underground economy in Taiwan

    Science.gov (United States)

    Yu, Tiffany Hui-Kuang; Wang, David Han-Min; Chen, Su-Jane

    2006-04-01

    The size of the ‘underground economy’ (UE) is valuable information in the formulation of macroeconomic and fiscal policy. This study applies fuzzy set theory and fuzzy logic to model Taiwan's UE over the period from 1960 to 2003. Two major factors affecting the size of the UE, the effective tax rate and the degree of government regulation, are used. The size of Taiwan's UE is scaled and compared with estimates from other models. Although our approach yields different estimates, similar patterns and leading behaviour are exhibited throughout the period. The advantage of applying fuzzy logic is twofold. First, it avoids the complex calculations of conventional econometric models. Second, fuzzy rules with linguistic terms are easy for humans to understand.
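
    A minimal sketch of the kind of fuzzy rule evaluation the abstract refers to: triangular membership functions map the two inputs (effective tax rate, degree of regulation) to degrees of truth, and a min operator combines them. All breakpoints and input values are invented for illustration.

    import numpy as np

    def tri(x, a, b, c):
        # Triangular membership function rising from a, peaking at b, falling to c.
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    # Illustrative rule: IF tax rate is HIGH and regulation is HEAVY
    # THEN the underground economy is LARGE (min used as the AND operator).
    tax_rate, regulation = 0.28, 0.7  # hypothetical normalised inputs
    mu_high_tax = tri(tax_rate, 0.15, 0.30, 0.45)
    mu_heavy_reg = tri(regulation, 0.4, 0.8, 1.2)
    rule_strength = min(mu_high_tax, mu_heavy_reg)  # degree to which the rule fires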

  19. Modeling the kinetics of hydrates formation using phase field method under similar conditions of petroleum pipelines; Modelagem da cinetica de formacao de hidratos utilizando o Modelo do Campo de Fase em condicoes similares a dutos de petroleo

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Mabelle Biancardi; Castro, Jose Adilson de; Silva, Alexandre Jose da [Universidade Federal Fluminense (UFF), Volta Redonda, RJ (Brazil). Programa de Pos-Graduacao em Engenharia Metalurgica], e-mails: mabelle@metal.eeimvr.uff.br; adilson@metal.eeimvr.uff.br; ajs@metal.eeimvr.uff.br

    2008-10-15

    Natural hydrates are ice-like crystalline compounds formed during oil extraction, transportation and processing. This paper deals with the kinetics of hydrate formation using the phase field approach coupled with the transport equation for energy. The kinetic parameters of hydrate formation were obtained by adjusting the proposed model to experimental results under conditions similar to oil extraction. The effects of thermal and nucleation conditions were investigated, while the rate of formation and morphology were obtained by numerical computation. Model results for kinetic growth and morphology presented good agreement with the experimental ones. Simulation results indicated that super-cooling and pressure were decisive parameters for hydrate growth, morphology and interface thickness. (author)
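
    A minimal 1-D Allen-Cahn-type update conveys the phase-field mechanics involved: an order parameter distinguishing hydrate from liquid evolves under a double-well potential plus a gradient-energy term. The dimensionless grid and parameters below are placeholders, and the paper's actual model additionally couples this to the energy transport equation.

    import numpy as np

    # phi = 1 hydrate, phi = 0 liquid; explicit Euler step of
    # dphi/dt = M * (eps2 * laplacian(phi) - f'(phi)), f = W*phi^2*(1-phi)^2.
    nx, dx, dt = 200, 1.0, 0.1       # dimensionless grid for illustration
    eps2, W, M = 1.0, 1.0, 1.0       # gradient coefficient, well height, mobility
    phi = np.zeros(nx); phi[:20] = 1.0  # hydrate seed at one end

    for _ in range(2000):
        lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
        dfdphi = 2 * W * phi * (1 - phi) * (1 - 2 * phi)
        phi += dt * M * (eps2 * lap - dfdphi)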

  20. Towards a chromatographic similarity index to establish localised quantitative structure-retention relationships for retention prediction. II Use of Tanimoto similarity index in ion chromatography.

    Science.gov (United States)

    Park, Soo Hyun; Talebi, Mohammad; Amos, Ruth I J; Tyteca, Eva; Haddad, Paul R; Szucs, Roman; Pohl, Christopher A; Dolan, John W

    2017-11-10

    Quantitative Structure-Retention Relationships (QSRR) are used to predict retention times of compounds based only on their chemical structures encoded by molecular descriptors. The main concern in QSRR modelling is to build models with high predictive power, allowing reliable retention prediction for the unknown compounds across the chromatographic space. With the aim of enhancing the prediction power of the models, in this work, our previously proposed QSRR modelling approach called "federation of local models" is extended to ion chromatography to predict retention times of unknown ions, where a local model for each target ion (unknown) is created using only structurally similar ions from the dataset. A Tanimoto similarity (TS) score was utilised as a measure of structural similarity, and training sets were developed by including ions that were similar to the target ion, as defined by a threshold value. The prediction of retention parameters (a- and b-values) in the linear solvent strength (LSS) model in ion chromatography, log k = a − b log[eluent], allows the prediction of retention times under all eluent concentrations. The QSRR models for a- and b-values were developed by a genetic algorithm-partial least squares method using the retention data of inorganic and small organic anions and larger organic cations (molecular mass up to 507) on four Thermo Fisher Scientific columns (AS20, AS19, AS11HC and CS17). The corresponding predicted retention times were calculated by fitting the predicted a- and b-values of the models into the LSS model equation. The predicted retention times were also plotted against the experimental values to evaluate the goodness of fit and the predictive power of the models. The application of a TS threshold of 0.6 was found to successfully produce predictive and reliable QSRR models (Q²F2 > 0.8 and mean absolute error < 0.1), and hence accurate retention time predictions with an average mean absolute error of 0.2 min. Crown Copyright
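
    A sketch of the two ingredients named in the abstract: the Tanimoto score used to assemble a local training set, and the LSS relation used to turn predicted a- and b-values into retention times (via t_R = t_M(1 + k), with t_M the dead time). The fingerprint vectors and numeric inputs below are placeholders.

    import numpy as np

    def tanimoto(a, b):
        # Tanimoto similarity between two binary fingerprint vectors.
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 0.0

    def retention_time(a, b_coef, eluent_conc, dead_time=1.0):
        # LSS model: log k = a - b*log10([eluent]); then t_R = t_M * (1 + k).
        log_k = a - b_coef * np.log10(eluent_conc)
        return dead_time * (1.0 + 10.0 ** log_k)

    fp_target = np.array([1, 1, 0, 1, 0, 0, 1, 0])
    fp_candidate = np.array([1, 1, 0, 0, 0, 0, 1, 0])
    if tanimoto(fp_target, fp_candidate) >= 0.6:  # the paper's TS threshold
        print(retention_time(a=1.2, b_coef=1.5, eluent_conc=30e-3))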

  1. A Similarity Analysis of Audio Signal to Develop a Human Activity Recognition Using Similarity Networks.

    Science.gov (United States)

    García-Hernández, Alejandra; Galván-Tejada, Carlos E; Galván-Tejada, Jorge I; Celaya-Padilla, José M; Gamboa-Rosales, Hamurabi; Velasco-Elizondo, Perla; Cárdenas-Vargas, Rogelio

    2017-11-21

    Human Activity Recognition (HAR) is one of the main subjects of study in the areas of computer vision and machine learning due to the great benefits that can be achieved. Examples of the study areas are: health prevention, security and surveillance, automotive research, and many others. The proposed approaches are carried out using machine learning techniques and present good results. However, it is difficult to observe how the descriptors of human activities are grouped. A better understanding of the behavior of descriptors is therefore important for improving the ability to recognize human activities. This paper proposes a novel approach for HAR based on acoustic data and similarity networks. In this approach, we were able to characterize the sound of the activities and identify those activities by looking for similarity in the sound pattern. We evaluated the similarity of the sounds considering mainly two features: the sound location and the materials that were used. As a result, the materials are a better reference for classifying the human activities than the location.

  2. Enhancing Media Personalization by Extracting Similarity Knowledge from Metadata

    DEFF Research Database (Denmark)

    Butkus, Andrius

    …only “more of the same” type of content, which does not necessarily lead to meaningful personalization. Another way to approach similarity is to find a similar underlying meaning in the content. Aspects of meaning in media can be represented using Gardenfors' Conceptual Spaces theory, which can be seen as a cognitive foundation for modeling concepts. Conceptual Spaces is applied in this thesis to analyze media in terms of its dimensions and knowledge domains, which in return defines properties and concepts. One of the most important domains in terms of describing media is the emotional one… …using Latent Semantic Analysis (one of the unsupervised machine learning techniques). It presents three separate cases to illustrate the similarity knowledge extraction from the metadata, where the emotional components in each case represent different abstraction levels – genres, synopsis and lyrics…

  3. Graphene growth process modeling: a physical-statistical approach

    Science.gov (United States)

    Wu, Jian; Huang, Qiang

    2014-09-01

    As a zero-bandgap semiconductor, graphene is an attractive material for a wide variety of applications such as optoelectronics. Among various techniques developed for graphene synthesis, chemical vapor deposition on copper foils shows high potential for producing few-layer and large-area graphene. Because fabrication of high-quality graphene sheets requires an understanding of growth mechanisms, and methods for characterizing and controlling the grain size of graphene flakes, analytical modeling of the graphene growth process is essential for controlled fabrication. The graphene growth process starts with randomly nucleated islands that gradually develop into complex shapes, grow in size, and eventually connect together to cover the copper foil. To model this complex process, we develop a physical-statistical approach under the assumption of self-similarity during graphene growth. The growth kinetics is uncovered by separating island shapes from area growth rate. We propose to characterize the area growth velocity using a confined exponential model, which not only has a clear physical explanation, but also fits the real data well. For the shape modeling, we develop a parametric shape model which can be well explained by the angular-dependent growth rate. This work can provide useful information for the control and optimization of the graphene growth process on Cu foil.
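
    The confined exponential model for the area growth velocity admits a simple least-squares fit. The sketch below fits v(t) = v_max(1 − exp(−kt)) to synthetic noisy data with scipy; the functional form follows the abstract's description, but all numbers are invented.

    import numpy as np
    from scipy.optimize import curve_fit

    def confined_exponential(t, v_max, k):
        # Area growth velocity that saturates at v_max with rate constant k.
        return v_max * (1.0 - np.exp(-k * t))

    t = np.linspace(0, 60, 30)  # growth time, illustrative units
    v_true = confined_exponential(t, 2.5, 0.08)
    v_obs = v_true + np.random.default_rng(2).normal(scale=0.05, size=t.size)

    popt, pcov = curve_fit(confined_exponential, t, v_obs, p0=(1.0, 0.1))
    print(popt)  # recovered (v_max, k)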

  4. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and

  5. Testing process predictions of models of risky choice: a quantitative model comparison approach

    Science.gov (United States)

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472

  6. Testing Process Predictions of Models of Risky Choice: A Quantitative Model Comparison Approach

    Directory of Open Access Journals (Sweden)

    Thorsten ePachur

    2013-09-01

    Full Text Available This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or nonlinear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter, Gigerenzer, & Hertwig, 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called similarity. In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies.

  7. The Hyper-Envelope Modeling Interface (HEMI): A Novel Approach Illustrated Through Predicting Tamarisk (Tamarix spp.) Habitat in the Western USA

    Science.gov (United States)

    Graham, Jim; Young, Nick; Jarnevich, Catherine S.; Newman, Greg; Evangelista, Paul; Stohlgren, Thomas J.

    2013-01-01

    Habitat suitability maps are commonly created by modeling a species’ environmental niche from occurrences and environmental characteristics. Here, we introduce the hyper-envelope modeling interface (HEMI), providing a new method for creating habitat suitability models using Bezier surfaces to model a species niche in environmental space. HEMI allows modeled surfaces to be visualized and edited in environmental space based on expert knowledge and does not require absence points for model development. The modeled surfaces require relatively few parameters compared to similar modeling approaches and may produce models that better match ecological niche theory. As a case study, we modeled the invasive species tamarisk (Tamarix spp.) in the western USA. We compare results from HEMI with those from existing similar modeling approaches (including BioClim, BioMapper, and Maxent). We used synthetic surfaces to create visualizations of the various models in environmental space and used the modified area under the curve (AUC) statistic and the Akaike information criterion (AIC) as measures of model performance. We show that HEMI produced slightly better AUC values, except for Maxent, and better AIC values overall. HEMI created a model with only ten parameters while Maxent produced a model with over 100 and BioClim used only eight. Additionally, HEMI allowed visualization and editing of the model in environmental space to develop alternative potential habitat scenarios. The use of Bezier surfaces can provide simple models that match our expectations of biological niche models and, at least in some cases, out-perform more complex approaches.

  8. The Hyper-Envelope Modeling Interface (HEMI): A Novel Approach Illustrated Through Predicting Tamarisk ( Tamarix spp.) Habitat in the Western USA

    Science.gov (United States)

    Graham, Jim; Young, Nick; Jarnevich, Catherine S.; Newman, Greg; Evangelista, Paul; Stohlgren, Thomas J.

    2013-10-01

    Habitat suitability maps are commonly created by modeling a species' environmental niche from occurrences and environmental characteristics. Here, we introduce the hyper-envelope modeling interface (HEMI), providing a new method for creating habitat suitability models using Bezier surfaces to model a species niche in environmental space. HEMI allows modeled surfaces to be visualized and edited in environmental space based on expert knowledge and does not require absence points for model development. The modeled surfaces require relatively few parameters compared to similar modeling approaches and may produce models that better match ecological niche theory. As a case study, we modeled the invasive species tamarisk ( Tamarix spp.) in the western USA. We compare results from HEMI with those from existing similar modeling approaches (including BioClim, BioMapper, and Maxent). We used synthetic surfaces to create visualizations of the various models in environmental space and used the modified area under the curve (AUC) statistic and the Akaike information criterion (AIC) as measures of model performance. We show that HEMI produced slightly better AUC values, except for Maxent, and better AIC values overall. HEMI created a model with only ten parameters while Maxent produced a model with over 100 and BioClim used only eight. Additionally, HEMI allowed visualization and editing of the model in environmental space to develop alternative potential habitat scenarios. The use of Bezier surfaces can provide simple models that match our expectations of biological niche models and, at least in some cases, out-perform more complex approaches.
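
    For intuition on the Bezier-surface idea, the sketch below evaluates a small Bezier control net over two scaled environmental axes using Bernstein polynomials. The 3x3 net and the interpretation of the output as a suitability score are illustrative assumptions, not HEMI's implementation.

    import numpy as np
    from scipy.special import comb

    def bezier_surface(P, u, v):
        # Evaluate a Bezier surface with control net P (shape (n+1, m+1))
        # at parameters u, v in [0, 1] via Bernstein polynomial bases.
        n, m = P.shape[0] - 1, P.shape[1] - 1
        Bu = np.array([comb(n, i) * u**i * (1 - u)**(n - i) for i in range(n + 1)])
        Bv = np.array([comb(m, j) * v**j * (1 - v)**(m - j) for j in range(m + 1)])
        return Bu @ P @ Bv  # suitability score at this point in environmental space

    # 3x3 control net over two scaled axes (e.g. temperature, precipitation).
    P = np.array([[0.0, 0.2, 0.0],
                  [0.3, 1.0, 0.3],
                  [0.0, 0.2, 0.0]])
    score = bezier_surface(P, u=0.5, v=0.5)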

  9. Clarkson-Kruskal Direct Similarity Approach for Differential-Difference Equations

    Institute of Scientific and Technical Information of China (English)

    SHEN Shou-Feng

    2005-01-01

    In this letter, the Clarkson-Kruskal direct method is extended to similarity reduce some differentialdifference equations. As examples, the differential-difference KZ equation and KP equation are considered.

  10. Social Network Analysis and Nutritional Behavior: An Integrated Modeling Approach.

    Science.gov (United States)

    Senior, Alistair M; Lihoreau, Mathieu; Buhl, Jerome; Raubenheimer, David; Simpson, Stephen J

    2016-01-01

    Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent research combining state-space models of nutritional geometry with agent-based models (ABMs) shows how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit ABMs that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interactions in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
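
    A minimal sketch of the integration step described here: win/loss counts from a (here randomly generated) agent-based simulation are turned into a directed dominance network, from which standard network metrics can be read off with networkx. The choice of out-degree centrality as the hierarchy metric is an illustrative stand-in for the paper's analyses.

    import numpy as np
    import networkx as nx

    # Hypothetical ABM output: counts[i, j] = wins of agent i over agent j
    # in contests over a shared food patch.
    rng = np.random.default_rng(3)
    counts = rng.integers(0, 10, size=(8, 8))
    np.fill_diagonal(counts, 0)

    G = nx.DiGraph()
    G.add_nodes_from(range(8))
    for i in range(8):
        for j in range(8):
            if counts[i, j] > counts[j, i]:  # i dominates j overall
                G.add_edge(i, j, weight=int(counts[i, j]))

    dominance_rank = nx.out_degree_centrality(G)  # simple hierarchy metric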

  11. Computational prediction of drug-drug interactions based on drugs functional similarities.

    Science.gov (United States)

    Ferdousi, Reza; Safdari, Reza; Omidi, Yadollah

    2017-06-01

    Therapeutic activities of drugs are often influenced by co-administration of drugs that may cause inevitable drug-drug interactions (DDIs) and inadvertent side effects. Prediction and identification of DDIs are extremely vital for the patient safety and success of treatment modalities. A number of computational methods have been employed for the prediction of DDIs based on drugs structures and/or functions. Here, we report on a computational method for DDI prediction based on functional similarity of drugs. The model was set up based on key biological elements including carriers, transporters, enzymes and targets (CTET). The model was applied to 2189 approved drugs. For each drug, all the associated CTETs were collected, and the corresponding binary vectors were constructed to determine the DDIs. Various similarity measures were conducted to detect DDIs. Of the examined similarity methods, the inner product-based similarity measures (IPSMs) were found to provide improved prediction values. Altogether, 2,394,766 potential drug-pair interactions were studied. The model was able to predict over 250,000 unknown potential DDIs. Based on our findings, we propose the current method as a robust, yet simple and fast, universal in silico approach for identification of DDIs. We envision that this proposed method can be used as a practical technique for the detection of possible DDIs based on the functional similarities of drugs. Copyright © 2017. Published by Elsevier Inc.
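
    A toy rendering of the IPSM idea under stated assumptions: drugs are binary CTET profile vectors, similarity is an inner-product measure (cosine here), and pairs above a threshold are flagged. The vectors and the 0.8 cutoff are invented for illustration.

    import numpy as np

    def cosine_ipsm(u, v):
        # An inner-product-based similarity measure (IPSM) on binary
        # carrier/transporter/enzyme/target (CTET) profile vectors.
        num = float(u @ v)
        den = np.linalg.norm(u) * np.linalg.norm(v)
        return num / den if den else 0.0

    # Toy CTET profiles: 1 where the drug is associated with that element.
    drug_a = np.array([1, 0, 1, 1, 0, 0, 1])
    drug_b = np.array([1, 0, 1, 0, 0, 1, 1])
    if cosine_ipsm(drug_a, drug_b) > 0.8:  # hypothetical decision threshold
        print("flag as potential drug-drug interaction")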

  12. Studies of a general flat space/boson star transition model in a box through a language similar to holographic superconductors

    Science.gov (United States)

    Peng, Yan

    2017-07-01

    We study a general flat space/boson star transition model in a quasi-local ensemble through approaches familiar from holographic superconductor theories. We manage to find a parameter ψ₂, which is proved to be useful in disclosing properties of phase transitions. In this work, we explore effects of the scalar mass, scalar charge and Stückelberg mechanism on the critical phase transition points and the order of transitions, mainly from behaviors of the parameter ψ₂. We mention that properties of transitions in quasi-local gravity are strikingly similar to those in holographic superconductor models. We also obtain an analytical relation ψ₂ ∝ (μ − μ_c)^{1/2}, which also holds for the condensed scalar operator in the holographic insulator/superconductor system in accordance with mean field theories.

  13. Reconstructing plateau icefields: Evaluating empirical and modelled approaches

    Science.gov (United States)

    Pearce, Danni; Rea, Brice; Barr, Iestyn

    2013-04-01

    Glacial landforms are widely utilised to reconstruct former glacier geometries, with a common aim to estimate the Equilibrium Line Altitudes (ELAs) and, from these, infer palaeoclimatic conditions. Such inferences may be studied on a regional scale and used to correlate climatic gradients across large distances (e.g., Europe). In Britain, the traditional approach uses geomorphological mapping with hand contouring to derive the palaeo-ice surface. More recently, ice surface modelling has enabled equilibrium profile reconstructions tuned using the geomorphology. Both methods permit derivation of palaeo-climate, but no study has compared the two methods for the same ice-mass. This is important because either approach may result in differences in glacier limits, ELAs and palaeo-climate. This research uses both methods to reconstruct a plateau icefield and quantifies the results from a cartographic and geometrical aspect. Detailed geomorphological mapping of the Tweedsmuir Hills in the Southern Uplands, Scotland (c. 320 km²) was conducted to examine the extent of Younger Dryas (YD; 12.9–11.7 cal. ka BP) glaciation. Landform evidence indicates a plateau icefield configuration of two separate ice-masses during the YD, covering areas of c. 45 km² and 25 km². The interpreted age is supported by new radiocarbon dating of basal stratigraphies and Terrestrial Cosmogenic Nuclide Analysis (TCNA) of in situ boulders. Both techniques produce similar configurations; however, the model results in a coarser resolution requiring further processing if a cartographic map is required. When landforms are absent or fragmentary (e.g., trimlines and lateral moraines), as in many accumulation zones on plateau icefields, the geomorphological approach increasingly relies on extrapolation between lines of evidence and on the individual's perception of how the ice-mass ought to look. In some locations this results in an underestimation of the ice surface compared to the modelled surface, most likely due to

  14. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information of historical data to estimate model errors is one of the science frontier research topics. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Thereby, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can realize the combination of statistics and dynamics to a certain extent.
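
    The experimental design described above (a perturbed Lorenz system as "reality", the classic system as the imperfect model) is easy to reproduce. The sketch below generates both trajectories and the discrepancy series an evolutionary-modeling step would then learn from; the periodic-forcing form is a guess at the flavour of the paper's evolutionary function, not its exact specification.

    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # Classic Lorenz (1963) system, used as the imperfect prediction model.
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    def lorenz_perturbed(t, s, amp=0.1, omega=0.5):
        # "Reality": Lorenz system plus an assumed periodic model-error term.
        dx, dy, dz = lorenz(t, s)
        return [dx + amp * np.sin(omega * t), dy, dz]

    t_eval = np.linspace(0, 20, 2001)
    truth = solve_ivp(lorenz_perturbed, (0, 20), [1.0, 1.0, 1.0], t_eval=t_eval)
    model = solve_ivp(lorenz, (0, 20), [1.0, 1.0, 1.0], t_eval=t_eval)
    errors = truth.y - model.y  # "historical" discrepancies to learn from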

  15. HEDR modeling approach

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1992-07-01

    This report details the conceptual approaches to be used in calculating radiation doses to individuals throughout the various periods of operations at the Hanford Site. The report considers the major environmental transport pathways--atmospheric, surface water, and ground water--and projects an appropriate modeling technique for each. The modeling sequence chosen for each pathway depends on the available data on doses, the degree of confidence justified by such existing data, and the level of sophistication deemed appropriate for the particular pathway and time period being considered.

  16. A study on the quantitative model of human response time using the amount and the similarity of information

    International Nuclear Information System (INIS)

    Lee, Sung Jin

    2006-02-01

    The mental capacity to retain or recall information, or memory, is related to human performance during processing of information. Although a large number of studies have been carried out on human performance, little is known about the similarity effect. The purpose of this study was to propose and validate a quantitative, predictive model of human response time in the user interface, built on the basic concepts of information amount, similarity and degree of practice. It was difficult to explain human performance by similarity or information amount alone. There were two difficulties: constructing a quantitative model of human response time and validating the proposed model by experimental work. A quantitative model based on Hick's law, the law of practice and similarity theory was developed. The model was validated under various experimental conditions by measuring the participants' response times in the environment of a computer-based display. Human performance improved with the degree of similarity and practice in the user interface. We also found an age-related effect: human performance degraded with increasing age. The proposed model may be useful for training operators who will handle such interfaces and for predicting human performance when system designs change.
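
    Two of the model's ingredients have standard quantitative forms that can be sketched directly: Hick's law ties response time to the information content of the choice set, and the law of practice gives a power-law decay with trials. The coefficients below are illustrative, and the paper's similarity term is not reproduced here.

    import numpy as np

    def hick_response_time(n_alternatives, a=0.2, b=0.15):
        # Hick's law: RT grows with the information content log2(n + 1);
        # a, b are illustrative intercept/slope in seconds.
        return a + b * np.log2(np.asarray(n_alternatives) + 1.0)

    def practiced_rt(rt0, trials, alpha=0.3):
        # Law of practice: response time decays as a power of trial number.
        return rt0 * np.asarray(trials, dtype=float) ** (-alpha)

    rt = hick_response_time([1, 2, 4, 8])            # more choices, slower response
    rt_after_practice = practiced_rt(rt[-1], [1, 10, 100])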

  17. Similarity analyses of chromatographic herbal fingerprints: A review

    International Nuclear Information System (INIS)

    Goodarzi, Mohammad; Russell, Paul J.; Vander Heyden, Yvan

    2013-01-01

    Graphical abstract: -- Highlights: •Similarity analyses of herbal fingerprints are reviewed. •Different (dis)similarity approaches are discussed. •(Dis)similarity-metrics and exploratory-analysis approaches are illustrated. •Correlation and distance-based measures are overviewed. •Similarity analyses illustrated by several case studies. -- Abstract: Herbal medicines are becoming popular again in the developed countries because they are “natural”, and people thus often assume that they are inherently safe. Herbs have also been used worldwide for many centuries in traditional medicines. Concern about their safety and efficacy has grown with increasing Western interest. Herbal materials and their extracts are very complex, often including hundreds of compounds. A thorough understanding of their chemical composition is essential for conducting a safety risk assessment. However, herbal material can show considerable variability. The chemical constituents and their amounts in a herb can be different, due to growing conditions, such as climate and soil, the drying process, the harvest season, etc. Among the analytical methods, chromatographic fingerprinting has been recommended as a potential and reliable methodology for the identification and quality control of herbal medicines. Identification is needed to avoid fraud and adulteration. Currently, analyzing chromatographic herbal fingerprint data sets has become one of the most applied tools in quality assessment of herbal materials. Mostly, the entire chromatographic profiles are used to identify or to evaluate the quality of the herbs investigated. Occasionally only a limited number of compounds are considered. One approach to the safety risk assessment is to determine whether the herbal material is substantially equivalent to that which is either readily consumed in the diet, has a history of application or has earlier been commercialized i.e. to what is considered as reference material. In order
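
    Two of the measures this review covers, a correlation-based and a cosine-based (congruence) similarity between whole chromatographic profiles, can be sketched in a few lines. The synthetic two-peak chromatograms below are placeholders for real fingerprint data.

    import numpy as np

    def correlation_similarity(x, y):
        # Pearson correlation between two aligned chromatographic profiles.
        return float(np.corrcoef(x, y)[0, 1])

    def congruence_similarity(x, y):
        # Congruence coefficient: cosine of the raw (uncentred) profiles.
        return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

    t = np.linspace(0, 30, 3000)  # retention time axis, in minutes
    ref = np.exp(-(t - 8) ** 2) + 0.6 * np.exp(-(t - 15) ** 2 / 2)
    sample = 0.9 * ref + 0.02 * np.random.default_rng(4).normal(size=t.size)
    print(correlation_similarity(ref, sample), congruence_similarity(ref, sample))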

  18. Similarity analyses of chromatographic herbal fingerprints: A review

    Energy Technology Data Exchange (ETDEWEB)

    Goodarzi, Mohammad [Department of Analytical Chemistry and Pharmaceutical Technology, Center for Pharmaceutical Research, Vrije Universiteit Brussel, Laarbeeklaan 103, B-1090 Brussels (Belgium); Russell, Paul J. [Safety and Environmental Assurance Centre, Unilever, Colworth Science Park, Sharnbrook, Bedfordshire MK44 1LQ (United Kingdom); Vander Heyden, Yvan, E-mail: yvanvdh@vub.ac.be [Department of Analytical Chemistry and Pharmaceutical Technology, Center for Pharmaceutical Research, Vrije Universiteit Brussel, Laarbeeklaan 103, B-1090 Brussels (Belgium)

    2013-12-04

    Graphical abstract: -- Highlights: •Similarity analyses of herbal fingerprints are reviewed. •Different (dis)similarity approaches are discussed. •(Dis)similarity-metrics and exploratory-analysis approaches are illustrated. •Correlation and distance-based measures are overviewed. •Similarity analyses illustrated by several case studies. -- Abstract: Herbal medicines are becoming popular again in the developed countries because they are “natural”, and people thus often assume that they are inherently safe. Herbs have also been used worldwide for many centuries in traditional medicines. Concern about their safety and efficacy has grown with increasing Western interest. Herbal materials and their extracts are very complex, often including hundreds of compounds. A thorough understanding of their chemical composition is essential for conducting a safety risk assessment. However, herbal material can show considerable variability. The chemical constituents and their amounts in a herb can be different, due to growing conditions, such as climate and soil, the drying process, the harvest season, etc. Among the analytical methods, chromatographic fingerprinting has been recommended as a potential and reliable methodology for the identification and quality control of herbal medicines. Identification is needed to avoid fraud and adulteration. Currently, analyzing chromatographic herbal fingerprint data sets has become one of the most applied tools in quality assessment of herbal materials. Mostly, the entire chromatographic profiles are used to identify or to evaluate the quality of the herbs investigated. Occasionally only a limited number of compounds are considered. One approach to the safety risk assessment is to determine whether the herbal material is substantially equivalent to that which is either readily consumed in the diet, has a history of application or has earlier been commercialized i.e. to what is considered as reference material. In order

  19. A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model

    Energy Technology Data Exchange (ETDEWEB)

    Pasqualini, Donatella [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-11

    This manuscript briefly describes a statistical approach to generate synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling hurricane risk in the United States, supporting decision makers and implementations of adaptation strategies to extreme weather. In the literature there are mainly two approaches to model hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm key physical parameters are calculated using complex physical climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls in the second category, adopting a pure stochastic approach.

  20. Spatial pattern evaluation of a calibrated national hydrological model - a remote-sensing-based diagnostic approach

    Science.gov (United States)

    Mendiguren, Gorka; Koch, Julian; Stisen, Simon

    2017-11-01

    Distributed hydrological models are traditionally evaluated against discharge stations, emphasizing the temporal and neglecting the spatial component of a model. The present study widens the traditional paradigm by highlighting spatial patterns of evapotranspiration (ET), a key variable at the land-atmosphere interface, obtained from two different approaches at the national scale of Denmark. The first approach is based on a national water resources model (DK-model), using the MIKE-SHE model code, and the second approach utilizes a two-source energy balance model (TSEB) driven mainly by satellite remote sensing data. Ideally, the hydrological model simulation and remote-sensing-based approach should present similar spatial patterns and driving mechanisms of ET. However, the spatial comparison showed that the differences are significant and indicate insufficient spatial pattern performance of the hydrological model. The differences in spatial patterns can partly be explained by the fact that the hydrological model is configured to run in six domains that are calibrated independently from each other, as it is often the case for large-scale multi-basin calibrations. Furthermore, the model incorporates predefined temporal dynamics of leaf area index (LAI), root depth (RD) and crop coefficient (Kc) for each land cover type. This zonal approach of model parameterization ignores the spatiotemporal complexity of the natural system. To overcome this limitation, this study features a modified version of the DK-model in which LAI, RD and Kc are empirically derived using remote sensing data and detailed soil property maps in order to generate a higher degree of spatiotemporal variability and spatial consistency between the six domains. The effects of these changes are analyzed by using empirical orthogonal function (EOF) analysis to evaluate spatial patterns. The EOF analysis shows that including remote-sensing-derived LAI, RD and Kc in the distributed hydrological model adds
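
    The EOF analysis used here for spatial-pattern evaluation has a compact SVD formulation. The sketch below extracts the leading spatial patterns, their principal-component time series and explained-variance fractions from a (time, space) anomaly matrix, assuming the ET maps have been flattened to one row per time step beforehand.

    import numpy as np

    def eof_analysis(field, n_modes=3):
        # EOF analysis of a (time, space) field: subtract the time mean and
        # take the SVD; rows of Vt are spatial patterns (EOFs), U*s the PCs.
        anomalies = field - field.mean(axis=0, keepdims=True)
        U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
        variance_fraction = s**2 / np.sum(s**2)
        return Vt[:n_modes], (U * s)[:, :n_modes], variance_fraction[:n_modes]

    rng = np.random.default_rng(6)
    field = rng.normal(size=(120, 500))  # e.g. monthly ET maps, flattened
    eofs, pcs, varfrac = eof_analysis(field)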

  1. Application of various FLD modelling approaches

    Science.gov (United States)

    Banabic, D.; Aretz, H.; Paraianu, L.; Jurco, P.

    2005-07-01

    This paper focuses on a comparison between different modelling approaches to predict the forming limit diagram (FLD) for sheet metal forming under a linear strain path using the recently introduced orthotropic yield criterion BBC2003 (Banabic D et al 2005 Int. J. Plasticity 21 493-512). The FLD models considered here are a finite element based approach, the well known Marciniak-Kuczynski model, the modified maximum force criterion according to Hora et al (1996 Proc. Numisheet'96 Conf. (Dearborn/Michigan) pp 252-6), Swift's diffuse (Swift H W 1952 J. Mech. Phys. Solids 1 1-18) and Hill's classical localized necking approach (Hill R 1952 J. Mech. Phys. Solids 1 19-30). The FLD of an AA5182-O aluminium sheet alloy has been determined experimentally in order to quantify the predictive capabilities of the models mentioned above.
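
    For orientation, two of the classical criteria compared in the paper have closed forms for a power-law hardening material (sigma = K*eps^n) under proportional loading with strain ratio rho = eps2/eps1. They are quoted below in their common textbook von Mises form, not in the BBC2003-based version the paper actually evaluates, so treat the formulas as a hedged sketch.

    import numpy as np

    def hill_localized(n, rho):
        # Hill (1952) localized necking limit strain, left side of the FLD
        # (rho <= 0): eps1* = n / (1 + rho).
        return n / (1.0 + rho)

    def swift_diffuse(n, rho):
        # Swift (1952) diffuse necking limit strain for a von Mises material.
        return (2.0 * n * (1.0 + rho + rho**2)
                / ((1.0 + rho) * (2.0 * rho**2 - rho + 2.0)))

    rho = np.linspace(-0.5, 0.0, 6)
    n = 0.22  # illustrative hardening exponent
    print(hill_localized(n, rho))
    print(swift_diffuse(n, rho))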

  2. Self-similar pattern formation and continuous mechanics of self-similar systems

    Directory of Open Access Journals (Sweden)

    A. V. Dyskin

    2007-01-01

    Full Text Available In many cases, the critical state of systems that reached the threshold is characterised by self-similar pattern formation. We produce an example of pattern formation of this kind – the formation of a self-similar distribution of interacting fractures. Their formation starts with crack growth due to the action of stress fluctuations. It is shown that even when the fluctuations have zero average, the cracks generated by them can grow far beyond the scale of the stress fluctuations. Further development of the fracture system is controlled by crack interaction, leading to the emergence of self-similar crack distributions. As a result, the medium with fractures becomes discontinuous at any scale. We develop a continuum fractal mechanics to model its physical behaviour. We introduce a continuous sequence of continua of increasing scales covering this range of scales. The continuum of each scale is specified by the representative averaging volume elements of the corresponding size. These elements determine the resolution of the continuum. Each continuum hides the cracks of scales smaller than the volume element size, while larger fractures are modelled explicitly. Using the developed formalism we investigate the stability of self-similar crack distributions with respect to crack growth and show that while the self-similar distribution of isotropically oriented cracks is stable, the distribution of parallel cracks is not. For the isotropically oriented cracks, scaling of permeability is determined. For permeable materials (rocks) with self-similar crack distributions, permeability scales as the cube of the crack radius. This property could be used for detecting this specific mechanism of formation of self-similar crack distributions.

  3. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

    SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we…

  4. Technical note: Comparison of methane ebullition modelling approaches used in terrestrial wetland models

    Science.gov (United States)

    Peltola, Olli; Raivonen, Maarit; Li, Xuefei; Vesala, Timo

    2018-02-01

    Emission via bubbling, i.e. ebullition, is one of the main methane (CH4) emission pathways from wetlands to the atmosphere. Direct measurement of gas bubble formation, growth and release in the peat-water matrix is challenging, and in consequence these processes are relatively unknown and are coarsely represented in current wetland CH4 emission models. In this study we aimed to evaluate three ebullition modelling approaches and their effect on model performance. This was achieved by implementing the three approaches in one process-based CH4 emission model. All the approaches were based on some kind of threshold: either a CH4 pore water concentration (ECT), pressure (EPT) or free-phase gas volume (EBG) threshold. The model was run using 4 years of data from a boreal sedge fen and the results were compared with eddy covariance measurements of CH4 fluxes. Modelled annual CH4 emissions were largely unaffected by the different ebullition modelling approaches; however, temporal variability in CH4 emissions varied by an order of magnitude between the approaches. Hence the ebullition modelling approach drives the temporal variability in modelled CH4 emissions and therefore significantly impacts, for instance, high-frequency (daily scale) model comparison and calibration against measurements. The modelling approach based on the most recent knowledge of the ebullition process (volume threshold, EBG) agreed best with the measured fluxes (R2 = 0.63) and hence produced the most reasonable results, although there was a scale mismatch between the measurements (ecosystem scale with heterogeneous ebullition locations) and the model results (single horizontally homogeneous peat column). This approach should be favoured over the two other, more widely used ebullition modelling approaches, and researchers are encouraged to implement it in their CH4 emission models.
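
    As a reading aid, the following sketch contrasts the three threshold triggers (ECT, EPT, EBG) named in the abstract. All state variables, units and threshold values are illustrative assumptions, not the calibrated model of the study.

        # Minimal sketch of three threshold-type ebullition triggers:
        # pore-water CH4 concentration (ECT), pressure (EPT) and free-phase
        # gas volume (EBG). Thresholds below are assumed placeholder values.
        def ebullition_release(conc, pressure, gas_volume,
                               conc_max=500.0,    # ECT threshold, umol L^-1 (assumed)
                               press_max=105.0,   # EPT threshold, kPa (assumed)
                               vol_max=0.05,      # EBG threshold, volume fraction (assumed)
                               scheme="EBG"):
            """Return the excess released as bubbles under one threshold scheme."""
            if scheme == "ECT":
                return max(0.0, conc - conc_max)
            if scheme == "EPT":
                return max(0.0, pressure - press_max)
            if scheme == "EBG":
                return max(0.0, gas_volume - vol_max)
            raise ValueError(f"unknown scheme: {scheme}")

        print(ebullition_release(620.0, 101.3, 0.08, scheme="EBG"))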

  5. Development of CANDU prototype fuel handling simulator - concept and some simulation results with physical network modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Xu, X.P. [Candu Energy Inc, Mississauga, Ontario (Canada)

    2012-07-01

    This paper reviewed the need for a fuel handling (FH) simulator in training operators and maintenance personnel, in FH design enhancement based on operating experience (OPEX), and the potential application of Virtual Reality (VR) based simulation technology. Modeling and simulation of the fuelling machine (FM) magazine drive plant (one of the CANDU FH sub-systems) was described. The work established the feasibility of modeling and simulating a physical FH drive system using the physical network approach and computer software tools. The concept and approach can be applied similarly to create the other FH subsystem plant models, which are expected to be integrated with control modules to develop a master FH control model and further to create a virtual FH system. (author)

  6. Development of CANDU prototype fuel handling simulator - concept and some simulation results with physical network modeling approach

    International Nuclear Information System (INIS)

    Xu, X.P.

    2012-01-01

    This paper reviewed the need for a fuel handling (FH) simulator in training operators and maintenance personnel, in FH design enhancement based on operating experience (OPEX), and the potential application of Virtual Reality (VR) based simulation technology. Modeling and simulation of the fuelling machine (FM) magazine drive plant (one of the CANDU FH sub-systems) was described. The work established the feasibility of modeling and simulating a physical FH drive system using the physical network approach and computer software tools. The concept and approach can be applied similarly to create the other FH subsystem plant models, which are expected to be integrated with control modules to develop a master FH control model and further to create a virtual FH system. (author)

  7. System Behavior Models: A Survey of Approaches

    Science.gov (United States)

    2016-06-01

    A spiral model (Figure 1) was chosen for researching and structuring this thesis. This approach allowed multiple iterations of source material … applications and refining through iteration. Scope: the research is limited to a literature review …

  8. Similarity search of business process models

    NARCIS (Netherlands)

    Dumas, M.; García-Bañuelos, L.; Dijkman, R.M.

    2009-01-01

    Similarity search is a general class of problems in which a given object, called a query object, is compared against a collection of objects in order to retrieve those that most closely resemble the query object. This paper reviews recent work on an instance of this class of problems, where the

  9. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

    Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for lack of theoretical grounding, methodological rigor, empirical validation, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing these criticisms by incorporating recent developments in configuration theory, in particular the application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and to empirically demonstrate equifinal paths to maturity. Specifically, … methodological guidelines consisting of detailed procedures to systematically apply set-theoretic approaches for maturity model research, and provides demonstrations of their application on three datasets. The thesis is a collection of six research papers written in a sequential manner. The first paper …

  10. Matrix approach to the Shapley value and dual similar associated consistency

    NARCIS (Netherlands)

    Xu, G.; Driessen, Theo

    Replacing associated consistency in Hamiache's axiom system by dual similar associated consistency, we axiomatize the Shapley value as the unique value verifying the inessential game property, continuity and dual similar associated consistency. Continuing the matrix analysis for Hamiache's

  11. A self-similar model for conduction in the plasma erosion opening switch

    International Nuclear Information System (INIS)

    Mosher, D.; Grossmann, J.M.; Ottinger, P.F.; Colombant, D.G.

    1987-01-01

    The conduction phase of the plasma erosion opening switch (PEOS) is characterized by combining a 1-D fluid model for plasma hydrodynamics, Maxwell's equations, and a 2-D electron-orbit analysis. A self-similar approximation for the plasma and field variables permits analytic expressions for their space and time variations to be derived. It is shown that a combination of axial MHD compression and magnetic insulation of high-energy electrons emitted from the switch cathode can control the character of switch conduction. The analysis highlights the need to include additional phenomena for accurate fluid modeling of PEOS conduction

  12. Challenges and opportunities for integrating lake ecosystem modelling approaches

    Science.gov (United States)

    Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.

    2010-01-01

    A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative

  13. Models of galaxies - The modal approach

    International Nuclear Information System (INIS)

    Lin, C.C.; Lowe, S.A.

    1990-01-01

    The general viability of the modal approach to the spiral structure in normal spirals and the barlike structure in certain barred spirals is discussed. The usefulness of the modal approach in the construction of models of such galaxies is examined, emphasizing the adoption of a model appropriate to observational data for both the spiral structure of a galaxy and its basic mass distribution. 44 refs

  14. Self-similar solution for coupled thermal electromagnetic model ...

    African Journals Online (AJOL)

    An investigation into the existence and uniqueness of the self-similar solution for the coupled Maxwell and Pennes bio-heat equations has been carried out. Criteria for the existence and uniqueness of the self-similar solution are given in the consequent theorems. Journal of the Nigerian Association of Mathematical Physics ...

  15. A Novel Relevance Feedback Approach Based on Similarity Measure Modification in an X-Ray Image Retrieval System Based on Fuzzy Representation Using Fuzzy Attributed Relational Graph

    Directory of Open Access Journals (Sweden)

    Hossien Pourghassem

    2011-04-01

    Relevance feedback approaches are used to improve the performance of content-based image retrieval systems. In this paper, a novel relevance feedback approach based on similarity measure modification in an X-ray image retrieval system based on fuzzy representation using a fuzzy attributed relational graph (FARG) is presented. In this approach, the optimum weight of each feature in the feature vector is calculated using the similarity rate between the query image and the relevant and irrelevant images in the user feedback. The calculated weight is used to tune the fuzzy graph matching algorithm as a modifier parameter in the similarity measure. The standard deviation of the retrieved image features is applied to calculate the optimum weight. The proposed image retrieval system uses a FARG for the representation of images, a fuzzy graph matching algorithm as the similarity measure and a semantic classifier based on a merging scheme for determination of the search space in the image database. To evaluate the relevance feedback approach in the proposed system, a standard X-ray image database consisting of 10000 images in 57 classes is used. The improvement of the evaluation parameters shows the proficiency and efficiency of the proposed system.
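
    The weighting idea in this record (features that vary little across the user-marked relevant images are discriminative, so they receive higher weight) can be sketched generically as follows; the array shapes and the epsilon guard are assumptions, and the paper's fuzzy graph matching similarity is not reproduced here.

        # Standard-deviation-based relevance feedback weighting: low spread
        # across relevant images -> discriminative feature -> high weight.
        import numpy as np

        def feedback_weights(relevant_feats, eps=1e-6):
            """relevant_feats: (n_images, n_features) matrix of relevant hits."""
            sigma = relevant_feats.std(axis=0)
            w = 1.0 / (sigma + eps)
            return w / w.sum()               # normalized feature weights

        def weighted_similarity(query, candidate, w):
            return 1.0 / (1.0 + np.sum(w * (query - candidate) ** 2))

        rng = np.random.default_rng(0)
        rel = rng.normal(0.5, [0.01, 0.3, 0.1], size=(8, 3))  # 8 relevant images, 3 features
        w = feedback_weights(rel)
        print(w, weighted_similarity(rel.mean(0), rng.normal(0.5, 0.2, 3), w))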

  16. Quasi-Similarity Model of Synthetic Jets

    Czech Academy of Sciences Publication Activity Database

    Tesař, Václav; Kordík, Jozef

    2009-01-01

    Vol. 149, No. 2 (2009), pp. 255-265. ISSN 0924-4247. R&D Projects: GA AV ČR IAA200760705; GA ČR GA101/07/1499. Institutional research plan: CEZ:AV0Z20760514. Keywords: jets * synthetic jets * similarity solution. Subject RIV: BK - Fluid Dynamics. Impact factor: 1.674, year: 2009. http://www.sciencedirect.com

  17. Evaporator modeling - A hybrid approach

    International Nuclear Information System (INIS)

    Ding Xudong; Cai Wenjian; Jia Lei; Wen Changyun

    2009-01-01

    In this paper, a hybrid modeling approach is proposed to model two-phase flow evaporators. The main procedures for hybrid modeling include: (1) formulating the fundamental governing equations of the process from energy and material balances and thermodynamic principles; (2) selecting input/output (I/O) variables responsible for the system performance which can be measured and controlled; (3) representing those variables that exist in the original equations but are not measurable as simple functions of the selected I/Os or as constants; (4) obtaining a single equation which correlates system inputs and outputs; and (5) identifying unknown parameters by linear or nonlinear least-squares methods. The method takes advantage of both physical and empirical modeling approaches and can accurately predict performance over a wide operating range and in real time, which can significantly reduce the computational burden and increase the prediction accuracy. The model is verified with experimental data taken from a testing system. The testing results show that the proposed model can accurately predict the performance of the real-time operating evaporator with a maximum error of ±8%. The developed models will have wide applications in operational optimization, performance assessment, fault detection and diagnosis
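
    Step (5) of the procedure, identifying the remaining lumped parameters by least squares once a single input/output correlation is in hand, can be illustrated with a generic sketch; the model form and the synthetic data below are assumptions for illustration only.

        # Least-squares identification of lumped parameters in an assumed
        # input/output correlation y = a*x1 + b*x2**c, using scipy.
        import numpy as np
        from scipy.optimize import curve_fit

        def io_model(X, a, b, c):
            x1, x2 = X
            return a * x1 + b * x2**c

        rng = np.random.default_rng(1)
        x1, x2 = rng.uniform(1, 10, 50), rng.uniform(1, 5, 50)
        y = io_model((x1, x2), 2.0, 0.7, 1.5) + rng.normal(0, 0.1, 50)  # noisy "measurements"

        params, _ = curve_fit(io_model, (x1, x2), y, p0=[1.0, 1.0, 1.0])
        pred = io_model((x1, x2), *params)
        print("identified:", params, "max rel. error:", np.max(np.abs(pred - y) / y))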

  18. Identifying western yellow-billed cuckoo breeding habitat with a dual modelling approach

    Science.gov (United States)

    Johnson, Matthew J.; Hatten, James R.; Holmes, Jennifer A.; Shafroth, Patrick B.

    2017-01-01

    The western population of the yellow-billed cuckoo (Coccyzus americanus) was recently listed as threatened under the federal Endangered Species Act. Yellow-billed cuckoo conservation efforts require the identification of features and area requirements associated with high quality, riparian forest habitat at spatial scales that range from nest microhabitat to landscape, as well as lower-suitability areas that can be enhanced or restored. Spatially explicit models inform conservation efforts by increasing ecological understanding of a target species, especially at landscape scales. Previous yellow-billed cuckoo modelling efforts derived plant-community maps from aerial photography, an expensive and oftentimes inconsistent approach. Satellite models can remotely map vegetation features (e.g., vegetation density, heterogeneity in vegetation density or structure) across large areas with near perfect repeatability, but they usually cannot identify plant communities. We used aerial photos and satellite imagery, and a hierarchical spatial scale approach, to identify yellow-billed cuckoo breeding habitat along the Lower Colorado River and its tributaries. Aerial-photo and satellite models identified several key features associated with yellow-billed cuckoo breeding locations: (1) a 4.5 ha core area of dense cottonwood-willow vegetation, (2) a large native, heterogeneously dense forest (72 ha) around the core area, and (3) moderately rough topography. The odds of yellow-billed cuckoo occurrence decreased rapidly as the amount of tamarisk cover increased or when cottonwood-willow vegetation was limited. We achieved model accuracies of 75–80% in the project area the following year after updating the imagery and location data. The two model types had very similar probability maps, largely predicting the same areas as high quality habitat. While each model provided unique information, a dual-modelling approach provided a more complete picture of yellow-billed cuckoo habitat

  19. Stereotype content model across cultures: Towards universal similarities and some differences

    Science.gov (United States)

    Cuddy, Amy J. C.; Fiske, Susan T.; Kwan, Virginia S. Y.; Glick, Peter; Demoulin, Stéphanie; Leyens, Jacques-Philippe; Bond, Michael Harris; Croizet, Jean-Claude; Ellemers, Naomi; Sleebos, Ed; Htun, Tin Tin; Kim, Hyun-Jeong; Maio, Greg; Perry, Judi; Petkova, Kristina; Todorov, Valery; Rodríguez-Bailón, Rosa; Morales, Elena; Moya, Miguel; Palacios, Marisol; Smith, Vanessa; Perez, Rolando; Vala, Jorge; Ziegler, Rene

    2014-01-01

    The stereotype content model (SCM) proposes potentially universal principles of societal stereotypes and their relation to social structure. Here, the SCM reveals theoretically grounded, cross-cultural, cross-group similarities and one difference across 10 non-US nations. Seven European (individualist) and three East Asian (collectivist) nations (N = 1,028) support three hypothesized cross-cultural similarities: (a) perceived warmth and competence reliably differentiate societal group stereotypes; (b) many out-groups receive ambivalent stereotypes (high on one dimension, low on the other); and (c) high-status groups are stereotypically competent, whereas competitive groups stereotypically lack warmth. The data uncover one consequential cross-cultural difference: (d) the more collectivist cultures do not locate reference groups (in-groups and societal prototype groups) in the most positive cluster (high competence/high warmth), unlike individualist cultures. This demonstrates out-group derogation without obvious reference-group favouritism. The SCM can serve as a pancultural tool for predicting group stereotypes from structural relations with other groups in society, and for comparing across societies. PMID:19178758

  20. Conservation of connectivity of model-space effective interactions under a class of similarity transformation

    International Nuclear Information System (INIS)

    Duan Changkui; Gong Yungui; Dong Huining; Reid, Michael F.

    2004-01-01

    Effective interaction operators usually act on a restricted model space and give the same energies (for the Hamiltonian) and matrix elements (for transition operators, etc.) as those of the original operators between the corresponding true eigenstates. Various types of effective operators are possible. Well-defined effective operators have been shown to be related to each other by similarity transformation. Some of the effective operators have been shown to have connected-diagram expansions. It is shown in this paper that under a class of very general similarity transformations, the connectivity is conserved. The similarity transformation between Hermitian and non-Hermitian Rayleigh-Schrödinger perturbative effective operators is one such transformation, and hence the connectivity of one can be deduced from the other

  1. Conservation of connectivity of model-space effective interactions under a class of similarity transformation.

    Science.gov (United States)

    Duan, Chang-Kui; Gong, Yungui; Dong, Hui-Ning; Reid, Michael F

    2004-09-15

    Effective interaction operators usually act on a restricted model space and give the same energies (for the Hamiltonian) and matrix elements (for transition operators, etc.) as those of the original operators between the corresponding true eigenstates. Various types of effective operators are possible. Well-defined effective operators have been shown to be related to each other by similarity transformation. Some of the effective operators have been shown to have connected-diagram expansions. It is shown in this paper that under a class of very general similarity transformations, the connectivity is conserved. The similarity transformation between Hermitian and non-Hermitian Rayleigh-Schrödinger perturbative effective operators is one such transformation, and hence the connectivity of one can be deduced from the other.
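
    A hedged notation sketch of the central object in the two records above: effective operators related by an invertible similarity transformation on the model space reproduce the same exact spectrum. The symbols below are generic, not the paper's own derivation.

        % Generic notation sketch: similarity-transformed effective operators
        % share the exact eigenvalues; the paper's result is that connectedness
        % of the diagrammatic expansion is preserved for a class of such S.
        \documentclass{article}
        \usepackage{amsmath}
        \begin{document}
        Let $H_{\mathrm{eff}}$ act on the model space with
        $H_{\mathrm{eff}}\lvert\psi_i\rangle = E_i\lvert\psi_i\rangle$, where the
        $E_i$ are exact eigenvalues. For any invertible $S$ on the model space,
        \begin{equation}
          H'_{\mathrm{eff}} = S H_{\mathrm{eff}} S^{-1}, \qquad
          O'_{\mathrm{eff}} = S O_{\mathrm{eff}} S^{-1},
        \end{equation}
        is again a valid effective-operator pair, since
        $H'_{\mathrm{eff}}\,S\lvert\psi_i\rangle = E_i\,S\lvert\psi_i\rangle$.
        \end{document}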

  2. Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling

    OpenAIRE

    Duong, Chi Nhan; Luu, Khoa; Quach, Kha Gia; Bui, Tien D.

    2016-01-01

    The "interpretation through synthesis" approach to analyzing face images, particularly the Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAM models have the ability to represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the synthesized faces of AAM depend highly on the training sets and inherently on the genera...

  3. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    Buslik, A.

    1994-01-01

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given
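
    For the finite-model case the note describes, model uncertainty reduces to a posterior distribution over a model index; a minimal sketch, with invented priors, likelihoods and per-model predictions, is:

        # Bayesian treatment of a discrete model set: posterior model weights
        # and a model-averaged prediction. All numbers are illustrative.
        import numpy as np

        priors = np.array([0.5, 0.3, 0.2])            # P(M_k), assumed
        likelihoods = np.array([0.02, 0.10, 0.01])    # P(data | M_k), assumed

        posterior = priors * likelihoods
        posterior /= posterior.sum()                  # P(M_k | data)

        failure_prob = np.array([1e-4, 5e-4, 1e-3])   # quantity predicted by each model
        print("posterior:", posterior)
        print("model-averaged estimate:", posterior @ failure_prob)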

  4. QSAR and docking studies of anthraquinone derivatives by similarity cluster prediction.

    Science.gov (United States)

    Harsa, Alexandra M; Harsa, Teodora E; Diudea, Mircea V

    2016-01-01

    Forty anthraquinone derivatives have been downloaded from the PubChem database and investigated in a quantitative structure-activity relationships (QSAR) study. The models describing log P and LD50 of this set were built on the hypermolecule scheme that mimics the investigated receptor space; the models were validated by the leave-one-out procedure, on an external test set, and by a new version of prediction using similarity clusters. A molecular docking study using the Lamarckian Genetic Algorithm was performed on this class of anthraquinones with respect to the 3Q3B receptor. The best-scored molecules in the docking assay were used as leaders in the similarity clustering procedure. It is demonstrated that the LD50 data of this set of anthraquinones are related to the binding energies of anthraquinone ligands to the 3Q3B receptor.
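
    The leave-one-out validation mentioned in the record can be sketched generically; the two-descriptor linear model and the random data below are assumptions standing in for the hypermolecule descriptors.

        # Leave-one-out (LOO) cross-validation of a simple QSAR-style model,
        # reporting the cross-validated Q^2 statistic.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        rng = np.random.default_rng(2)
        X = rng.normal(size=(40, 2))                                 # 40 molecules, 2 descriptors
        y = 1.2 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 0.2, 40)   # e.g. log P (synthetic)

        y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
        q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
        print(f"Q^2(LOO) = {q2:.3f}")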

  5. Applications of Analytical Self-Similar Solutions of Reynolds-Averaged Models for Instability-Induced Turbulent Mixing

    Science.gov (United States)

    Hartland, Tucker; Schilling, Oleg

    2017-11-01

    Analytical self-similar solutions to several families of single- and two-scale, eddy viscosity and Reynolds stress turbulence models are presented for Rayleigh-Taylor, Richtmyer-Meshkov, and Kelvin-Helmholtz instability-induced turbulent mixing. The use of algebraic relationships between model coefficients and physical observables (e.g., experimental growth rates) following from the self-similar solutions to calibrate a member of a given family of turbulence models is shown. It is demonstrated numerically that the algebraic relations accurately predict the value and variation of physical outputs of a Reynolds-averaged simulation in flow regimes that are consistent with the simplifying assumptions used to derive the solutions. The use of experimental and numerical simulation data on Reynolds stress anisotropy ratios to calibrate a Reynolds stress model is briefly illustrated. The implications of the analytical solutions for future Reynolds-averaged modeling of hydrodynamic instability-induced mixing are briefly discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  6. Hybrid modeling approach to improve the forecasting capability for the gaseous radionuclide in a nuclear site

    International Nuclear Information System (INIS)

    Jeong, Hyojoon; Hwang, Wontae; Kim, Eunhan; Han, Moonhee

    2012-01-01

    Highlights: ► This study aims to improve the reliability of air dispersion modeling. ► Tracer experiments assuming gaseous radionuclides were conducted at a nuclear site. ► The performance of a hybrid model combining ISC with ANFIS was investigated. ► The hybrid modeling approach shows better performance than a single ISC model. - Abstract: Predicted air concentrations of radioactive materials are important for an environmental impact assessment for public health. In this study, the performance of a hybrid model combining the industrial source complex (ISC) model and an adaptive neuro-fuzzy inference system (ANFIS) for predicting tracer concentrations was investigated. Tracer dispersion experiments were performed to produce field data, assuming the accidental release of radioactive material. ANFIS was trained so that the outputs of the ISC model became similar to the measured data. Judging from the higher correlation coefficients between the measured and calculated values, the hybrid modeling approach could be an appropriate technique for improving the modeling capability to predict air concentrations of radioactive materials.

  7. Toward Better Mapping between Regulations and Operations of Enterprises Using Vocabularies and Semantic Similarity

    Directory of Open Access Journals (Sweden)

    Sagar Sunkle

    2015-12-01

    Industry governance, risk, and compliance (GRC) solutions stand to gain from the various analyses offered by formal compliance checking approaches. Such adoption is made difficult by the fact that most formal approaches assume that a mapping between concepts of regulations and models of operational specifics exists. Industry solutions offer tagging mechanisms to map regulations to operational specifics; however, they are mostly semi-formal in nature and tend to rely extensively on experts. We propose to use Semantics of Business Vocabularies and Rules along with similarity measures to create an explicit mapping between concepts of regulations and models of operational specifics of the enterprise. We believe that our work-in-progress takes a step toward adapting and leveraging formal compliance checking approaches in industry GRC solutions.

  8. Ethnic differences in the effects of media on body image: the effects of priming with ethnically different or similar models.

    Science.gov (United States)

    Bruns, Gina L; Carter, Michele M

    2015-04-01

    Media exposure has been positively correlated with body dissatisfaction. While body image concerns are common, being African American has been found to be a protective factor in the development of body dissatisfaction. Participants viewed ten advertisements showing one of four conditions: (1) ethnically similar thin models; (2) ethnically different thin models; (3) ethnically similar plus-sized models; or (4) ethnically diverse plus-sized models. Following exposure, body image was measured. African American women had less body dissatisfaction than Caucasian women. Ethnically similar thin-model conditions did not elicit greater body dissatisfaction scores than ethnically different thin or plus-sized models, nor did the ethnicity of the model impact ratings of body dissatisfaction for women of either race. There were no differences among the African American women exposed to plus-sized versus thin models. Among Caucasian women, exposure to plus-sized models resulted in greater body dissatisfaction than exposure to thin models. Results support existing literature that African American women experience less body dissatisfaction than Caucasian women, even following exposure to an ethnically similar thin model. Additionally, women exposed to plus-sized model conditions experienced greater body dissatisfaction than those shown thin models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Cosmological model with anisotropic dark energy and self-similarity of the second kind

    International Nuclear Information System (INIS)

    Brandt, Carlos F. Charret; Silva, Maria de Fatima A. da; Rocha, Jaime F. Villas da; Chan, Roberto

    2006-01-01

    We study the evolution of an anisotropic fluid with self-similarity of the second kind. We found a class of solutions to the Einstein field equations by assuming an equation of state where the radial pressure of the fluid is proportional to its energy density (p_r = ωρ) and that the fluid moves along time-like geodesics. The equation of state and the anisotropy with self-similarity of the second kind imply ω = -1. The energy conditions and the geometrical and physical properties of the solutions are studied. We have found that for the parameter α = -1/2, the solution may represent a Big Rip cosmological model. (author)

  10. A Deep Similarity Metric Learning Model for Matching Text Chunks to Spatial Entities

    Science.gov (United States)

    Ma, K.; Wu, L.; Tao, L.; Li, W.; Xie, Z.

    2017-12-01

    The matching of spatial entities with related text is a long-standing research topic that has received considerable attention over the years. This task aims at enriching the content of a spatial entity and attaching spatial location information to a text chunk. In the data fusion field, matching spatial entities with the corresponding describing text chunks is of broad significance. However, most traditional matching methods rely fully on manually designed, task-specific linguistic features. This work proposes a Deep Similarity Metric Learning Model (DSMLM) based on a Siamese Neural Network to learn a similarity metric directly from the textual attributes of the spatial entity and the text chunk. The low-dimensional feature representations of the spatial entity and the text chunk can be learned separately. By employing the cosine distance to measure the matching degree between the vectors, the model makes matching pair vectors as close as possible. Meanwhile, it pushes mismatching pairs as far apart as possible through supervised learning. In addition, extensive experiments and analysis on geological survey data sets show that our DSMLM model can effectively capture the matching characteristics between the text chunk and the spatial entity, and achieves state-of-the-art performance.
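
    A hedged PyTorch sketch of the idea: a shared (Siamese) encoder maps both inputs to vectors, and a cosine-based contrastive loss pulls matching text-entity pairs together and pushes mismatches apart. The dimensions, encoder architecture and margin are assumptions; the paper's feature extraction is not reproduced.

        # Siamese encoder with a cosine-distance contrastive loss.
        import torch
        import torch.nn as nn

        encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, 64))

        def contrastive_loss(a, b, match, margin=0.4):
            """match = 1 for a true text-entity pair, 0 for a mismatched pair."""
            cos = nn.functional.cosine_similarity(encoder(a), encoder(b))
            dist = 1.0 - cos                                   # cosine distance
            return (match * dist**2 +
                    (1 - match) * torch.clamp(margin - dist, min=0.0)**2).mean()

        text_vec = torch.randn(8, 300)   # 8 text-chunk feature vectors (assumed)
        ent_vec = torch.randn(8, 300)    # 8 spatial-entity attribute vectors (assumed)
        labels = torch.randint(0, 2, (8,)).float()
        print(contrastive_loss(text_vec, ent_vec, labels))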

  11. Information filtering based on transferring similarity.

    Science.gov (United States)

    Sun, Duo; Zhou, Tao; Liu, Jian-Guo; Liu, Run-Ran; Jia, Chun-Xiao; Wang, Bing-Hong

    2009-07-01

    In this Brief Report, we propose an index of user similarity, namely, the transferring similarity, which involves all high-order similarities between users. Accordingly, we design a modified collaborative filtering algorithm, which provides remarkably more accurate predictions than standard collaborative filtering. More interestingly, we find that the algorithmic performance approaches its optimal value when the parameter contained in the definition of transferring similarity gets close to its critical value, before which the series expansion of transferring similarity is convergent and after which it is divergent. Our study is complementary to the one reported in [E. A. Leicht, P. Holme, and M. E. J. Newman, Phys. Rev. E 73, 026120 (2006)], and is relevant to the missing link prediction problem.
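
    In the spirit of this record, a transferring similarity can be sketched as the summed series of all orders of a base similarity matrix; the series converges while the parameter stays below the reciprocal of the spectral radius, matching the critical value the abstract describes. The base matrix here is random rather than derived from user-item data.

        # Transferring similarity as the convergent series sum_{k>=1} (lam*S0)^k,
        # evaluated in closed form as (I - lam*S0)^{-1} (lam*S0).
        import numpy as np

        rng = np.random.default_rng(3)
        S0 = rng.uniform(0, 1, (5, 5))
        S0 = (S0 + S0.T) / 2                                 # symmetric base similarity

        crit = 1.0 / np.max(np.abs(np.linalg.eigvals(S0)))   # critical parameter value
        lam = 0.95 * crit                                    # just inside convergence

        S = np.linalg.inv(np.eye(5) - lam * S0) @ (lam * S0)
        print("critical value:", crit, "\ntransferring similarity:\n", S.round(3))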

  12. Similar Symmetries: The Role of Wallpaper Groups in Perceptual Texture Similarity

    Directory of Open Access Journals (Sweden)

    Fraser Halley

    2011-05-01

    Periodic patterns and symmetries are striking visual properties that have been used decoratively around the world throughout human history. Periodic patterns can be mathematically classified into one of 17 different wallpaper groups, and while computational models have been developed which can extract an image's symmetry group, very little work has been done on how humans perceive these patterns. This study presents the results from a grouping experiment using stimuli from the different wallpaper groups. We find that while different images from the same wallpaper group are perceived as similar to one another, not all groups have the same degree of self-similarity. The similarity relationships between wallpaper groups appear to be dominated by rotations.

  13. Current approaches to gene regulatory network modelling

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2007-09-01

    Many different approaches have been developed to model and simulate gene regulatory networks. We propose the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we describe some examples for each of these categories. We study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding network dynamics, we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called the Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.

  14. Numerical modelling of carbonate platforms and reefs: approaches and opportunities

    Energy Technology Data Exchange (ETDEWEB)

    Dalmasso, H.; Montaggioni, L.F.; Floquet, M. [Universite de Provence, Marseille (France). Centre de Sedimentologie-Palaeontologie; Bosence, D. [Royal Holloway University of London, Egham (United Kingdom). Dept. of Geology

    2001-07-01

    This paper compares different computing procedures that have been utilized in simulating shallow-water carbonate platform development. Based on our geological knowledge we can usually give a rather accurate qualitative description of the mechanisms controlling geological phenomena. Further description requires the use of computer stratigraphic simulation models that allow quantitative evaluation and understanding of the complex interactions of sedimentary depositional carbonate systems. The roles of modelling include: (1) encouraging accuracy and precision in data collection and process interpretation (Watney et al., 1999); (2) providing a means to quantitatively test interpretations concerning the control of various mechanisms on producing sedimentary packages; (3) predicting or extrapolating results into areas of limited control; (4) gaining new insights regarding the interaction of parameters; (5) helping focus future studies to resolve specific problems. This paper addresses two main questions, namely: (1) What are the advantages and disadvantages of various types of models? (2) How well do models perform? In this paper we compare and discuss the application of six numerical models: CARBONATE (Bosence and Waltham, 1990), FUZZIM (Nordlund, 1999), CARBPLAT (Bosscher, 1992), DYNACARB (Li et al., 1993), PHIL (Bowman, 1997) and SEDPAK (Kendall et al., 1991). The comparison, testing and evaluation of these models allow one to gain a better knowledge and understanding of the controlling parameters of carbonate platform development, which are necessary for modelling. Evaluating numerical models, critically comparing results from models using different approaches, and pushing experimental tests to their limits provide an effective vehicle to improve and develop new numerical models. A main feature of this paper is to closely compare the performance of two numerical models: a forward model (CARBONATE) and a fuzzy logic model (FUZZIM). These two models use common

  15. A hybrid agent-based approach for modeling microbiological systems.

    Science.gov (United States)

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on the Multi-Agent approach often use directly translated, and quantitatively less precise, if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10^3 cells and 1.2x10^6 molecules. The model produces cell migration patterns that are comparable to laboratory observations.

  16. Parallel approach to identifying the well-test interpretation model using a neurocomputer

    Science.gov (United States)

    May, Edward A., Jr.; Dagli, Cihan H.

    1996-03-01

    The well test is one of the primary diagnostic and predictive tools used in the analysis of oil and gas wells. In these tests, a pressure recording device is placed in the well and the pressure response is recorded over time under controlled flow conditions. The interpreted results are indicators of the well's ability to flow and the damage done to the formation surrounding the wellbore during drilling and completion. The results are used for many purposes, including reservoir modeling (simulation) and economic forecasting. The first step in the analysis is the identification of the Well-Test Interpretation (WTI) model, which determines the appropriate solution method. Mis-identification of the WTI model occurs due to noise and non-ideal reservoir conditions. Previous studies have shown that a feed-forward neural network using the backpropagation algorithm can be used to identify the WTI model. One of the drawbacks to this approach is, however, training time, which can run into days of CPU time on personal computers. In this paper a similar neural network is applied using both a personal computer and a neurocomputer. Input data processing, network design, and performance are discussed and compared. The results show that the neurocomputer greatly eases the burden of training and allows the network to outperform a similar network running on a personal computer.

  17. The baryonic self similarity of dark matter

    International Nuclear Information System (INIS)

    Alard, C.

    2014-01-01

    Cosmological simulations indicate that dark matter halos have specific self-similar properties. However, the halo similarity is affected by the baryonic feedback. By using momentum-driven winds as a model to represent the baryon feedback, an equilibrium condition is derived which directly implies the emergence of a new type of similarity. The new self-similar solution has constant acceleration at a reference radius for both dark matter and baryons. This model receives strong support from observations of galaxies. The new self-similar properties imply that the total acceleration at larger distances is scale-free, the transition between the dark matter and baryon dominated regimes occurs at a constant acceleration, and the maximum amplitude of the velocity curve at larger distances is proportional to M^(1/4). These results demonstrate that this self-similar model is consistent with the basics of modified Newtonian dynamics (MOND) phenomenology. In agreement with the observations, the coincidence between the self-similar model and MOND breaks down at the scale of clusters of galaxies. Numerical experiments show that the behavior of the density near the origin is closely approximated by an Einasto profile.

  18. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
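
    The Pareto-dominance test at the heart of the approach is compact enough to sketch: an input set is kept if no other set fits every calibration target at least as well and at least one target strictly better. The target errors below are synthetic, and errors are treated as "lower is better".

        # Pareto-frontier filtering of candidate calibration input sets.
        import numpy as np

        def pareto_frontier(errors):
            """errors: (n_sets, n_targets) goodness-of-fit errors per input set."""
            keep = []
            for i, e in enumerate(errors):
                dominated = any(np.all(o <= e) and np.any(o < e)
                                for j, o in enumerate(errors) if j != i)
                if not dominated:
                    keep.append(i)
            return keep

        rng = np.random.default_rng(4)
        errs = rng.uniform(size=(200, 3))     # 200 candidate input sets, 3 targets
        front = pareto_frontier(errs)
        print(f"{len(front)} of {len(errs)} input sets are Pareto-optimal")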

  19. A neural approach for the numerical modeling of two-dimensional magnetic hysteresis

    International Nuclear Information System (INIS)

    Cardelli, E.; Faba, A.; Laudani, A.; Riganti Fulginei, F.; Salvini, A.

    2015-01-01

    This paper deals with a neural network approach to modeling magnetic hysteresis at the macro-magnetic scale. Such an approach seems promising for coupling the numerical treatment of magnetic hysteresis to FEM numerical solvers of Maxwell's equations in the time domain, as in the case of the non-linear dynamic analysis of electrical machines and other similar devices, making a full computer simulation possible in a reasonable time. The proposed neural system consists of four inputs representing the magnetic field and magnetic induction components at each time step, and it is trained with 2-D measurements performed on the magnetic material to be modeled. The magnetic induction B is taken as the entry point, and the output of the neural system returns the predicted value of the field H at the same time step. A suitable partitioning of the neural system, described in the paper, makes the computing process rather fast. Validations with experimental tests and simulations for non-symmetric and minor loops are presented

  20. Similarity of trajectories taking into account geographic context

    Directory of Open Access Journals (Sweden)

    Maike Buchin

    2014-12-01

    The movements of animals, people, and vehicles are embedded in a geographic context. This context influences the movement and may cause the formation of certain behavioral responses. Thus, it is essential to include context parameters in the study of movement and the development of movement pattern analytics. Advances in sensor technologies and positioning devices provide valuable data not only on moving agents but also on the circumstances embedding the movement in space and time. Developing knowledge discovery methods to investigate the relation between movement and its surrounding context is a major challenge in movement analysis today. In this paper we show how to integrate geographic context into the similarity analysis of movement data. For this, we discuss models for the geographic context of movement data. Based on this, we develop simple but efficient context-aware similarity measures for movement trajectories, which combine a spatial and a contextual distance. These are based on well-known similarity measures for trajectories, such as the Hausdorff, Fréchet, or equal-time distance. We validate our approach by applying these measures to movement data of hurricanes and albatrosses.
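
    A minimal sketch of a context-aware trajectory distance of the kind described: a weighted sum of a spatial distance (here the symmetric Hausdorff distance) and a distance between per-point context values. The weight alpha, the index-based alignment and the toy trajectories are assumptions, not the paper's measures.

        # Context-aware trajectory distance: alpha * spatial + (1-alpha) * context.
        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def context_aware_distance(t1, c1, t2, c2, alpha=0.5):
            """t*: (n, 2) positions; c*: (n,) context value sampled at each position."""
            d_spatial = max(directed_hausdorff(t1, t2)[0],
                            directed_hausdorff(t2, t1)[0])
            m = min(len(c1), len(c2))                 # crude alignment by index
            d_context = np.mean(np.abs(c1[:m] - c2[:m]))
            return alpha * d_spatial + (1 - alpha) * d_context

        t1 = np.array([[0, 0], [1, 1], [2, 2.5]], float)
        t2 = np.array([[0, 0.5], [1, 1.2], [2, 2.0]], float)
        c1, c2 = np.array([0.1, 0.4, 0.9]), np.array([0.2, 0.5, 0.7])
        print(context_aware_distance(t1, c1, t2, c2))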

  1. Service creation: a model-based approach

    NARCIS (Netherlands)

    Quartel, Dick; van Sinderen, Marten J.; Ferreira Pires, Luis

    1999-01-01

    This paper presents a model-based approach to support service creation. In this approach, services are assumed to be created from (available) software components. The creation process may involve multiple design steps in which the requested service is repeatedly decomposed into more detailed

  2. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches to, and the development and validation process of, risk prediction models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.

  3. From undefined red smear cheese consortia to minimal model communities both exhibiting similar anti-listerial activity on a cheese-like matrix.

    Science.gov (United States)

    Imran, M; Desmasures, N; Vernoux, J-P

    2010-12-01

    Starting from one undefined cheese smear consortium exhibiting anti-listerial activity (signal) at 15 °C, 50 yeasts and 39 bacteria were identified by partial rDNA sequencing. Construction of microbial communities was done either by addition or by erosion approach with the aim to obtain minimal communities having similar signal to that of the initial smear. The signal of these microbial communities was monitored in cheese microcosm for 14 days under ripening conditions. In the addition scheme, strains having significant signals were mixed step by step. Five-member communities, obtained by addition of a Gram negative bacterium to two yeasts and two Gram positive bacteria, enhanced the signal dramatically contrary to six-member communities including two Gram negative bacteria. In the erosion approach, a progressive reduction of 89 initial strains was performed. While intermediate communities (89, 44 and 22 members) exhibited a lower signal than initial smear consortium, eleven- and six-member communities gave a signal almost as efficient. It was noteworthy that the final minimal model communities obtained by erosion and addition approaches both had anti-listerial activity while consisting of different strains. In conclusion, some minimal model communities can have higher anti-listerial effectiveness than individual strains or the initial 89 micro-organisms from smear. Thus, microbial interactions are involved in the production and modulation of anti-listerial signals in cheese surface communities. Copyright © 2010 Elsevier Ltd. All rights reserved.

  4. Assessing the polycyclic aromatic hydrocarbon (PAH) pollution of urban stormwater runoff: a dynamic modeling approach.

    Science.gov (United States)

    Zheng, Yi; Lin, Zhongrong; Li, Hao; Ge, Yan; Zhang, Wei; Ye, Youbin; Wang, Xuejun

    2014-05-15

    Urban stormwater runoff delivers a significant amount of polycyclic aromatic hydrocarbons (PAHs), mostly of atmospheric origin, to receiving water bodies. The PAH pollution of urban stormwater runoff poses a serious risk to aquatic life and human health, but has been overlooked by environmental modeling and management. This study proposed a dynamic modeling approach for assessing the PAH pollution and its associated environmental risk. A variable time-step model was developed to simulate the continuous cycles of pollutant buildup and washoff. To reflect the complex interaction among different environmental media (i.e. atmosphere, dust and stormwater), the dependence of the pollution level on antecedent weather conditions was investigated and embodied in the model. Long-term simulations of the model can be performed efficiently, and probabilistic features of the pollution level and its risk can be easily determined. The applicability of this approach and its value to environmental management were demonstrated by a case study in Beijing, China. The results showed that Beijing's PAH pollution of road runoff is relatively severe, and its associated risk exhibits notable seasonal variation. The current sweeping practice is effective in mitigating the pollution, but the effectiveness is both weather-dependent and compound-dependent. The proposed modeling approach can help identify critical timing and major pollutants for monitoring, assessment and control efforts to focus on. The approach is extendable to other urban areas, as well as to other contaminants with fate and transport similar to PAHs. Copyright © 2014 Elsevier B.V. All rights reserved.
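
    The buildup/washoff cycle the model is organized around can be sketched with the textbook exponential forms; all coefficients below are illustrative assumptions, not the calibrated Beijing values.

        # Exponential-asymptotic buildup during dry weather, first-order
        # washoff during runoff events; one value per (dry days, runoff) event.
        import numpy as np

        def buildup(b0, dry_days, b_max=100.0, k_b=0.4):
            """Pollutant mass per unit area after a dry period."""
            return b_max - (b_max - b0) * np.exp(-k_b * dry_days)

        def washoff(b, runoff_mm, k_w=0.05):
            """First-order washoff; returns (mass washed off, mass remaining)."""
            washed = b * (1.0 - np.exp(-k_w * runoff_mm))
            return washed, b - washed

        b = 10.0
        for dry, rain in [(3, 8.0), (7, 15.0), (1, 2.0)]:   # (dry days, runoff in mm)
            b = buildup(b, dry)
            load, b = washoff(b, rain)
            print(f"event: washoff load = {load:.1f}, residual = {b:.1f}")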

  5. Scaling and interaction of self-similar modes in models of high Reynolds number wall turbulence.

    Science.gov (United States)

    Sharma, A S; Moarref, R; McKeon, B J

    2017-03-13

    Previous work has established the usefulness of the resolvent operator that maps the terms nonlinear in the turbulent fluctuations to the fluctuations themselves. Further work has described the self-similarity of the resolvent arising from that of the mean velocity profile. The orthogonal modes provided by the resolvent analysis describe the wall-normal coherence of the motions and inherit that self-similarity. In this contribution, we present the implications of this similarity for the nonlinear interaction between modes with different scales and wall-normal locations. By considering the nonlinear interactions between modes, it is shown that much of the turbulence scaling behaviour in the logarithmic region can be determined from a single arbitrarily chosen reference plane. Thus, the geometric scaling of the modes is impressed upon the nonlinear interaction between modes. Implications of these observations for the self-sustaining mechanisms of wall turbulence, modelling and simulation are outlined. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  6. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study the biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
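
    To make the local/global distinction of this review concrete, the sketch below applies a one-at-a-time finite-difference sensitivity and a crude Monte Carlo, correlation-based global measure to a toy two-parameter model; both the model and the parameter ranges are invented for illustration.

        # Local (normalized finite-difference) vs. crude global (correlation)
        # sensitivities for a toy two-parameter model.
        import numpy as np

        def model(p):
            k1, k2 = p
            return k1 / (k1 + k2)        # toy steady-state output

        p0 = np.array([1.0, 0.5])

        # Local: normalized sensitivities around the nominal point p0.
        h = 1e-6
        local = np.array([(model(p0 + h * np.eye(2)[i]) - model(p0)) / h
                          * p0[i] / model(p0) for i in range(2)])

        # Global: sample wide parameter ranges, correlate with the output.
        rng = np.random.default_rng(5)
        P = rng.uniform([0.1, 0.05], [10.0, 5.0], size=(5000, 2))
        Y = P[:, 0] / (P[:, 0] + P[:, 1])
        glob = [abs(np.corrcoef(P[:, i], Y)[0, 1]) for i in range(2)]

        print("local:", local.round(3), "global:", np.round(glob, 3))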

  7. A modeling approach for compounds affecting body composition.

    Science.gov (United States)

    Gennemark, Peter; Jansson-Löfmark, Rasmus; Hyberg, Gina; Wigstrand, Maria; Kakol-Palm, Dorota; Håkansson, Pernilla; Hovdal, Daniel; Brodin, Peter; Fritsch-Fredin, Maria; Antonsson, Madeleine; Ploj, Karolina; Gabrielsson, Johan

    2013-12-01

    Body composition and body mass are pivotal clinical endpoints in studies of welfare diseases. We present a combined effort of established and new mathematical models based on rigorous monitoring of energy intake (EI) and body mass in mice. Specifically, we parameterize a mechanistic turnover model based on the law of energy conservation coupled to a drug mechanism model. Key model variables are fat-free mass (FFM) and fat mass (FM), governed by EI and energy expenditure (EE). An empirical Forbes curve relating FFM to FM was derived experimentally for female C57BL/6 mice. The Forbes curve differs from a previously reported curve for male C57BL/6 mice, and we thoroughly analyse how the choice of Forbes curve impacts model predictions. The drug mechanism function acts on EI or EE, or both. Drug mechanism parameters (two to three parameters) and system parameters (up to six free parameters) could be estimated with good precision (coefficients of variation typically …). … mass and FM changes at different drug provocations using a similar model for man. Surprisingly, model simulations indicate that an increase in EI (e.g. 10 %) was more efficient than an equal lowering of EI. Also, the relative change in body mass and FM is greater in man than in mouse at the same relative change in either EI or EE. We acknowledge that this assumes the same drug mechanism impact across the two species. A set of recommendations regarding the Forbes curve, vehicle control groups, dual action on EI and loss, and translational aspects is discussed. This quantitative approach significantly improves data interpretation, disease system understanding, safety assessment and translation across species.

  8. Individual differences in emotion processing: how similar are diffusion model parameters across tasks?

    Science.gov (United States)

    Mueller, Christina J; White, Corey N; Kuchinke, Lars

    2017-11-27

    The goal of this study was to replicate findings of diffusion model parameters capturing emotion effects in a lexical decision task and to investigate whether these findings extend to other tasks of implicit emotion processing. Additionally, we were interested in the stability of diffusion model parameters across emotional stimuli and tasks for individual subjects. Responses to words in a lexical decision task were compared with responses to faces in a gender categorization task for stimuli of the emotion categories happy, neutral and fear. Main effects of emotion as well as the stability of emerging response style patterns, as evident in diffusion model parameters across these tasks, were analyzed. Based on earlier findings, drift rates were assumed to be more similar in response to stimuli of the same emotion category than to stimuli of a different emotion category. Results showed that the emotion effects of the tasks differed, with a processing advantage for happy followed by neutral and fear-related words in the lexical decision task and a processing advantage for neutral followed by happy and fearful faces in the gender categorization task. Both emotion effects were captured in the estimated drift rate parameters, and in the case of the lexical decision task also in the non-decision time parameters. A principal component analysis showed that, contrary to our hypothesis, drift rates were more similar within a specific task context than within a specific emotion category. Individual response patterns of subjects across tasks were evident in significant correlations of diffusion model parameters including response styles, non-decision times and information accumulation.

  9. On a model-based approach to radiation protection

    International Nuclear Information System (INIS)

    Waligorski, M.P.R.

    2002-01-01

    There is a preoccupation with linearity and absorbed dose as the basic quantifiers of radiation hazard. An alternative is the fluence approach, whereby radiation hazard may be evaluated, at least in principle, via an appropriate action cross section. In order to compare these approaches, it may be useful to discuss them as quantitative descriptors of survival and transformation-like endpoints in cell cultures in vitro - a system thought to be relevant to modelling radiation hazard. If absorbed dose is used to quantify these biological endpoints, then non-linear dose-effect relations have to be described, and, e.g. after doses of densely ionising radiation, dose-correction factors as high as 20 are required. In the fluence approach only exponential effect-fluence relationships can be readily described. Neither approach alone exhausts the scope of experimentally observed dependencies of effect on dose or fluence. Two-component models, incorporating a suitable mixture of the two approaches, are required. An example of such a model is the cellular track structure theory developed by Katz over thirty years ago. The practical consequences of modelling radiation hazard using this mixed two-component approach are discussed. (author)
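
    A minimal sketch of the two-component form discussed above: an exponential "ion-kill" term driven by fluence multiplied by a multi-target "gamma-kill" term driven by dose, in the spirit of the Katz track-structure theory. The parameter values (sigma, D0, m) are hypothetical.

```python
import math

def survival(D, F, sigma=1e-7, D0=1.5, m=3):
    """Schematic two-component survival: dose D (Gy), fluence F (cm^-2).

    sigma is an action cross section (cm^2), D0 a characteristic dose,
    m the number of targets; all values here are illustrative only."""
    ion_kill = math.exp(-sigma * F)                   # fluence component
    gamma_kill = 1.0 - (1.0 - math.exp(-D / D0))**m   # m-target dose component
    return ion_kill * gamma_kill

for D, F in [(1.0, 1e6), (5.0, 5e6)]:
    print(f"D={D} Gy, F={F:.0e} cm^-2 -> S={survival(D, F):.4f}")
```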

  10. Mathematical Modeling Approaches in Plant Metabolomics.

    Science.gov (United States)

    Fürtauer, Lisa; Weiszmann, Jakob; Weckwerth, Wolfram; Nägele, Thomas

    2018-01-01

    The experimental analysis of a plant metabolome typically results in a comprehensive and multidimensional data set. To interpret metabolomics data in the context of biochemical regulation and environmental fluctuation, various approaches of mathematical modeling have been developed and have proven useful. In this chapter, a general introduction to mathematical modeling is presented and discussed in context of plant metabolism. A particular focus is laid on the suitability of mathematical approaches to functionally integrate plant metabolomics data in a metabolic network and combine it with other biochemical or physiological parameters.

  11. Self-similar formation of the Kolmogorov spectrum in the Leith model of turbulence

    International Nuclear Information System (INIS)

    Nazarenko, S V; Grebenev, V N

    2017-01-01

    The last stage of evolution toward the stationary Kolmogorov spectrum of hydrodynamic turbulence is studied using the Leith model [1]. This evolution is shown to manifest itself as a reflection wave in wavenumber space propagating from the largest toward the smallest wavenumbers, and is described by a self-similar solution of a new (third) kind. This stage follows the previously studied stage of an initial explosive propagation of the spectral front from the smallest to the largest wavenumbers, reaching arbitrarily large wavenumbers in a finite time, which was described by a self-similar solution of the second kind [2–4]. Nonstationary solutions corresponding to ‘warm cascades’ characterised by a thermalised spectrum at large wavenumbers are also obtained. (paper)

  12. Geomfinder: a multi-feature identifier of similar three-dimensional protein patterns: a ligand-independent approach.

    Science.gov (United States)

    Núñez-Vivanco, Gabriel; Valdés-Jiménez, Alejandro; Besoaín, Felipe; Reyes-Parada, Miguel

    2016-01-01

    Since the structure of proteins is more conserved than the sequence, the identification of conserved three-dimensional (3D) patterns among a set of proteins, can be important for protein function prediction, protein clustering, drug discovery and the establishment of evolutionary relationships. Thus, several computational applications to identify, describe and compare 3D patterns (or motifs) have been developed. Often, these tools consider a 3D pattern as that described by the residues surrounding co-crystallized/docked ligands available from X-ray crystal structures or homology models. Nevertheless, many of the protein structures stored in public databases do not provide information about the location and characteristics of ligand binding sites and/or other important 3D patterns such as allosteric sites, enzyme-cofactor interaction motifs, etc. This makes necessary the development of new ligand-independent methods to search and compare 3D patterns in all available protein structures. Here we introduce Geomfinder, an intuitive, flexible, alignment-free and ligand-independent web server for detailed estimation of similarities between all pairs of 3D patterns detected in any two given protein structures. We used around 1100 protein structures to form pairs of proteins which were assessed with Geomfinder. In these analyses each protein was considered in only one pair (e.g. in a subset of 100 different proteins, 50 pairs of proteins can be defined). Thus: (a) Geomfinder detected identical pairs of 3D patterns in a series of monoamine oxidase-B structures, which corresponded to the effectively similar ligand binding sites at these proteins; (b) we identified structural similarities among pairs of protein structures which are targets of compounds such as acarbose, benzamidine, adenosine triphosphate and pyridoxal phosphate; these similar 3D patterns are not detected using sequence-based methods; (c) the detailed evaluation of three specific cases showed the versatility

  13. Meta-analysis a structural equation modeling approach

    CERN Document Server

    Cheung, Mike W-L

    2015-01-01

    Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo

  14. A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model

    Science.gov (United States)

    Mathe, Nathalie; Chen, James

    1994-01-01

    Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to this user. However, most of this knowledge on contextual relevance is not found within the contents of documents; it is rather established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval, and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.

  15. Similarity of eigenstates in generalized labyrinth tilings

    International Nuclear Information System (INIS)

    Thiem, Stefanie; Schreiber, Michael

    2010-01-01

    The eigenstates of d-dimensional quasicrystalline models with a separable Hamiltonian are studied within the tight-binding model. The approach is based on mathematical sequences, constructed by an inflation rule P = {w → s, s → sws^(b-1)} describing the weak/strong couplings of atoms in a quasiperiodic chain. Higher-dimensional quasiperiodic tilings are constructed as a direct product of these chains and their eigenstates can be directly calculated by multiplying the energies E or wave functions ψ of the chain, respectively. Applying this construction rule, the grid in d dimensions splits into 2^(d-1) different tilings, for which we investigated the characteristics of the wave functions. For the standard two-dimensional labyrinth tiling constructed from the octonacci sequence (b = 2) the lattice breaks up into two identical lattices, which consequently yield the same eigenstates. While this is not the case for b ≠ 2, our numerical results show that the wave functions of the different grids become increasingly similar for large system sizes. This can be explained by the fact that the structure of the 2^(d-1) grids mainly differs at the boundaries and thus for large systems the eigenstates approach each other. This property allows us to analytically derive properties of the higher-dimensional generalized labyrinth tilings from the one-dimensional results. In particular participation numbers and corresponding scaling exponents have been determined.
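
    The product construction described above can be sketched in a few lines: build the chain from the inflation rule, diagonalize the one-dimensional tight-binding Hamiltonian, and form the two-dimensional energies as products. The hopping strengths t_w and t_s are illustrative choices, not values from the paper.

```python
import numpy as np

def octonacci(n_iter, b=2):
    """Apply the inflation rule w -> s, s -> sws^(b-1) n_iter times."""
    seq = "s"
    for _ in range(n_iter):
        seq = "".join("s" if c == "w" else "s" + "w" + "s" * (b - 1)
                      for c in seq)
    return seq

def chain_spectrum(seq, t_w=0.5, t_s=1.0):
    """Diagonalize a tight-binding chain with weak/strong hoppings."""
    t = np.array([t_w if c == "w" else t_s for c in seq])
    n = len(t) + 1
    H = np.zeros((n, n))
    idx = np.arange(len(t))
    H[idx, idx + 1] = t
    H[idx + 1, idx] = t
    return np.linalg.eigh(H)

E, psi = chain_spectrum(octonacci(5))
# 2-D energies of the product (labyrinth-type) states: E2[i, j] = E_i * E_j
E2 = np.multiply.outer(E, E)
print(E2.shape, float(E2.min()), float(E2.max()))
```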

  16. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    Science.gov (United States)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    The model was calibrated with the Sequential Uncertainty Fitting algorithm (SUFI-2) and the SWAT-CUP interface, followed by a manual water quality calibration on a monthly basis. The refined modeling approach developed in this study led to successful predictions across most parts of the Corn Belt region and can be used for testing pollution mitigation measures and agricultural economic scenarios, providing useful information to policy makers and recommendations on similar efforts at the regional scale.

  17. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality

    NARCIS (Netherlands)

    L. Waltman (Ludo)

    2011-01-01

    textabstractThis thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic

  18. Large revealing similarity in multihadron production in nuclear and particle collisions

    International Nuclear Information System (INIS)

    Mishra, Aditya Nath; Sahoo, Raghunath; Sarkisyan, Edward K.G.; Sakharov, Alexander S.; )

    2016-01-01

    The dependencies of charged particle pseudorapidity density and transverse energy pseudorapidity density at midrapidity as well as of charged particle total multiplicity on the collision energy and on the number of nucleon participants, or centrality, measured in nucleus-nucleus collisions are studied in the energy range spanning a few GeV to a few TeV per nucleon. The model in which the multiparticle production is driven by the dissipating effective energy of participants is considered. The model extends the earlier proposed approach, combining the constituent quark picture together with Landau relativistic hydrodynamics shown to interrelate the measurements from different types of collisions. Within this model, the dependence of the charged particle pseudorapidity density and transverse energy pseudorapidity density at midrapidity on the number of participants in heavy-ion collisions are found to be well described in terms of the effective energy defined as a centrality-dependent fraction of the collision energy. For both variables the effective energy approach reveals a similarity in the energy dependence obtained for the most central collisions and centrality data in the entire available energy range

  19. Modelling individual differences in the form of Pavlovian conditioned approach responses: a dual learning systems approach with factored representations.

    Directory of Open Access Journals (Sweden)

    Florian Lesaint

    2014-02-01

    Full Text Available Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and associated dopamine activity, have questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself - a lever - more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in

  20. Modelling Individual Differences in the Form of Pavlovian Conditioned Approach Responses: A Dual Learning Systems Approach with Factored Representations

    Science.gov (United States)

    Lesaint, Florian; Sigaud, Olivier; Flagel, Shelly B.; Robinson, Terry E.; Khamassi, Mehdi

    2014-01-01

    Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and associated dopamine activity, have questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself – a lever – more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in

  1. Modelling individual differences in the form of Pavlovian conditioned approach responses: a dual learning systems approach with factored representations.

    Science.gov (United States)

    Lesaint, Florian; Sigaud, Olivier; Flagel, Shelly B; Robinson, Terry E; Khamassi, Mehdi

    2014-02-01

    Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and associated dopamine activity, have questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself - a lever - more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in computational

  2. A Partial Least Square Approach for Modeling Gene-gene and Gene-environment Interactions When Multiple Markers Are Genotyped

    Science.gov (United States)

    Wang, Tao; Ho, Gloria; Ye, Kenny; Strickler, Howard; Elston, Robert C.

    2008-01-01

    Genetic association studies achieve an unprecedented level of resolution in mapping disease genes by genotyping dense SNPs in a gene region. Meanwhile, these studies require new powerful statistical tools that can optimally handle a large amount of information provided by genotype data. A question that arises is how to model interactions between two genes. Simply modeling all possible interactions between the SNPs in two gene regions is not desirable because a greatly increased number of degrees of freedom can be involved in the test statistic. We introduce an approach to reduce the genotype dimension in modeling interactions. The genotype compression of this approach is built upon the information on both the trait and the cross-locus gametic disequilibrium between SNPs in two interacting genes, in such a way as to parsimoniously model the interactions without loss of useful information in the process of dimension reduction. As a result, it improves power to detect association in the presence of gene-gene interactions. This approach can be similarly applied for modeling gene-environment interactions. We compare this method with other approaches: the corresponding test without modeling any interaction, that based on a saturated interaction model, that based on principal component analysis, and that based on Tukey’s 1-df model. Our simulations suggest that this new approach has superior power to that of the other methods. In an application to endometrial cancer case-control data from the Women’s Health Initiative (WHI), this approach detected AKT1 and AKT2 as being significantly associated with endometrial cancer susceptibility by taking into account their interactions with BMI. PMID:18615621

  3. A partial least-square approach for modeling gene-gene and gene-environment interactions when multiple markers are genotyped.

    Science.gov (United States)

    Wang, Tao; Ho, Gloria; Ye, Kenny; Strickler, Howard; Elston, Robert C

    2009-01-01

    Genetic association studies achieve an unprecedented level of resolution in mapping disease genes by genotyping dense single nucleotide polymorphisms (SNPs) in a gene region. Meanwhile, these studies require new powerful statistical tools that can optimally handle a large amount of information provided by genotype data. A question that arises is how to model interactions between two genes. Simply modeling all possible interactions between the SNPs in two gene regions is not desirable because a greatly increased number of degrees of freedom can be involved in the test statistic. We introduce an approach to reduce the genotype dimension in modeling interactions. The genotype compression of this approach is built upon the information on both the trait and the cross-locus gametic disequilibrium between SNPs in two interacting genes, in such a way as to parsimoniously model the interactions without loss of useful information in the process of dimension reduction. As a result, it improves power to detect association in the presence of gene-gene interactions. This approach can be similarly applied for modeling gene-environment interactions. We compare this method with other approaches: the corresponding test without modeling any interaction, that based on a saturated interaction model, that based on principal component analysis, and that based on Tukey's one-degree-of-freedom model. Our simulations suggest that this new approach has superior power to that of the other methods. In an application to endometrial cancer case-control data from the Women's Health Initiative, this approach detected AKT1 and AKT2 as being significantly associated with endometrial cancer susceptibility by taking into account their interactions with body mass index.
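
    A rough sketch of the idea (not the authors' exact estimator): compress each gene region's SNPs to a trait-guided partial least-squares component, then test a single interaction term between the two component scores. This toy version ignores the degrees-of-freedom and significance-assessment issues the paper handles properly; all data below are simulated.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
G1 = rng.integers(0, 3, size=(n, 12)).astype(float)   # SNP dosages, gene 1
G2 = rng.integers(0, 3, size=(n, 8)).astype(float)    # SNP dosages, gene 2
logit = 0.4 * G1[:, 0] * G2[:, 0] - 0.5               # planted interaction
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Compress each gene region to one trait-guided PLS component.
t1 = PLSRegression(n_components=1).fit(G1, y).transform(G1).ravel()
t2 = PLSRegression(n_components=1).fit(G2, y).transform(G2).ravel()

# Main effects plus a single cross-gene interaction term.
X = np.column_stack([t1, t2, t1 * t2])
model = LogisticRegression().fit(X, y)
print("interaction coefficient:", model.coef_[0][2])
```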

  4. Fast business process similarity search

    NARCIS (Netherlands)

    Yan, Z.; Dijkman, R.M.; Grefen, P.W.P.J.

    2012-01-01

    Nowadays, it is common for organizations to maintain collections of hundreds or even thousands of business processes. Techniques exist to search through such a collection, for business process models that are similar to a given query model. However, those techniques compare the query model to each

  5. Similar Biophysical Abnormalities in Glomeruli and Podocytes from Two Distinct Models.

    Science.gov (United States)

    Embry, Addie E; Liu, Zhenan; Henderson, Joel M; Byfield, F Jefferson; Liu, Liping; Yoon, Joonho; Wu, Zhenzhen; Cruz, Katrina; Moradi, Sara; Gillombardo, C Barton; Hussain, Rihanna Z; Doelger, Richard; Stuve, Olaf; Chang, Audrey N; Janmey, Paul A; Bruggeman, Leslie A; Miller, R Tyler

    2018-03-23

    Background: FSGS is a pattern of podocyte injury that leads to loss of glomerular function. Podocytes support other podocytes and glomerular capillary structure, oppose hemodynamic forces, form the slit diaphragm, and have mechanical properties that permit these functions. However, the biophysical characteristics of glomeruli and podocytes in disease remain unclear. Methods: Using microindentation, atomic force microscopy, immunofluorescence microscopy, quantitative RT-PCR, and a three-dimensional collagen gel contraction assay, we studied the biophysical and structural properties of glomeruli and podocytes in chronic (Tg26 mice [HIV protein expression]) and acute (protamine administration [cytoskeletal rearrangement]) models of podocyte injury. Results: Compared with wild-type glomeruli, Tg26 glomeruli became progressively more deformable with disease progression, despite increased collagen content. Tg26 podocytes had disordered cytoskeletons, markedly abnormal focal adhesions, and weaker adhesion; they failed to respond to mechanical signals and exerted minimal traction force in three-dimensional collagen gels. Protamine treatment had similar but milder effects on glomeruli and podocytes. Conclusions: Reduced structural integrity of Tg26 podocytes causes increased deformability of glomerular capillaries and limits the ability of capillaries to counter hemodynamic force, possibly leading to further podocyte injury. Loss of normal podocyte mechanical integrity could injure neighboring podocytes due to the absence of normal biophysical signals required for podocyte maintenance. The severe defects in podocyte mechanical behavior in the Tg26 model may explain why Tg26 glomeruli soften progressively, despite increased collagen deposition, and may be the basis for the rapid course of glomerular diseases associated with severe podocyte injury. In milder injury (protamine), similar processes occur but over a longer time. Copyright © 2018 by the American Society of Nephrology.

  6. A piecewise modeling approach for climate sensitivity studies: Tests with a shallow-water model

    Science.gov (United States)

    Shao, Aimei; Qiu, Chongjian; Niu, Guo-Yue

    2015-10-01

    In model-based climate sensitivity studies, model errors may grow during continuous long-term integrations in both the "reference" and "perturbed" states and hence the climate sensitivity (defined as the difference between the two states). To reduce the errors, we propose a piecewise modeling approach that splits the continuous long-term simulation into subintervals of sequential short-term simulations, and updates the modeled states through re-initialization at the end of each subinterval. In the re-initialization processes, this approach updates the reference state with analysis data and updates the perturbed states with the sum of analysis data and the difference between the perturbed and the reference states, thereby improving the credibility of the modeled climate sensitivity. We conducted a series of experiments with a shallow-water model to evaluate the advantages of the piecewise approach over the conventional continuous modeling approach. We then investigated the impacts of analysis data error and subinterval length used in the piecewise approach on the simulations of the reference and perturbed states as well as the resulting climate sensitivity. The experiments show that the piecewise approach reduces the errors produced by the conventional continuous modeling approach, more effectively when the analysis data error becomes smaller and the subinterval length is shorter. In addition, we employed a nudging assimilation technique to solve possible spin-up problems caused by re-initializations by using analysis data that contain inconsistent errors between mass and velocity. The nudging technique can effectively diminish the spin-up problem, resulting in a higher modeling skill.
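
    The re-initialization scheme lends itself to a compact sketch; `step` and `analysis` below are generic placeholders for the shallow-water integrator and the analysis data used in the paper.

```python
# Sketch of the piecewise re-initialization scheme described above.
# `step` advances a model state (e.g. a NumPy array) by one time step;
# `analysis` is a sequence of analysis states, one per subinterval.

def piecewise_run(step, analysis, x_ref0, x_pert0, n_sub, sub_len):
    x_ref, x_pert = x_ref0, x_pert0
    trajectory = []
    for k in range(n_sub):
        for _ in range(sub_len):        # short-term integration of both states
            x_ref = step(x_ref)
            x_pert = step(x_pert)
        delta = x_pert - x_ref          # perturbation carried across the reset
        x_ref = analysis[k]             # reference reset to analysis data
        x_pert = analysis[k] + delta    # perturbed reset to analysis + delta
        trajectory.append((x_ref, x_pert))
    return trajectory
```

    In practice a nudging step toward the analysis, as the authors describe, would replace the hard reset to diminish spin-up effects.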

  7. Measuring similarity between business process models

    NARCIS (Netherlands)

    Dongen, van B.F.; Dijkman, R.M.; Mendling, J.

    2007-01-01

    Quality aspects become increasingly important when business process modeling is used in a large-scale enterprise setting. In order to facilitate a storage without redundancy and an efficient retrieval of relevant process models in model databases it is required to develop a theoretical understanding

  8. QSAR models based on quantum topological molecular similarity.

    Science.gov (United States)

    Popelier, P L A; Smith, P J

    2006-07-01

    A new method called quantum topological molecular similarity (QTMS) was fairly recently proposed [J. Chem. Inf. Comp. Sc., 41, 2001, 764] to construct a variety of medicinal, ecological and physical organic QSAR/QSPRs. The QTMS method uses quantum chemical topology (QCT) to define electronic descriptors drawn from modern ab initio wave functions of geometry-optimised molecules. It was shown that the current abundance of computing power can be utilised to inject realistic descriptors into QSAR/QSPRs. In this article we study seven datasets of medicinal interest: the dissociation constants (pK(a)) for a set of substituted imidazolines, the pK(a) of imidazoles, the ability of a set of indole derivatives to displace [(3)H]flunitrazepam from binding to bovine cortical membranes, the influenza inhibition constants for a set of benzimidazoles, the interaction constants for a set of amides and the enzyme liver alcohol dehydrogenase, the natriuretic activity of sulphonamide carbonic anhydrase inhibitors and the toxicity of a series of benzyl alcohols. A partial least-squares analysis in conjunction with a genetic algorithm delivered excellent models. They are also able to highlight the active site of the ligand or the molecule whose structure determines the activity. The advantages and limitations of QTMS are discussed.

  9. Bilateral Trade Flows and Income Distribution Similarity

    Science.gov (United States)

    2016-01-01

    Current models of bilateral trade neglect the effects of income distribution. This paper addresses the issue by accounting for non-homothetic consumer preferences and hence investigating the role of income distribution in the context of the gravity model of trade. A theoretically justified gravity model is estimated for disaggregated trade data (Dollar volume is used as dependent variable) using a sample of 104 exporters and 108 importers for 1980–2003 to achieve two main goals. We define and calculate new measures of income distribution similarity and empirically confirm that greater similarity of income distribution between countries implies more trade. Using distribution-based measures as a proxy for demand similarities in gravity models, we find consistent and robust support for the hypothesis that countries with more similar income-distributions trade more with each other. The hypothesis is also confirmed at disaggregated level for differentiated product categories. PMID:27137462

  10. A Discrete Monetary Economic Growth Model with the MIU Approach

    Directory of Open Access Journals (Sweden)

    Wei-Bin Zhang

    2008-01-01

    Full Text Available This paper proposes an alternative approach to economic growth with money. The production side is the same as in the Solow model, the Ramsey model, and the Tobin model. But we deal with the behavior of consumers differently from the traditional approaches. The model is influenced by the money-in-the-utility (MIU) approach in monetary economics. It provides a mechanism of endogenous saving which the Solow model lacks and avoids the assumption of adding up utility over a period of time upon which the Ramsey approach is based.

  11. Visual reconciliation of alternative similarity spaces in climate modeling

    Science.gov (United States)

    J Poco; A Dasgupta; Y Wei; William Hargrove; C.R. Schwalm; D.N. Huntzinger; R Cook; E Bertini; C.T. Silva

    2015-01-01

    Visual data analysis often requires grouping of data objects based on their similarity. In many application domains researchers use algorithms and techniques like clustering and multidimensional scaling to extract groupings from data. While extracting these groups using a single similarity criterion is relatively straightforward, comparing alternative criteria poses...

  12. Inspiration or deflation? Feeling similar or dissimilar to slim and plus-size models affects self-evaluation of restrained eaters.

    Science.gov (United States)

    Papies, Esther K; Nicolaije, Kim A H

    2012-01-01

    The present studies examined the effect of perceiving images of slim and plus-size models on restrained eaters' self-evaluation. While previous research has found that such images can lead to either inspiration or deflation, we argue that these inconsistencies can be explained by differences in perceived similarity with the presented model. The results of two studies (ns=52 and 99) confirmed this and revealed that restrained eaters with high (low) perceived similarity to the model showed more positive (negative) self-evaluations when they viewed a slim model, compared to a plus-size model. In addition, Study 2 showed that inducing in participants a similarities mindset led to more positive self-evaluations after viewing a slim compared to a plus-size model, but only among restrained eaters with a relatively high BMI. These results are discussed in the context of research on social comparison processes and with regard to interventions for protection against the possible detrimental effects of media images. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. A touch-probe path generation method through similarity analysis between the feature vectors in new and old models

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Hye Sung; Lee, Jin Won; Yang, Jeong Sam [Dept. of Industrial Engineering, Ajou University, Suwon (Korea, Republic of)

    2016-10-15

    On-machine measurement (OMM), which measures a workpiece during or after the machining process in the machining center, has the advantage of measuring the workpiece directly within the work space without moving it. However, the path generation procedure used to determine the measuring sequence and variables for the complex features of a target workpiece has the limitation of requiring time-consuming tasks to generate the measuring points, and it mostly relies on the proficiency of the on-site engineer. In this study, we propose a touch-probe path generation method using similarity analysis between the feature vectors of three-dimensional (3-D) shapes for the OMM. For the similarity analysis between a new 3-D model and existing 3-D models, we extracted the feature vectors from models that can describe the characteristics of a geometric shape model; then, we applied those feature vectors to a geometric histogram that displays a probability distribution obtained by the similarity analysis algorithm. In addition, we developed a computer-aided inspection planning system that corrects non-applied measuring points that are caused by minute geometry differences between the two models and generates the final touch-probe path.

  14. [Establishment of the mathematic model of total quantum statistical moment standard similarity for application to medical theoretical research].

    Science.gov (United States)

    He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian

    2013-09-01

    The paper aims to elucidate and establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate the application of the model to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, then validated and illustrated by the pharmacokinetics of three ingredients in Buyanghuanwu decoction and of three data analytical methods for them, and by analysis of chromatographic fingerprints for various extracts with different solubility parameter solvents dissolving the Buyanghuanwu-decoction extract. The established model consists of five main parameters: (1) total quantum statistical moment similarity ST, the overlapped area between two normal distribution probability density curves obtained by converting the two TQSM parameters; (2) total variability DT, a confidence limit of standard normal accumulation probability equal to the absolute difference between the two normal accumulation probabilities within the integration interval bounded by their curve intersection; (3) total variable probability 1-Ss, the standard normal distribution probability within the interval DT; (4) total variable probability (1-β)α; and (5) stable confident probability β(1-α), the correct probability for drawing positive and negative conclusions under confidence coefficient α. With the model, the TQSMSS similarities of the pharmacokinetics of three ingredients in Buyanghuanwu decoction and of three data analytical methods for them were in the range 0.3852-0.9875, illustrating their different pharmacokinetic behaviors; and the TQSMSS similarities (ST) of chromatographic fingerprints for various extracts with different solubility parameter solvents dissolving the Buyanghuanwu-decoction extract were in the range 0.6842-0.9992, showing different constituents
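
    The ST parameter, the overlapped area under two normal probability density curves, can be computed numerically as below; the distribution parameters are illustrative, not those of the decoction data.

```python
import numpy as np
from scipy.stats import norm

def overlap_area(mu1, sd1, mu2, sd2, n=20001):
    """Area under min(f1, f2) for two normal densities (trapezoidal rule)."""
    lo = min(mu1 - 6 * sd1, mu2 - 6 * sd2)
    hi = max(mu1 + 6 * sd1, mu2 + 6 * sd2)
    x = np.linspace(lo, hi, n)
    f = np.minimum(norm.pdf(x, mu1, sd1), norm.pdf(x, mu2, sd2))
    return np.trapz(f, x)

print(overlap_area(0.0, 1.0, 0.5, 1.2))  # ST close to 1 means high similarity
```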

  15. Nonperturbative approach to the attractive Hubbard model

    International Nuclear Information System (INIS)

    Allen, S.; Tremblay, A.-M. S.

    2001-01-01

    A nonperturbative approach to the single-band attractive Hubbard model is presented in the general context of functional-derivative approaches to many-body theories. As in previous work on the repulsive model, the first step is based on a local-field-type ansatz, on enforcement of the Pauli principle and a number of crucial sum rules. The Mermin-Wagner theorem in two dimensions is automatically satisfied. At this level, two-particle self-consistency has been achieved. In the second step of the approximation, an improved expression for the self-energy is obtained by using the results of the first step in an exact expression for the self-energy, where the high- and low-frequency behaviors appear separately. The result is a cooperon-like formula. The required vertex corrections are included in this self-energy expression, as required by the absence of a Migdal theorem for this problem. Other approaches to the attractive Hubbard model are critically compared. Physical consequences of the present approach and agreement with Monte Carlo simulations are demonstrated in the accompanying paper (following this one)

  16. A solvable self-similar model of the sausage instability in a resistive Z pinch

    International Nuclear Information System (INIS)

    Lampe, M.

    1991-01-01

    A solvable model is developed for the linearized sausage mode within the context of resistive magnetohydrodynamics. The model is based on the assumption that the fluid motion of the plasma is self-similar, as well as several assumptions pertinent to the limit of wavelength long compared to the pinch radius. The perturbations to the magnetic field are not assumed to be self-similar, but rather are calculated. Effects arising from time dependences of the z-independent perturbed state, e.g., current rising as t^α, Ohmic heating, and time variation of the pinch radius, are included in the analysis. The formalism appears to provide a good representation of 'global' modes that involve coherent sausage distortion of the entire cross section of the pinch, but excludes modes that are localized radially, and higher radial eigenmodes. For this and other reasons, it is expected that the model underestimates the maximum instability growth rates, but is reasonable for global sausage modes. The net effect of resistivity and time variation of the unperturbed state is to decrease the growth rate if α ≲ 1, but never by more than a factor of about 2. The effect is to increase the growth rate if α ≳ 1.

  17. From epidemics to information propagation: Striking differences in structurally similar adaptive network models

    Science.gov (United States)

    Trajanovski, Stojan; Guo, Dongchao; Van Mieghem, Piet

    2015-09-01

    The continuous-time adaptive susceptible-infected-susceptible (ASIS) epidemic model and the adaptive information diffusion (AID) model are two adaptive spreading processes on networks, in which a link in the network changes depending on the infectious state of its end nodes, but in opposite ways: (i) in the ASIS model a link is removed between two nodes if exactly one of the nodes is infected to suppress the epidemic, while a link is created in the AID model to speed up the information diffusion; (ii) a link is created between two susceptible nodes in the ASIS model to strengthen the healthy part of the network, while a link is broken in the AID model due to the lack of interest in informationless nodes. The ASIS and AID models may be considered as first-order models for cascades in real-world networks. While the ASIS model has been exploited in the literature, we show that the AID model is realistic by obtaining a good fit with Facebook data. Contrary to the common belief and intuition for such similar models, we show that the ASIS and AID models exhibit different but not opposite properties. Most remarkably, a unique metastable state always exists in the ASIS model, while there is an hourglass-shaped region of instability in the AID model. Moreover, the epidemic threshold is a linear function in the effective link-breaking rate in the ASIS model, while it is almost constant but noisy in the AID model.

  18. The self-similar field and its application to a diffusion problem

    International Nuclear Information System (INIS)

    Michelitsch, Thomas M

    2011-01-01

    We introduce a continuum approach which accounts for self-similarity as a symmetry property of an infinite medium. A self-similar Laplacian operator is introduced which is the source of self-similar continuous fields. In this way ‘self-similar symmetry’ appears in an analogous manner as transverse isotropy or cubic symmetry of a medium. As a consequence of the self-similarity the Laplacian is a non-local fractional operator obtained as the continuum limit of the discrete self-similar Laplacian introduced recently by Michelitsch et al (2009 Phys. Rev. E 80 011135). The dispersion relation of the Laplacian and its Green’s function are deduced in closed form. As a physical application of the approach we analyze a self-similar diffusion problem. The statistical distributions, which constitute the solutions of this problem, turn out to be Lévy-stable distributions with infinite variances characterizing the statistics of one-dimensional Lévy flights. The self-similar continuum approach introduced in this paper has the potential to be applied to a variety of scale-invariant and fractal problems in physics such as in continuum mechanics, electrodynamics and in other fields. (paper)

  19. Interbehavioral psychology and radical behaviorism: Some similarities and differences

    Science.gov (United States)

    Morris, Edward K.

    1984-01-01

    Both J. R. Kantor's interbehavioral psychology and B. F. Skinner's radical behaviorism represent well-articulated approaches to a natural science of behavior. As such, they share a number of similar features, yet they also differ on a number of dimensions. Some of these similarities and differences are examined by describing their emergence in the professional literature and by comparing the respective units of analysis of the two approaches—the interbehavioral field and the three-term contingency. An evaluation of the similarities and differences shows the similarities to be largely fundamental, and the differences largely ones of emphasis. Nonetheless, the two approaches do make unique contributions to a natural science of behavior, the integration of which can facilitate the development of that science and its acceptance among other sciences and within society at large. PMID:22478612

  20. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions which give rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  1. The positive group affect spiral : a dynamic model of the emergence of positive affective similarity in work groups

    NARCIS (Netherlands)

    Walter, F.; Bruch, H.

    This conceptual paper seeks to clarify the process of the emergence of positive collective affect. Specifically, it develops a dynamic model of the emergence of positive affective similarity in work groups. It is suggested that positive group affective similarity and within-group relationship

  2. A moving approach for the Vector Hysteron Model

    Energy Technology Data Exchange (ETDEWEB)

    Cardelli, E. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Faba, A., E-mail: antonio.faba@unipg.it [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Laudani, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy); Quondam Antonio, S. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Riganti Fulginei, F.; Salvini, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)

    2016-04-01

    A moving approach for the VHM (Vector Hysteron Model) is here described, to reconstruct both scalar and rotational magnetization of electrical steels with weak anisotropy, such as non-oriented grain silicon steel. The hysteron distribution is postulated to be a function of the magnetization state of the material, in order to overcome the practical limitation of the congruency property of the standard VHM approach. By using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching the saturation region, allowing a real improvement with respect to the previous approach.

  3. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
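
    The two-step workflow maps directly onto the SALib package; the sketch below uses a toy model in place of the vascular-access model, and the screening threshold of 0.1 on mu_star is an arbitrary illustrative choice.

```python
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze
from SALib.sample import saltelli
from SALib.analyze import sobol

def model(X):  # toy stand-in: two influential parameters out of five
    return X[:, 0] + 2.0 * X[:, 1] + 0.01 * X[:, 2:].sum(axis=1)

problem = {"num_vars": 5,
           "names": [f"p{i}" for i in range(5)],
           "bounds": [[0.0, 1.0]] * 5}

# Step 1: Morris screening to find the important parameters.
X = morris_sample.sample(problem, N=100)
res = morris_analyze.analyze(problem, X, model(X))
keep = [n for n, mu in zip(problem["names"], res["mu_star"]) if mu > 0.1]

# Step 2: variance-based (Sobol) analysis on the reduced parameter set,
# with screened-out parameters fixed at nominal values.
sub = {"num_vars": len(keep), "names": keep, "bounds": [[0.0, 1.0]] * len(keep)}
Xs = saltelli.sample(sub, 1024)

def reduced(Xs):
    full = np.full((Xs.shape[0], 5), 0.5)
    for j, name in enumerate(keep):
        full[:, problem["names"].index(name)] = Xs[:, j]
    return model(full)

print("first-order Sobol indices:", sobol.analyze(sub, reduced(Xs))["S1"])
```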

  4. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating..., and allows direct incorporation of high-level and qualitative plant knowledge into the model. These advantages have proven to be very appealing for industrial applications, and the practical, intuitively appealing nature of the framework is demonstrated in chapters describing applications of local methods... to problems in the process industries, biomedical applications and autonomous systems. The successful application of the ideas to demanding problems is already encouraging, but creative development of the basic framework is needed to better allow the integration of human knowledge with automated learning...

  5. Stochastic approaches to inflation model building

    International Nuclear Information System (INIS)

    Ramirez, Erandy; Liddle, Andrew R.

    2005-01-01

    While inflation gives an appealing explanation of observed cosmological data, there are a wide range of different inflation models, providing differing predictions for the initial perturbations. Typically models are motivated either by fundamental physics considerations or by simplicity. An alternative is to generate large numbers of models via a random generation process, such as the flow equations approach. The flow equations approach is known to predict a definite structure to the observational predictions. In this paper, we first demonstrate a more efficient implementation of the flow equations exploiting an analytic solution found by Liddle (2003). We then consider alternative stochastic methods of generating large numbers of inflation models, with the aim of testing whether the structures generated by the flow equations are robust. We find that while typically there remains some concentration of points in the observable plane under the different methods, there is significant variation in the predictions amongst the methods considered

  6. Modeling Fetal Weight for Gestational Age: A Comparison of a Flexible Multi-level Spline-based Model with Other Approaches

    Science.gov (United States)

    Villandré, Luc; Hutcheon, Jennifer A; Perez Trejo, Maria Esther; Abenhaim, Haim; Jacobsen, Geir; Platt, Robert W

    2011-01-01

    We present a model for longitudinal measures of fetal weight as a function of gestational age. We use a linear mixed model, with a Box-Cox transformation of fetal weight values, and restricted cubic splines, in order to flexibly but parsimoniously model median fetal weight. We systematically compare our model to other proposed approaches. All proposed methods are shown to yield similar median estimates, as evidenced by overlapping pointwise confidence bands, except after 40 completed weeks, where our method seems to produce estimates more consistent with observed data. Sex-based stratification affects the estimates of the random effects variance-covariance structure, without significantly changing sex-specific fitted median values. We illustrate the benefits of including sex-gestational age interaction terms in the model over stratification. The comparison leads to the conclusion that the selection of a model for fetal weight for gestational age can be based on the specific goals and configuration of a given study without affecting the precision or value of median estimates for most gestational ages of interest. PMID:21931571

  7. User recommendation in healthcare social media by assessing user similarity in heterogeneous network.

    Science.gov (United States)

    Jiang, Ling; Yang, Christopher C

    2017-09-01

    The rapid growth of online health social websites has captured a vast amount of healthcare information and made the information easy to access for health consumers. E-patients often use these social websites for informational and emotional support. However, health consumers could be easily overwhelmed by the overloaded information. Healthcare information searching can be very difficult for consumers, not to mention that most of them are not skilled information searchers. In this work, we investigate approaches for measuring user similarity in online health social websites. By recommending similar users to consumers, we can help them to seek informational and emotional support in a more efficient way. We propose to represent the healthcare social media data as a heterogeneous healthcare information network and introduce local and global structural approaches for measuring user similarity in a heterogeneous network. We compare the proposed structural approaches with the content-based approach. Experiments were conducted on a dataset collected from a popular online health social website, and the results showed that the content-based approach performed better for inactive users, while structural approaches performed better for active users. Moreover, the global structural approach outperformed the local structural approach for all user groups. In addition, we conducted experiments on local and global structural approaches using different weight schemas for the edges in the network. Leverage performed the best for both local and global approaches. Finally, we integrated different approaches and demonstrated that the hybrid method yielded better performance than the individual approaches. The results indicate that content-based methods can effectively capture the similarity of inactive users who usually have focused interests, while structural methods can achieve better performance when rich structural information is available. Local structural approach only considers direct connections
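
    The contrast between local and global structural similarity can be illustrated on a toy heterogeneous user-thread graph (node names invented): a local measure such as the Jaccard coefficient only sees shared direct neighbours, whereas a global measure such as SimRank propagates similarity through the whole network.

```python
import networkx as nx

# Tiny heterogeneous user-thread network; in the full approach the edges
# could carry type-specific weights.
G = nx.Graph()
G.add_edges_from([
    ("userA", "thread1"), ("userA", "thread2"),
    ("userB", "thread1"), ("userB", "thread2"),
    ("userB", "thread3"), ("userC", "thread3"),
])

# Local: overlap of direct neighbourhoods only.
local = list(nx.jaccard_coefficient(G, [("userA", "userB"), ("userA", "userC")]))
print("local (Jaccard):", local)

# Global: SimRank propagates similarity beyond direct neighbours.
sim = nx.simrank_similarity(G)
print("global (SimRank):", sim["userA"]["userB"], sim["userA"]["userC"])
```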

  8. Catchment Morphing (CM): A Novel Approach for Runoff Modeling in Ungauged Catchments

    Science.gov (United States)

    Zhang, Jun; Han, Dawei

    2017-12-01

    Runoff prediction in ungauged catchments has been one of the major challenges in the past decades. However, due to the tremendous heterogeneity of catchments, obstacles exist in deducing model parameters for ungauged catchments from gauged ones. We propose a novel approach to predict ungauged runoff with Catchment Morphing (CM) using a fully distributed model. CM is defined as changing the catchment characteristics (area and slope here) from the baseline model built with a gauged catchment to model the ungauged ones. As a proof of concept, a case study on seven catchments in the UK has been used to demonstrate the proposed scheme. Comparing predicted with measured runoff, the Nash-Sutcliffe efficiency (NSE) varies from 0.03 to 0.69 in six catchments. Moreover, NSEs are significantly improved (up to 0.81) when considering the discrepancy of percentage runoff between the target and baseline catchments. Compared with a traditional method for ungauged catchments, CM showed distinct advantages: (a) a weaker requirement for similarity between the baseline and the ungauged catchment, (b) a smaller demand for available data, and (c) potentially wide applicability across varied catchments. This study demonstrates the feasibility of the proposed scheme as a potentially powerful alternative to conventional methods in runoff prediction for ungauged catchments. Clearly, more work beyond this pilot study is needed to explore and develop this new approach further to maturity by the hydrological community.
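
    For reference, the Nash-Sutcliffe efficiency used to score the predictions has the standard definition NSE = 1 - Σ(Qobs - Qsim)² / Σ(Qobs - mean(Qobs))², sketched below.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

print(nse([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))  # close to 1 is good
```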

  9. Renewing the Respect for Similarity

    Directory of Open Access Journals (Sweden)

    Shimon eEdelman

    2012-07-01

    Full Text Available In psychology, the concept of similarity has traditionally evoked a mixture of respect, stemming from its ubiquity and intuitive appeal, and concern, due to its dependence on the framing of the problem at hand and on its context. We argue for a renewed focus on similarity as an explanatory concept, by surveying established results and new developments in the theory and methods of similarity-preserving associative lookup and dimensionality reduction — critical components of many cognitive functions, as well as of intelligent data management in computer vision. We focus in particular on the growing family of algorithms that support associative memory by performing hashing that respects local similarity, and on the uses of similarity in representing structured objects and scenes. Insofar as these similarity-based ideas and methods are useful in cognitive modeling and in AI applications, they should be included in the core conceptual toolkit of computational neuroscience.
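
    One concrete member of the family of similarity-respecting hashing algorithms mentioned above is random-hyperplane LSH (Charikar, 2002), in which the fraction of agreeing bits approximates the cosine similarity of the original vectors; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def hash_vectors(X, n_bits=64):
    """One bit per random hyperplane: sign of the projection."""
    planes = rng.normal(size=(X.shape[1], n_bits))
    return (X @ planes) > 0

X = rng.normal(size=(3, 100))
X[1] = X[0] + 0.1 * rng.normal(size=100)   # near-duplicate of row 0
codes = hash_vectors(X)

# Similar vectors agree on most bits; dissimilar ones agree on about half.
agree_similar = (codes[0] == codes[1]).mean()
agree_dissimilar = (codes[0] == codes[2]).mean()
print("bit agreement (similar, dissimilar):", agree_similar, agree_dissimilar)
```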

  10. Towards predictive resistance models for agrochemicals by combining chemical and protein similarity via proteochemometric modelling.

    Science.gov (United States)

    van Westen, Gerard J P; Bender, Andreas; Overington, John P

    2014-10-01

    Resistance to pesticides is an increasing problem in agriculture. Despite practices such as phased use and cycling of 'orthogonally resistant' agents, resistance remains a major risk to national and global food security. To combat this problem, there is a need both for new approaches to pesticide design and for novel chemical entities themselves. As summarized in this opinion article, a technique termed 'proteochemometric modelling' (PCM), from the field of chemoinformatics, could aid in the quantification and prediction of resistance that acts via point mutations in the target proteins of an agent. The technique combines information from both the chemical and the biological domain to generate bioactivity models across large numbers of ligands as well as protein targets. PCM has previously been validated in prospective experimental work in the medicinal chemistry area, and it draws on the growing amount of bioactivity information available in the public domain. Here, two potential applications of proteochemometric modelling to agrochemical data are described, based on previously published examples from the medicinal chemistry literature.
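    In PCM, each training example pairs a ligand descriptor with a protein (target or mutant) descriptor, so a single model can interpolate across both domains. A hedged sketch with entirely synthetic data; the random forest is merely a stand-in for whatever learner a real study would choose:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Hypothetical descriptors: 64-bit ligand fingerprints and 20-dim
# protein (mutant) descriptors; the activities are synthetic.
ligands  = rng.integers(0, 2, size=(200, 64)).astype(float)
proteins = rng.standard_normal((200, 20))
X = np.hstack([ligands, proteins])        # cross-domain feature vector
y = rng.normal(6.0, 1.0, size=200)        # synthetic pIC50-like activities

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
# Predict activity of a known ligand against a new (unseen) mutant:
new_mutant = rng.standard_normal((1, 20))
print(model.predict(np.hstack([ligands[:1], new_mutant])))
```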

  11. Fuzzy Continuous Review Inventory Model using ABC Multi-Criteria Classification Approach: A Single Case Study

    Directory of Open Access Journals (Sweden)

    Meriastuti - Ginting

    2015-07-01

    Full Text Available Abstract. Inventory is considered the most expensive, yet important, asset of many companies, representing approximately 50% of total investment. Inventory cost has become one of the major contributors to inefficiency, so it should be managed effectively. This study proposes an alternative inventory model that uses an ABC multi-criteria classification approach to minimize total cost. By combining FANP (Fuzzy Analytic Network Process) and TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution), the ABC multi-criteria classification approach identified 12 of 69 inventory items as an "outstanding important class" that contributed 80% of total inventory cost. This finding was then used as the basis for the proposed continuous review inventory model. The study found that by using fuzzy trapezoidal costs, the inventory turnover ratio can be increased and inventory cost can be decreased by 78% for each item in "class A" inventory. Keywords: ABC multi-criteria classification, FANP-TOPSIS, continuous review inventory model, lead-time demand distribution, trapezoidal fuzzy number
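    The TOPSIS step ranks items by their relative closeness to an ideal alternative. A compact sketch with hypothetical inventory criteria (annual usage value, lead time, criticality); the fuzzy-ANP weighting is replaced here by fixed weights for brevity:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution."""
    m = matrix / np.linalg.norm(matrix, axis=0)   # vector normalisation
    v = m * weights
    ideal = np.where(benefit, v.max(0), v.min(0))
    anti  = np.where(benefit, v.min(0), v.max(0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                # higher = better

# Hypothetical items: [annual usage value, lead time (days), criticality].
scores = np.array([[8.0e4, 12, 3], [2.5e4, 30, 5], [1.0e4, 7, 2]])
weights = np.array([0.5, 0.2, 0.3])
benefit = np.array([True, False, True])           # lead time: lower is better
print(topsis(scores, weights, benefit))
```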

  12. Comparison of modeling approaches to prioritize chemicals based on estimates of exposure and exposure potential.

    Science.gov (United States)

    Mitchell, Jade; Arnot, Jon A; Jolliet, Olivier; Georgopoulos, Panos G; Isukapalli, Sastry; Dasgupta, Surajit; Pandian, Muhilan; Wambaugh, John; Egeghy, Peter; Cohen Hubal, Elaine A; Vallero, Daniel A

    2013-08-01

    While only limited data are available to characterize the potential toxicity of over 8 million commercially available chemical substances, there is even less information available on the exposure and use-scenarios that are required to link potential toxicity to human and ecological health outcomes. Recent improvements and advances such as high throughput data gathering, high performance computational capabilities, and predictive chemical inherency methodology make this an opportune time to develop an exposure-based prioritization approach that can systematically utilize and link the asymmetrical bodies of knowledge for hazard and exposure. In response to the US EPA's need to develop novel approaches and tools for rapidly prioritizing chemicals, a "Challenge" was issued to several exposure model developers to aid the understanding of current systems in a broader sense and to assist the US EPA's effort to develop an approach comparable to other international efforts. A common set of chemicals was prioritized under each current approach. The results are presented herein along with a comparative analysis of the rankings of the chemicals based on metrics of exposure potential or actual exposure estimates. The analysis illustrates the similarities and differences across the domains of information incorporated in each modeling approach. The overall findings indicate a need to reconcile exposures from diffuse, indirect sources (far-field) with exposures from directly applied chemicals in consumer products or resulting from the presence of a chemical in a microenvironment like a home or vehicle. Additionally, the exposure scenario, including the mode of entry into the environment (i.e. through air, water or sediment), appears to be an important determinant of the level of agreement between modeling approaches. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Comparison of modeling approaches to prioritize chemicals based on estimates of exposure and exposure potential

    Science.gov (United States)

    Mitchell, Jade; Arnot, Jon A.; Jolliet, Olivier; Georgopoulos, Panos G.; Isukapalli, Sastry; Dasgupta, Surajit; Pandian, Muhilan; Wambaugh, John; Egeghy, Peter; Cohen Hubal, Elaine A.; Vallero, Daniel A.

    2014-01-01

    While only limited data are available to characterize the potential toxicity of over 8 million commercially available chemical substances, there is even less information available on the exposure and use-scenarios that are required to link potential toxicity to human and ecological health outcomes. Recent improvements and advances such as high throughput data gathering, high performance computational capabilities, and predictive chemical inherency methodology make this an opportune time to develop an exposure-based prioritization approach that can systematically utilize and link the asymmetrical bodies of knowledge for hazard and exposure. In response to the US EPA’s need to develop novel approaches and tools for rapidly prioritizing chemicals, a “Challenge” was issued to several exposure model developers to aid the understanding of current systems in a broader sense and to assist the US EPA’s effort to develop an approach comparable to other international efforts. A common set of chemicals was prioritized under each current approach. The results are presented herein along with a comparative analysis of the rankings of the chemicals based on metrics of exposure potential or actual exposure estimates. The analysis illustrates the similarities and differences across the domains of information incorporated in each modeling approach. The overall findings indicate a need to reconcile exposures from diffuse, indirect sources (far-field) with exposures from directly applied chemicals in consumer products or resulting from the presence of a chemical in a microenvironment like a home or vehicle. Additionally, the exposure scenario, including the mode of entry into the environment (i.e. through air, water or sediment), appears to be an important determinant of the level of agreement between modeling approaches. PMID:23707726

  14. Modelling Creativity: Identifying Key Components through a Corpus-Based Approach.

    Science.gov (United States)

    Jordanous, Anna; Keller, Bill

    2016-01-01

    Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research.
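    The clustering step above, grouping creativity-related words by lexical similarity so that distinct themes emerge, can be sketched as follows; the co-occurrence counts are invented, and k-means stands in for whatever clustering the authors used:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical co-occurrence counts of creativity-related words with
# three context terms; in the paper these come from a corpus of papers.
words = ["novel", "original", "useful", "valuable", "spontaneous"]
cooc = np.array([[10, 2, 1], [9, 3, 1], [1, 8, 7], [2, 9, 6], [5, 5, 5]],
                dtype=float)
cooc /= np.linalg.norm(cooc, axis=1, keepdims=True)  # cosine geometry

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cooc)
for word, component in zip(words, labels):
    print(word, "-> component", component)
```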

  15. A Conceptual Modeling Approach for OLAP Personalization

    Science.gov (United States)

    Garrigós, Irene; Pardillo, Jesús; Mazón, Jose-Norberto; Trujillo, Juan

    Data warehouses rely on multidimensional models in order to provide decision makers with appropriate structures to intuitively analyze data with OLAP technologies. However, data warehouses may be potentially large and multidimensional structures become increasingly complex to be understood at a glance. Even if a departmental data warehouse (also known as a data mart) is used, these structures would also be too complex. As a consequence, acquiring the required information is more costly than expected and decision makers using OLAP tools may become frustrated. In this context, current approaches for data warehouse design are focused on deriving a unique OLAP schema for all analysts from their previously stated information requirements, which is not enough to lighten the complexity of the decision making process. To overcome this drawback, we argue for personalizing multidimensional models for OLAP technologies according to the continuously changing user characteristics, context, requirements and behaviour. In this paper, we present a novel approach to personalizing OLAP systems at the conceptual level based on the underlying multidimensional model of the data warehouse, a user model and a set of personalization rules. The great advantage of our approach is that a personalized OLAP schema is provided for each decision maker, contributing to better satisfying their specific analysis needs. Finally, we show the applicability of our approach through a sample scenario based on our CASE tool for data warehouse development.

  16. Discovering Music Structure via Similarity Fusion

    DEFF Research Database (Denmark)

    Automatic methods for music navigation and music recommendation exploit the structure in the music to carry out a meaningful exploration of the “song space”. To get a satisfactory performance from such systems, one should incorporate as much information about songs similarity as possible; however … semantics”, in such a way that all observed similarities can be satisfactorily explained using the latent semantics. Therefore, one can think of these semantics as the real structure in music, in the sense that they can explain the observed similarities among songs. The suitability of the PLSA model for representing music structure is studied in a simplified scenario consisting of 4412 songs and two similarity measures among them. The results suggest that the PLSA model is a useful framework to combine different sources of information, and provides a reasonable space for song representation.

  17. Discovering Music Structure via Similarity Fusion

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Parrado-Hernandez, Emilio; Meng, Anders

    Automatic methods for music navigation and music recommendation exploit the structure in the music to carry out a meaningful exploration of the “song space”. To get a satisfactory performance from such systems, one should incorporate as much information about songs similarity as possible; however … semantics”, in such a way that all observed similarities can be satisfactorily explained using the latent semantics. Therefore, one can think of these semantics as the real structure in music, in the sense that they can explain the observed similarities among songs. The suitability of the PLSA model for representing music structure is studied in a simplified scenario consisting of 4412 songs and two similarity measures among them. The results suggest that the PLSA model is a useful framework to combine different sources of information, and provides a reasonable space for song representation.

  18. A BEHAVIORAL-APPROACH TO LINEAR EXACT MODELING

    NARCIS (Netherlands)

    ANTOULAS, AC; WILLEMS, JC

    1993-01-01

    The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both

  19. Similarity between neonatal profile and socioeconomic index: a spatial approach

    Directory of Open Access Journals (Sweden)

    d'Orsi Eleonora

    2005-01-01

    Full Text Available This study aims to compare neonatal characteristics and socioeconomic conditions in Rio de Janeiro city neighborhoods in order to identify priority areas for intervention. The study design was ecological. Two databases were used: the Brazilian Population Census and the Live Birth Information System, aggregated by neighborhoods. Spatial analysis, multivariate cluster classification, and Moran's I statistics for detection of spatial clustering were used. A similarity index was created to compare socioeconomic clusters with the neonatal profile in each neighborhood. The proportions of Apgar scores above 8 and of cesarean sections showed positive spatial correlation and high similarity with the socioeconomic index. The proportion of low birth weight infants showed a random spatial distribution, indicating that at this scale of analysis, birth weight is not sufficiently sensitive to discriminate subtler differences among population groups. The observed relationship between the neighborhoods' neonatal profile (particularly Apgar score and mode of delivery) and socioeconomic conditions shows evidence of a change in infant health profile, where the possibility for intervention shifts to medical services and the Apgar score assumes growing significance as a risk indicator.
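    Moran's I, used above to detect spatial clustering, has a short closed form. A minimal sketch with a hypothetical four-neighbourhood adjacency and invented rates:

```python
import numpy as np

def morans_i(x, w):
    """Moran's I spatial autocorrelation for values x and weight matrix w."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    n, W = len(x), w.sum()
    return (n / W) * (z @ w @ z) / (z @ z)

# Hypothetical adjacency of four neighbourhoods arranged along a line.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(round(morans_i([0.9, 0.8, 0.3, 0.2], w), 3))  # positive => clustering
```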

  20. A new approach for developing adjoint models

    Science.gov (United States)

    Farrell, P. E.; Funke, S. W.

    2011-12-01

    Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, doing so requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied only to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and the library then uses this tape to assemble and solve the discrete adjoint equations automatically.
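    The core of this abstraction, recovering a gradient from a tape of linear solves, can be shown on a single parametrised solve A(m)u = b with functional J(u) = c·u: one forward solve plus one adjoint solve yields dJ/dm. A toy sketch (not the library's actual API, which the abstract does not specify); all matrices are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
A0 = rng.standard_normal((4, 4))
b, c = rng.standard_normal(4), rng.standard_normal(4)
A = lambda m: (1.0 + m) * np.eye(4) + 0.3 * A0   # dA/dm = identity

m = 2.0
u = np.linalg.solve(A(m), b)        # forward solve
lam = np.linalg.solve(A(m).T, c)    # one extra adjoint solve
grad = -lam @ u                     # dJ/dm = -lam^T (dA/dm) u

eps = 1e-6                          # finite-difference sanity check
fd = (c @ np.linalg.solve(A(m + eps), b) - c @ u) / eps
print(grad, fd)                     # the two values should agree closely
```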

  1. Sherlock: A Semi-automatic Framework for Quiz Generation Using a Hybrid Semantic Similarity Measure.

    Science.gov (United States)

    Lin, Chenghua; Liu, Dong; Pang, Wei; Wang, Zhe

    In this paper, we present a semi-automatic system (Sherlock) for quiz generation using linked data and textual descriptions of RDF resources. Sherlock is distinguished from existing quiz generation systems by its generic framework for domain-independent quiz generation and by its ability to control the difficulty level of the generated quizzes. Difficulty scaling is non-trivial and fundamentally related to cognitive science. We approach the problem from a new angle by treating the level of knowledge difficulty as a similarity measure problem, and propose a novel hybrid semantic similarity measure using linked data. Extensive experiments show that the proposed semantic similarity measure outperforms four strong baselines with more than a 47% gain in clustering accuracy. In addition, we found in the human quiz test that the model accuracy indeed shows a strong correlation with the pairwise quiz similarity.

  2. An optimisation approach for capacity planning: modelling insights and empirical findings from a tactical perspective

    Directory of Open Access Journals (Sweden)

    Andréa Nunes Carvalho

    2017-09-01

    Full Text Available Abstract The academic literature presents a research-practice gap in the application of decision support tools to tactical planning problems in real-world organisations. This paper addresses that gap and extends a previous action-research study of an optimisation model applied to tactical capacity planning in an engineer-to-order industrial setting. The issues discussed herein offer new insights for better understanding the practical results that can be achieved with the proposed model. The topics presented include the modelling of objectives, the representation of the production process and the costing approach, as well as findings regarding managerial decisions and the scope of action considered. These insights may inspire academics and practitioners when developing tools for capacity planning problems in similar contexts.

  3. Towards a 3d Spatial Urban Energy Modelling Approach

    Science.gov (United States)

    Bahu, J.-M.; Koch, A.; Kremers, E.; Murshed, S. M.

    2013-09-01

    Today's need to reduce the environmental impact of energy use imposes dramatic changes on energy infrastructure and existing demand patterns (e.g. buildings) in their specific context. In addition, future energy systems are expected to integrate a considerable share of fluctuating power sources and an equally high share of distributed generation of electricity. Energy system models capable of describing such future systems and allowing the simulation of the impact of these developments therefore require a spatial representation in order to reflect the local context and the boundary conditions. This paper describes two recent research approaches developed at EIFER in the fields of (a) geo-localised simulation of heat energy demand in cities based on 3D morphological data and (b) spatially explicit Agent-Based Models (ABM) for the simulation of smart grids. 3D city models were used to assess the solar potential and heat energy demand of residential buildings, enabling cities to target building refurbishment potential. Distributed energy systems require innovative modelling techniques in which individual components are represented and can interact. With this approach, several smart grid demonstrators were simulated, in which heterogeneous models are spatially represented. Coupling 3D geodata with energy system ABMs holds different advantages for both approaches. On the one hand, energy system models can be enhanced with high-resolution data from 3D city models and their semantic relations, and they allow for spatial analysis and visualisation of the results, with emphasis on spatial and structural correlations among the different layers (e.g. infrastructure, buildings, administrative zones) to provide an integrated approach. On the other hand, 3D models can benefit from a more detailed system description of energy infrastructure, representing dynamic phenomena and high-resolution models of energy use at component level. The proposed modelling strategies

  4. Evaluation of approaches focused on modelling of organic carbon stocks using the RothC model

    Science.gov (United States)

    Koco, Štefan; Skalský, Rastislav; Makovníková, Jarmila; Tarasovičová, Zuzana; Barančíková, Gabriela

    2014-05-01

    The aim of current efforts in the European area is the protection of soil organic matter, which is included in all relevant documents related to the protection of soil. Modelling organic carbon stocks under anticipated climate change, or under particular land management, can significantly help in short- and long-term forecasting of the state of soil organic matter. The RothC model can be applied over time periods of several years to centuries and has been tested in long-term experiments across a large range of soil types and climatic conditions in Europe. For the initialization of the RothC model, knowledge of the carbon pool sizes is essential. Pool size characterization can be obtained from equilibrium model runs, but this approach is time consuming and tedious, especially for larger-scale simulations. Because of this complexity, we searched for new ways to simplify and accelerate the process. This paper presents a comparison of two approaches to SOC stock modelling in the same area. The modelling was carried out on the basis of separate land use, management and soil inputs for each simulation unit. We modelled 1617 simulation units on a 1x1 km grid over the agroclimatic region Žitný ostrov in southwestern Slovakia. The first approach creates groups of simulation units based on the evaluation of results for simulation units with similar input values. The groups were created after testing and validating the modelling results for individual simulation units against the results of modelling the average input values for the whole group. Tests of the equilibrium model for intervals of 5 t ha-1 of initial SOC stock showed minimal differences from the result for the average value of the whole interval. Management input data on plant residues and farmyard manure for modelling carbon turnover were also the same for several simulation units. Combining these groups (intervals of initial

  5. A new enhanced index tracking model in portfolio optimization with sum weighted approach

    Science.gov (United States)

    Siew, Lam Weng; Jaaman, Saiful Hafizah; Hoe, Lam Weng

    2017-04-01

    Index tracking is a portfolio management approach that aims to construct an optimal portfolio achieving a return similar to the benchmark index return at minimum tracking error, without purchasing all the stocks that make up the index. Enhanced index tracking is an improved approach that aims to generate a higher portfolio return than the benchmark index return while also minimizing the tracking error. The objective of this paper is to propose a new enhanced index tracking model with a sum weighted approach, improving on the existing index tracking model for tracking the benchmark Technology Index in Malaysia. The optimal portfolio composition and performance of both models are determined and compared in terms of portfolio mean return, tracking error and information ratio. The results of this study show that the optimal portfolio of the proposed model generates a higher mean return than the benchmark index at minimum tracking error, and that the proposed model outperforms the existing model in tracking the benchmark index. The significance of this study is to propose a new enhanced index tracking model with a sum weighted approach, which achieves a 67% improvement in portfolio mean return compared with the existing model.
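    Plain index tracking of the kind described above can be posed as minimising the standard deviation of the difference between portfolio and benchmark returns under budget and no-short constraints. A sketch with synthetic return data; the paper's sum weighted enhancement is not reproduced here:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
R  = rng.normal(0.001, 0.02, size=(250, 8))            # synthetic daily stock returns
rb = R @ np.full(8, 1 / 8) + rng.normal(0, 0.002, 250)  # synthetic benchmark returns

def tracking_error(w):
    """Standard deviation of (portfolio - benchmark) return differences."""
    return np.std(R @ w - rb)

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)   # fully invested
res = minimize(tracking_error, np.full(8, 1 / 8),
               bounds=[(0, 1)] * 8, constraints=cons)    # no short selling
print(res.x.round(3), tracking_error(res.x))
```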

  6. Self-Similar Spin Images for Point Cloud Matching

    Science.gov (United States)

    Pulido, Daniel

    based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach is to combine the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby define "Self-Similar Spin Images". This approach is then extended to the case of matching two point clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and a stereo-image-derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, thereby defining the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case, being just the maximum order statistic. Therefore, by studying the entire histogram of these nearest neighbors the method is expected to be more robust in detecting points that are present in one cloud but not the other. This approach is applied at multiple resolutions: changes detected at the coarsest level yield large missing targets, while finer levels yield smaller targets.
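    The Nearest Neighbor Order Statistic idea can be sketched directly: query every point of one cloud against a k-d tree built from the other and inspect the sorted distances, of which the maximum is the directed Hausdorff distance. The clouds below are hypothetical:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
cloud_a = rng.uniform(0, 10, size=(2000, 3))
# cloud_b: a jittered copy of cloud_a plus a small added "target".
cloud_b = np.vstack([cloud_a + rng.normal(0, 0.02, cloud_a.shape),
                     rng.uniform(8, 10, size=(50, 3))])

d, _ = cKDTree(cloud_a).query(cloud_b)   # NN distance of each b-point to cloud_a
d.sort()                                 # the order statistics of those distances
# Maximum order statistic = directed Hausdorff; the upper tail flags changes.
print(d[-1], np.percentile(d, 95))
```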

  7. Model-free information-theoretic approach to infer leadership in pairs of zebrafish.

    Science.gov (United States)

    Butail, Sachit; Mwaffo, Violet; Porfiri, Maurizio

    2016-04-01

    Collective behavior affords several advantages to fish in avoiding predators, foraging, mating, and swimming. Although fish schools have been traditionally considered egalitarian superorganisms, a number of empirical observations suggest the emergence of leadership in gregarious groups. Detecting and classifying leader-follower relationships is central to elucidate the behavioral and physiological causes of leadership and understand its consequences. Here, we demonstrate an information-theoretic approach to infer leadership from positional data of fish swimming. In this framework, we measure social interactions between fish pairs through the mathematical construct of transfer entropy, which quantifies the predictive power of a time series to anticipate another, possibly coupled, time series. We focus on the zebrafish model organism, which is rapidly emerging as a species of choice in preclinical research for its genetic similarity to humans and reduced neurobiological complexity with respect to mammals. To overcome experimental confounds and generate test data sets on which we can thoroughly assess our approach, we adapt and calibrate a data-driven stochastic model of zebrafish motion for the simulation of a coupled dynamical system of zebrafish pairs. In this synthetic data set, the extent and direction of the coupling between the fish are systematically varied across a wide parameter range to demonstrate the accuracy and reliability of transfer entropy in inferring leadership. Our approach is expected to aid in the analysis of collective behavior, providing a data-driven perspective to understand social interactions.
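    Transfer entropy for symbolised (binned) series with history length one reduces to counting triples. A self-contained sketch in which a synthetic "follower" copies a synthetic "leader" with a one-step lag, so the information transfer is strongly asymmetric; this is an illustration, not the paper's calibrated zebrafish model:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """TE from y to x (bits) for symbol series, history length 1."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))
    pairs_xy = Counter(zip(x[:-1], y[:-1]))
    pairs_xx = Counter(zip(x[1:], x[:-1]))
    singles = Counter(x[:-1])
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_xy = c / pairs_xy[(x0, y0)]               # p(x1 | x0, y0)
        p_cond_x = pairs_xx[(x1, x0)] / singles[x0]      # p(x1 | x0)
        te += p_joint * np.log2(p_cond_xy / p_cond_x)
    return te

rng = np.random.default_rng(0)
leader = rng.integers(0, 2, 5000)
follower = np.roll(leader, 1)               # follower copies leader, lag 1
follower[0] = 0
print(transfer_entropy(follower, leader))   # high: leader predicts follower
print(transfer_entropy(leader, follower))   # near zero in the reverse direction
```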

  8. Statistical Modeling Approach to Quantitative Analysis of Interobserver Variability in Breast Contouring

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jinzhong, E-mail: jyang4@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Woodward, Wendy A.; Reed, Valerie K.; Strom, Eric A.; Perkins, George H.; Tereffe, Welela; Buchholz, Thomas A. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Zhang, Lifei; Balter, Peter; Court, Laurence E. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Li, X. Allen [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin (United States); Dong, Lei [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Scripps Proton Therapy Center, San Diego, California (United States)

    2014-05-01

    Purpose: To develop a new approach for interobserver variability analysis. Methods and Materials: Eight radiation oncologists specializing in breast cancer radiation therapy delineated a patient's left breast “from scratch” and from a template that was generated using deformable image registration. Three of the radiation oncologists had previously received training in Radiation Therapy Oncology Group (RTOG) consensus contouring for the breast cancer atlas. The simultaneous truth and performance level estimation algorithm was applied to the 8 contours delineated “from scratch” to produce a group consensus contour. Individual Jaccard scores were fitted to a beta distribution model. We also applied this analysis to 2 additional patients, contoured by 9 breast radiation oncologists from 8 institutions. Results: The beta distribution model had a mean of 86.2%, standard deviation (SD) of ±5.9%, a skewness of −0.7, and excess kurtosis of 0.55, exemplifying broad interobserver variability. The 3 RTOG-trained physicians had higher agreement scores than average, indicating that their contours were close to the group consensus contour. One physician had high sensitivity but lower specificity than the others, which implies that this physician tended to contour a structure larger than those of the others. Two other physicians had low sensitivity but specificity similar to the others, which implies that they tended to contour a structure smaller than the others. With this information, they could adjust their contouring practice to be more consistent with the others if desired. When contouring from the template, the beta distribution model had a mean of 92.3%, SD ±3.4%, skewness of −0.79, and excess kurtosis of 0.83, indicating much better consistency among individual contours. Similar results were obtained for the analysis of the 2 additional patients. Conclusions: The proposed statistical approach was able to measure interobserver variability quantitatively.
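    Fitting a beta distribution to agreement scores, as done above for the Jaccard values, takes a few lines with scipy; the scores below are invented for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical Jaccard agreement scores for eight observers.
scores = np.array([0.79, 0.83, 0.86, 0.88, 0.90, 0.84, 0.92, 0.87])

# Fit a beta distribution with its support fixed to [0, 1].
a, b, loc, scale = stats.beta.fit(scores, floc=0, fscale=1)
mean, var = stats.beta.stats(a, b, moments="mv")
print(a, b, float(mean), float(np.sqrt(var)))   # shape params, mean, SD
```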

  9. A website evaluation model by integration of previous evaluation models using a quantitative approach

    Directory of Open Access Journals (Sweden)

    Ali Moeini

    2015-01-01

    Full Text Available Given the growth of e-commerce, websites play an essential role in business success, and many authors have proposed website evaluation models since 1995. However, the multiplicity and diversity of these evaluation models make it difficult to integrate them into a single comprehensive model. In this paper, a quantitative method is used to integrate previous models into a comprehensive model that is compatible with them. In this approach the researcher's judgment plays no role in the integration of the models, and the new model derives its validity from 93 previous models and a systematic quantitative approach.

  10. A Bayesian approach for quantification of model uncertainty

    International Nuclear Information System (INIS)

    Park, Inseok; Amarchinta, Hemanth K.; Grandhi, Ramana V.

    2010-01-01

    In most engineering problems, more than one model can be created to represent an engineering system's behavior. Uncertainty is inevitably involved in selecting the best model from among the models that are possible. Uncertainty in model selection cannot be ignored, especially when the differences between the predictions of competing models are significant. In this research, a methodology is proposed to quantify model uncertainty using measured differences between experimental data and model outcomes under a Bayesian statistical framework. The adjustment factor approach is used to propagate model uncertainty into prediction of a system response. A nonlinear vibration system is used to demonstrate the processes for implementing the adjustment factor approach. Finally, the methodology is applied on the engineering benefits of a laser peening process, and a confidence band for residual stresses is established to indicate the reliability of model prediction.

  11. Self-similarities of periodic structures for a discrete model of a two-gene system

    International Nuclear Information System (INIS)

    Souza, S.L.T. de; Lima, A.A.; Caldas, I.L.; Medrano-T, R.O.; Guimarães-Filho, Z.O.

    2012-01-01

    We report self-similar properties of periodic structures remarkably organized in the two-parameter space of a two-gene system, described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system using Lyapunov exponents and isoperiodic diagrams, identifying periodic windows denominated Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both kinds of periodic windows. We also identify Fibonacci-type series and the Golden ratio for the Arnold tongues, and period multiple-of-three windows for the shrimps. -- Highlights: ► The existence of noticeable periodic windows has been reported recently for several nonlinear systems. ► The periodic window distributions appear highly organized in two-parameter space. ► We characterize self-similar properties of Arnold tongues and shrimps for a two-gene model. ► We determine the period of the Arnold tongues recognizing a Fibonacci-type sequence. ► We explore self-similar features of the shrimps identifying multiple period-three structures.

  12. Self-similarities of periodic structures for a discrete model of a two-gene system

    Energy Technology Data Exchange (ETDEWEB)

    Souza, S.L.T. de, E-mail: thomaz@ufsj.edu.br [Departamento de Física e Matemática, Universidade Federal de São João del-Rei, Ouro Branco, MG (Brazil); Lima, A.A. [Escola de Farmácia, Universidade Federal de Ouro Preto, Ouro Preto, MG (Brazil); Caldas, I.L. [Instituto de Física, Universidade de São Paulo, São Paulo, SP (Brazil); Medrano-T, R.O. [Departamento de Ciências Exatas e da Terra, Universidade Federal de São Paulo, Diadema, SP (Brazil); Guimarães-Filho, Z.O. [Aix-Marseille Univ., CNRS PIIM UMR6633, International Institute for Fusion Science, Marseille (France)

    2012-03-12

    We report self-similar properties of periodic structures remarkably organized in the two-parameter space of a two-gene system, described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system using Lyapunov exponents and isoperiodic diagrams, identifying periodic windows denominated Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both kinds of periodic windows. We also identify Fibonacci-type series and the Golden ratio for the Arnold tongues, and period multiple-of-three windows for the shrimps. -- Highlights: ► The existence of noticeable periodic windows has been reported recently for several nonlinear systems. ► The periodic window distributions appear highly organized in two-parameter space. ► We characterize self-similar properties of Arnold tongues and shrimps for a two-gene model. ► We determine the period of the Arnold tongues recognizing a Fibonacci-type sequence. ► We explore self-similar features of the shrimps identifying multiple period-three structures.
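    Lyapunov exponents of a two-dimensional map are typically obtained by accumulating Jacobian products with QR re-orthonormalisation. A sketch using the Hénon map as a stand-in, since the abstract does not give the two-gene map's equations:

```python
import numpy as np

def henon(v, a=1.4, b=0.3):
    x, y = v
    return np.array([1 - a * x**2 + y, b * x])

def jacobian(v, a=1.4, b=0.3):
    x, _ = v
    return np.array([[-2 * a * x, 1.0], [b, 0.0]])

v, Q = np.array([0.1, 0.1]), np.eye(2)
sums, n = np.zeros(2), 20000
for _ in range(n):
    Q, R = np.linalg.qr(jacobian(v) @ Q)   # re-orthonormalise tangent vectors
    sums += np.log(np.abs(np.diag(R)))     # accumulate local stretching rates
    v = henon(v)
print(sums / n)   # roughly [0.42, -1.62]: one positive exponent signals chaos
```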

  13. Comparative review of three cost-effectiveness models for rotavirus vaccines in national immunization programs; a generic approach applied to various regions in the world

    Directory of Open Access Journals (Sweden)

    Tu Hong-Anh

    2011-07-01

    Full Text Available Abstract Background This study aims to critically review available cost-effectiveness models for rotavirus vaccination, compare their designs using a standardized approach, and compare similarities and differences in cost-effectiveness outcomes using a uniform set of input parameters. Methods We identified various models used to estimate the cost-effectiveness of rotavirus vaccination. From these, results using a standardized dataset for four regions in the world could be obtained for three specific applications. Results Despite differences in the approaches and in individual constituent elements, including costs, QALYs (Quality-Adjusted Life Years) and deaths, the cost-effectiveness results of the models were quite similar. Differences between the models on the individual components of cost-effectiveness could be related to specific features of the respective models. Sensitivity analysis revealed that the cost-effectiveness of rotavirus vaccination is highly sensitive to vaccine prices, rotavirus-associated mortality and discount rates, in particular the discount rate for QALYs. Conclusions The comparative approach followed here is helpful in understanding the various models selected and will thus benefit low- and middle-income countries in designing their own cost-effectiveness analyses using new or adapted existing models. Potential users of the models in low- and middle-income countries need to consider results from existing studies and reviews. There will be a need for contextualization, including the use of country-specific data inputs. However, given that the underlying biological and epidemiological mechanisms do not change between countries, users are likely to be able to adapt existing model designs rather than develop completely new approaches. Also, the communication established between the individual researchers involved in the three models is helpful in the further development of these individual models. Therefore, we recommend that this kind of comparative study

  14. An algebraic approach to modeling in software engineering

    International Nuclear Information System (INIS)

    Loegel, C.J.; Ravishankar, C.V.

    1993-09-01

    Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe "computer science" objects like abstract data types, but in practice software errors are often caused because "real-world" objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form.

  15. Atomistic approach for modeling metal-semiconductor interfaces

    DEFF Research Database (Denmark)

    Stradi, Daniele; Martinez, Umberto; Blom, Anders

    2016-01-01

    We present a general framework for simulating interfaces using an atomistic approach based on density functional theory and non-equilibrium Green's functions. The method includes all the relevant ingredients, such as doping and an accurate value of the semiconductor band gap, required to model realistic metal-semiconductor interfaces, and allows for a direct comparison between theory and experiments via the I–V curve. In particular, it will be demonstrated how doping — and bias — modifies the Schottky barrier, and how finite size models (the slab approach) are unable to describe these interfaces.

  16. Multi-model approach to characterize human handwriting motion.

    Science.gov (United States)

    Chihi, I; Abdelkrim, A; Benrejeb, M

    2016-02-01

    This paper deals with the characterization and modelling of human handwriting motion from two forearm muscle activity signals, called electromyography (EMG) signals. In this work, an experimental approach was used to record the coordinates of a pen tip moving on the (x, y) plane and the EMG signals during the handwriting act. The main purpose is to design a new mathematical model which characterizes this biological process. Based on a multi-model approach, this system was originally developed to generate letters and geometric forms written by different writers. A Recursive Least Squares algorithm is used to estimate the parameters of each sub-model of the multi-model basis. Simulations show good agreement between predicted results and the recorded data.
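    The Recursive Least Squares estimator mentioned above updates the sub-model parameters one sample at a time. A minimal sketch with a synthetic linear system standing in for the EMG-to-pen-motion sub-models:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive-least-squares step for y ≈ phi·theta (forgetting factor lam)."""
    k = P @ phi / (lam + phi @ P @ phi)       # gain vector
    theta = theta + k * (y - phi @ theta)     # parameter update
    P = (P - np.outer(k, phi @ P)) / lam      # covariance update
    return theta, P

rng = np.random.default_rng(0)
true_theta = np.array([0.7, -0.3])            # hypothetical sub-model parameters
theta, P = np.zeros(2), 1e3 * np.eye(2)
for _ in range(500):
    phi = rng.standard_normal(2)              # regressors (e.g. EMG features)
    y = phi @ true_theta + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta)   # converges near [0.7, -0.3]
```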

  17. Nonlinear Modeling of the PEMFC Based On NNARX Approach

    OpenAIRE

    Shan-Jen Cheng; Te-Jen Chang; Kuang-Hsiung Tan; Shou-Ling Kuo

    2015-01-01

    The Polymer Electrolyte Membrane Fuel Cell (PEMFC) is a time-varying nonlinear dynamic system, and a traditional linear modeling approach can hardly estimate the structure of the PEMFC system correctly. For this reason, this paper presents nonlinear modeling of the PEMFC using the Neural Network Auto-Regressive model with eXogenous inputs (NNARX) approach. A multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accuracy ...

  18. Understanding Gulf War Illness: An Integrative Modeling Approach

    Science.gov (United States)

    2017-10-01

    using a novel mathematical model. The computational biology approach will enable the consortium to quickly identify targets of dysfunction and find … (from the milestone table: develop computer/mathematical paradigms for evaluation of treatment strategies, months 12-30; develop pilot clinical trials on the basis of animal studies, months 24-36) … the goal of testing chemical treatments. The immune and autonomic biomarkers will be tested using a computational modeling approach allowing for a

  19. Construction of 2D quasi-periodic Rauzy tiling by similarity transformation

    International Nuclear Information System (INIS)

    Zhuravlev, V. G.; Maleev, A. V.

    2009-01-01

    A new approach to constructing self-similar fractal tilings is proposed based on the construction of semigroups generated by a finite set of similarity transformations. The Rauzy tiling, a 2D analog of the 1D Fibonacci tiling generated by the golden mean, is used as an example to illustrate this approach. It is shown that the Rauzy torus development and the elementary fractal boundary of the Rauzy tiling can be constructed as sets of centers of similarity semigroups generated by two and three similarity transformations, respectively. A centrosymmetric tiling, locally dual to the Rauzy tiling, is constructed for the first time and its parameterization is developed.

  20. Common neighbour structure and similarity intensity in complex networks

    Science.gov (United States)

    Hou, Lei; Liu, Kecheng

    2017-10-01

    Complex systems as networks always exhibit strong regularities, implying underlying mechanisms governing their evolution. In addition to the degree preference, similarity has been argued to be another driver for networks. Assuming a network is randomly organised without similarity preference, the present paper studies the expected number of common neighbours between vertices. A symmetric similarity index is accordingly developed by subtracting this expected number from the observed number of common neighbours. The developed index can describe not only the similarities between vertices, but also the dissimilarities. We further apply the proposed index to measure the influence of similarity on the wiring patterns of networks. Fifteen empirical networks as well as artificial networks are examined in terms of similarity intensity and degree heterogeneity. Results on real networks indicate that social networks are strongly governed by similarity as well as by degree preference, while biological networks and infrastructure networks show no apparent similarity governance. In particular, classical network models, such as the Barabási-Albert model, the Erdös-Rényi model and the Ring Lattice, cannot describe social networks well in terms of degree heterogeneity and similarity intensity. The findings may shed some light on the modelling and link prediction of different classes of networks.

  1. Heat transfer modeling an inductive approach

    CERN Document Server

    Sidebotham, George

    2015-01-01

    This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but with a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, offering a deeper, intrinsic grasp behind models of heat transfer. Developed from over twenty-five years of lecture notes to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering discipl...
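    The 1-node lumped model that anchors the book's approach has a closed-form solution: the temperature decays exponentially toward ambient with time constant tau = mc/(hA). A sketch with hypothetical physical values:

```python
import numpy as np

# 1-node lumped model of a cooling object: m c dT/dt = -h A (T - T_inf),
# whose closed-form solution is T(t) = T_inf + (T0 - T_inf) exp(-t/tau).
h, A = 25.0, 0.01          # convection coefficient (W/m^2 K), surface area (m^2)
m, c = 0.5, 900.0          # mass (kg), specific heat (J/kg K) -- hypothetical
T0, T_inf = 95.0, 20.0     # initial and ambient temperatures (deg C)

tau = m * c / (h * A)      # time constant (s)
t = np.linspace(0, 5 * tau, 6)
T = T_inf + (T0 - T_inf) * np.exp(-t / tau)
print(tau, T.round(1))     # about 63% of the drop occurs within one tau
```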

  2. Polynomial Chaos Expansion Approach to Interest Rate Models

    Directory of Open Access Journals (Sweden)

    Luca Di Persio

    2015-01-01

    Full Text Available The Polynomial Chaos Expansion (PCE technique allows us to recover a finite second-order random variable exploiting suitable linear combinations of orthogonal polynomials which are functions of a given stochastic quantity ξ, hence acting as a kind of random basis. The PCE methodology has been developed as a mathematically rigorous Uncertainty Quantification (UQ method which aims at providing reliable numerical estimates for some uncertain physical quantities defining the dynamic of certain engineering models and their related simulations. In the present paper, we use the PCE approach in order to analyze some equity and interest rate models. In particular, we take into consideration those models which are based on, for example, the Geometric Brownian Motion, the Vasicek model, and the CIR model. We present theoretical as well as related concrete numerical approximation results considering, without loss of generality, the one-dimensional case. We also provide both an efficiency study and an accuracy study of our approach by comparing its outputs with the ones obtained adopting the Monte Carlo approach, both in its standard and its enhanced version.

  3. Similarity and self-similarity in high energy density physics: application to laboratory astrophysics

    International Nuclear Information System (INIS)

    Falize, E.

    2008-10-01

    The spectacular recent development of powerful facilities allows the astrophysical community to explore, in the laboratory, astrophysical phenomena where radiation and matter are strongly coupled. The titles of the nine chapters of the thesis are: from high energy density physics to laboratory astrophysics; Lie groups, invariance and self-similarity; scaling laws and similarity properties in high-energy-density physics; the Burgan-Feix-Munier transformation; dynamics of polytropic gases; stationary radiating shocks and the POLAR project; structure, dynamics and stability of optically thin fluids; from young star jets to laboratory jets; modelling and experiments for laboratory jets.

  4. An application of ensemble/multi model approach for wind power production forecast.

    Science.gov (United States)

    Alessandrini, S.; Decimi, G.; Hagedorn, R.; Sperati, S.

    2010-09-01

    Wind power forecasts for the 3-day-ahead period are becoming ever more useful and important in reducing grid-integration problems and in energy price trading as wind power penetration increases. It is therefore clear that the accuracy of this forecast is one of the most important requirements for a successful application. The wind power forecast is based on mesoscale meteorological models that provide the 3-day-ahead wind data. A Model Output Statistics correction is then performed to reduce systematic errors caused, for instance, by a wrong representation of surface roughness or topography in the meteorological models. The corrected wind data are then fed into the wind farm power curve to obtain the power forecast. These computations require historical time series of measured wind data (from an anemometer located in the wind farm or on the nacelle) and power data in order to perform the statistical analysis on the past. For this purpose a Neural Network (NN) is trained on the past data and then applied in the forecast task. Because anemometer measurements are not always available in a wind farm, a different approach has also been adopted: the NN is trained to link the forecast meteorological data directly to the power data. The normalized RMSE forecast error is lower in most cases with this second approach. We examined two wind farms, one located in Denmark on flat terrain and one located in a mountainous area in the south of Italy (Sicily). In both cases we compare the performance of a prediction based on meteorological data from a single model with that obtained using two or more models (RAMS, ECMWF deterministic, LAMI, HIRLAM). It is shown that the multi-model approach reduces the day-ahead normalized RMSE forecast error by at least 1% compared to the single-model approach. Moreover the use of a deterministic global model, (e.g. ECMWF deterministic

  5. A Bayesian inverse modeling approach to estimate soil hydraulic properties of a toposequence in southeastern Amazonia.

    Science.gov (United States)

    Stucchi Boschi, Raquel; Qin, Mingming; Gimenez, Daniel; Cooper, Miguel

    2016-04-01

    Modeling is an important tool for better understanding and assessing land use impacts on landscape processes. A key requirement for environmental modeling is knowledge of soil hydraulic properties. However, direct determination of soil hydraulic properties is difficult and costly, particularly in vast and remote regions such as the one constituting the Amazon Biome. One way to overcome this problem is to extrapolate accurately estimated data to pedologically similar sites. The van Genuchten (VG) parametric equation is the one most commonly used for modeling the soil water retention curve (SWRC). Using a Bayesian approach in combination with Markov chain Monte Carlo to estimate the VG parameters has several advantages over the widely used global optimization techniques: the Bayesian approach provides posterior distributions of parameters that are independent of the initial values and allow for uncertainty analyses. The main objectives of this study were: i) to estimate hydraulic parameters from data of pasture and forest sites by the Bayesian inverse modeling approach; and ii) to investigate the extrapolation of the estimated VG parameters to a nearby toposequence with soils pedologically similar to those used for the estimation. The parameters were estimated from volumetric water content and tension observations obtained after rainfall events during a 207-day period at pasture and forest sites located in the southeastern Amazon region. These data were used to run HYDRUS-1D under a Differential Evolution Adaptive Metropolis (DREAM) scheme 10,000 times, and only the last 2,500 runs were used to calculate the posterior distributions of each hydraulic parameter along with 95% confidence intervals (CI) of the volumetric water content and tension time series. The posterior distributions were then used to generate hydraulic parameters for two nearby toposequences composed of six soil profiles, three under forest and three under pasture. The parameters of the nearby site were accepted when
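    The van Genuchten retention curve that the Bayesian scheme calibrates is a four-parameter function of tension. A sketch with invented parameter values of the kind a posterior mean might give:

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at tension h (h > 0, units matching 1/alpha)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Hypothetical parameters for a forest topsoil: residual and saturated
# water content, alpha (1/cm) and shape parameter n.
h = np.array([1.0, 10.0, 100.0, 1000.0])   # tension (cm)
print(van_genuchten(h, 0.10, 0.45, 0.05, 1.6).round(3))
```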

  6. Do recommender systems benefit users? a modeling approach

    Science.gov (United States)

    Yeung, Chi Ho

    2016-04-01

    Recommender systems are present in many web applications to guide purchase choices. They increase sales and benefit sellers, but whether they benefit customers by providing relevant products remains less explored. While in many cases the recommended products are relevant to users, in other cases customers may be tempted to purchase the products only because they are recommended. Here we introduce a model to examine the benefit of recommender systems for users, and find that recommendations from the system can be equivalent to random draws if one always follows the recommendations and seldom purchases according to his or her own preference. Nevertheless, with sufficient information about user preferences, recommendations become accurate and an abrupt transition to this accurate regime is observed for some of the studied algorithms. On the other hand, we find that high estimated accuracy indicated by common accuracy metrics is not necessarily equivalent to high real accuracy in matching users with products. This disagreement between estimated and real accuracy serves as an alarm for operators and researchers who evaluate recommender systems merely with accuracy metrics. We tested our model with a real dataset and observed similar behaviors. Finally, a recommendation approach with improved accuracy is suggested. These results imply that recommender systems can benefit users, but the more frequently a user purchases the recommended products, the less relevant the recommended products are in matching user taste.

  7. A model-driven approach to information security compliance

    Science.gov (United States)

    Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena

    2017-06-01

    The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be approached holistically, combining the assets that support corporate systems in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain-level model (computation independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.

  8. Eutrophication Modeling Using Variable Chlorophyll Approach

    International Nuclear Information System (INIS)

    Abdolabadi, H.; Sarang, A.; Ardestani, M.; Mahjoobi, E.

    2016-01-01

    In this study, eutrophication was investigated in Lake Ontario to identify the interactions among effective drivers. The complexity of this phenomenon was modeled using a system dynamics approach based on a consideration of constant and variable stoichiometric ratios. The system dynamics approach is a powerful tool for developing object-oriented models to simulate complex phenomena that involve feedback effects. Utilizing stoichiometric ratios is a method for converting the concentrations of state variables. For the physical segmentation of the model, Lake Ontario was divided into two layers, i.e., the epilimnion and hypolimnion, and differential equations were developed for each layer. The model structure included 16 state variables related to phytoplankton, herbivorous zooplankton, carnivorous zooplankton, ammonium, nitrate, dissolved phosphorus, and particulate and dissolved carbon in the epilimnion and hypolimnion over a time horizon of one year. Several verification tests, a Nash-Sutcliffe coefficient close to 1 (0.98), a data correlation coefficient of 0.98, and low standard errors (0.96), indicated that the model performs well. The results revealed that there were significant differences in the concentrations of the state variables between the constant and variable stoichiometry simulations. Consequently, the consideration of variable stoichiometric ratios in algae and nutrient concentration simulations may be applied in future modeling studies to enhance the accuracy of the results and reduce the likelihood of inefficient control policies.
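
    The study's model tracks 16 state variables in two layers; the sketch below shrinks that to a three-state caricature (epilimnion phytoplankton, epilimnion nutrient, hypolimnion nutrient) just to show the system dynamics structure: Monod uptake, losses settling into the hypolimnion, and mixing across the thermocline. All rate constants are hypothetical, and a fixed 1:1 stoichiometric conversion is assumed where the paper would use constant or variable ratios.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical rates (1/day) for a two-box (epilimnion/hypolimnion) sketch
    MU_MAX, K_N, MORT, SETTLE, MIX = 1.2, 0.02, 0.1, 0.05, 0.01

    def deriv(t, y):
        p_epi, n_epi, n_hypo = y
        growth = MU_MAX * n_epi / (K_N + n_epi) * p_epi   # Monod nutrient limitation
        dp = growth - MORT * p_epi - SETTLE * p_epi        # growth minus losses
        dn_epi = -growth + MIX * (n_hypo - n_epi)          # uptake + thermocline mixing
        dn_hypo = (MORT + SETTLE) * p_epi - MIX * (n_hypo - n_epi)  # remineralization
        return [dp, dn_epi, dn_hypo]

    sol = solve_ivp(deriv, (0, 365), [0.01, 0.3, 0.5])     # one-year horizon
    print(sol.y[:, -1])                                    # state after one year
    ```

    Making the 1:1 conversion factor a state-dependent function is, in miniature, the step from constant to variable stoichiometry that the study evaluates.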

  9. Advanced language modeling approaches, case study: Expert search

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2008-01-01

    This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the

  10. Modeling amorphization of tetrahedral structures under local approaches

    International Nuclear Information System (INIS)

    Jesurum, C.E.; Pulim, V.; Berger, B.; Hobbs, L.W.

    1997-01-01

    Many crystalline ceramics can be topologically disordered (amorphized) by disordering radiation events involving high-energy collision cascades or (in some cases) successive single-atom displacements. The authors are interested in both the potential for disorder and the possible aperiodic structures adopted following the disordering event. The potential for disordering is related to connectivity, and among those structures of interest are tetrahedral networks (such as SiO2, SiC and Si3N4) comprising corner-shared tetrahedral units whose connectivities are easily evaluated. In order to study the response of these networks to radiation, the authors have chosen to model their assembly according to the (simple) local rules that each corner obeys in connecting to another tetrahedron; in this way they easily erect large computer models of any crystalline polymorphic form. Amorphous structures can be similarly grown by application of altered rules. They have adopted a simple model of irradiation in which all bonds in the neighborhood of a designated tetrahedron are destroyed, and they reform the bonds in this region according to a set of (possibly different) local rules appropriate to the environmental conditions. When a tetrahedron approaches the boundary of this neighborhood, it undergoes an optimization step in which a spring is inserted between two corners of compatible tetrahedra when they are within a certain distance of one another; component forces are then applied that act to minimize the distance between these corners and minimize the deviation from the rules. The resulting structure is then analyzed for the complete adjacency matrix, irreducible ring statistics, and bond angle distributions.

  11. Lightweight approach to model traceability in a CASE tool

    Science.gov (United States)

    Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita

    2017-07-01

    A term "model-driven" is not at all a new buzzword within the ranks of system development community. Nevertheless, the ever increasing complexity of model-driven approaches keeps fueling all kinds of discussions around this paradigm and pushes researchers forward to research and develop new and more effective ways to system development. With the increasing complexity, model traceability, and model management as a whole, becomes indispensable activities of model-driven system development process. The main goal of this paper is to present a conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.

  12. Modeling of similar economies

    Directory of Open Access Journals (Sweden)

    Sergey B. Kuznetsov

    2017-06-01

    Full Text Available Objective: to obtain dimensionless criteria (economic indices) characterizing the national economy and not depending on its size. Methods: mathematical modeling, theory of dimensions, processing of statistical data. Results: basing on differential equations describing the national economy with account for the resistance of the economic environment, two dimensionless criteria are obtained which allow comparing economies regardless of their sizes. With the theory of dimensions we show that the obtained indices are not accidental. We demonstrate the implementation of the obtained dimensionless criteria for the analysis of the behavior of certain countries' economies. Scientific novelty: dimensionless criteria (economic indices) are obtained which allow comparing economies regardless of their sizes and analyzing the dynamic changes in the economies over time. Practical significance: the obtained results can be used for dynamic and comparative analysis of different countries' economies regardless of their sizes.

  13. Multiscale modeling of alloy solidification using a database approach

    Science.gov (United States)

    Tan, Lijian; Zabaras, Nicholas

    2007-11-01

    A two-scale model based on a database approach is presented to investigate alloy solidification. Appropriate assumptions are introduced to describe the behavior of macroscopic temperature, macroscopic concentration, liquid volume fraction and microstructure features. These assumptions lead to a macroscale model with two unknown functions: liquid volume fraction and microstructure features. These functions are computed using information from microscale solutions of selected problems. This work addresses the selection of sample problems relevant to the problem of interest and the utilization of data from the microscale solution of the selected sample problems. A computationally efficient model, which is different from the microscale and macroscale models, is utilized to find relevant sample problems. In this work, the computationally efficient model is a sharp interface solidification model of a pure material. Similarities between the sample problems and the problem of interest are explored by assuming that the liquid volume fraction and microstructure features are functions of solution features extracted from the solution of the computationally efficient model. The solution features of the computationally efficient model are selected as the interface velocity and thermal gradient in the liquid at the time the sharp solid-liquid interface passes through. An analytical solution of the computationally efficient model is utilized to select sample problems relevant to solution features obtained at any location of the domain of the problem of interest. The microscale solution of selected sample problems is then utilized to evaluate the two unknown functions (liquid volume fraction and microstructure features) in the macroscale model. The temperature solution of the macroscale model is further used to improve the estimation of the liquid volume fraction and microstructure features. Interpolation is utilized in the feature space to greatly reduce the number of required

  14. Propriedades termofísicas de soluções modelo similares a sucos - Parte I Thermophysical properties of model solutions similar to juice - Part I

    Directory of Open Access Journals (Sweden)

    Silvia Cristina Sobottka Rolim de Moura

    2003-04-01

    Full Text Available Thermophysical properties (thermal diffusivity and specific heat) of model solutions similar to juices were experimentally determined, and the values obtained were compared with those predicted by mathematical models (STATISTICA 6.0) and with values reported in the literature, as a function of chemical composition. A star experimental design was adopted to define the composition of the model solutions, fixing the acid content at 1.5% and varying the water (82-98.5%), carbohydrate (0-15%) and fat (0-1.5%) contents. The specific heat was determined by the Hwang & Hayakawa method, and the thermal diffusivity by the Dickerson method. The results for each property were analysed using response surfaces. Significant results were found, showing that the fitted models adequately represent the changes in the thermal properties of juices with composition and temperature.

  15. Propriedades termofísicas de soluções-modelo similares a sucos: parte II Thermophysical properties of model solutions similar to juice: part II

    Directory of Open Access Journals (Sweden)

    Sílvia Cristina Sobottka Rolim de Moura

    2005-09-01

    Full Text Available Thermophysical properties (density and viscosity) of model solutions similar to juices were experimentally determined. The results were compared with those predicted by mathematical models (STATISTICA 6.0) and with values reported in the literature, as a function of chemical composition. A star experimental design was adopted to define the composition of the model solutions, fixing the acid content at 1.5% and varying the water (82-98.5%), carbohydrate (0-15%) and fat (0-1.5%) contents. The density was determined with a pycnometer. The viscosity was determined with a Brookfield model LVF viscometer. The thermal conductivity was calculated from the thermal diffusivity and specific heat values (presented in Part I of this work, MOURA [7]) and the density. The results for each property were analysed using response surfaces. The results were significant, indicating that the models represent the changes of

  16. Environmental niche models for riverine desert fishes and their similarity according to phylogeny and functionality

    Science.gov (United States)

    Whitney, James E.; Whittier, Joanna B.; Paukert, Craig P.

    2017-01-01

    Environmental filtering and competitive exclusion are hypotheses frequently invoked in explaining species' environmental niches (i.e., geographic distributions). A key assumption in both hypotheses is that the functional niche (i.e., species traits) governs the environmental niche, but few studies have rigorously evaluated this assumption. Furthermore, phylogeny could be associated with these hypotheses if it is predictive of functional niche similarity via phylogenetic signal or convergent evolution, or of environmental niche similarity through phylogenetic attraction or repulsion. The objectives of this study were to investigate relationships between environmental niches, functional niches, and phylogenies of fishes of the Upper (UCRB) and Lower (LCRB) Colorado River Basins of southwestern North America. We predicted that functionally similar species would have similar environmental niches (i.e., environmental filtering) and that closely related species would be functionally similar (i.e., phylogenetic signal) and possess similar environmental niches (i.e., phylogenetic attraction). Environmental niches were quantified using environmental niche modeling, and functional similarity was determined using functional trait data. Nonnatives in the UCRB provided the only support for environmental filtering, which resulted from several warmwater nonnatives having dam number as a common predictor of their distributions, whereas several cool- and coldwater nonnatives shared mean annual air temperature as an important distributional predictor. Phylogenetic signal was supported for both natives and nonnatives in both basins. Lastly, phylogenetic attraction was only supported for native fishes in the LCRB and for nonnative fishes in the UCRB. Our results indicated that functional similarity was heavily influenced by evolutionary history, but that phylogenetic relationships and functional traits may not always predict the environmental distribution of species. However, the

  17. Smeared crack modelling approach for corrosion-induced concrete damage

    DEFF Research Database (Denmark)

    Thybo, Anna Emilie Anusha; Michel, Alexander; Stang, Henrik

    2017-01-01

    In this paper a smeared crack modelling approach is used to simulate corrosion-induced damage in reinforced concrete. The presented modelling approach utilizes a thermal analogy to mimic the expansive nature of solid corrosion products, while taking into account the penetration of corrosion products into the surrounding concrete, non-uniform precipitation of corrosion products, and creep. To demonstrate the applicability of the presented modelling approach, numerical predictions in terms of corrosion-induced deformations as well as formation and propagation of micro- and macrocracks were compared to experimental data, capturing the corrosion-induced damage phenomena in reinforced concrete. Moreover, good agreement was also found between experimental and numerical data for corrosion-induced deformations along the circumference of the reinforcement.

  18. The fitness landscape of HIV-1 gag: advanced modeling approaches and validation of model predictions by in vitro testing.

    Directory of Open Access Journals (Sweden)

    Jaclyn K Mann

    2014-08-01

    Full Text Available Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model), which is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple) and 25 single mutations within these, predicted to be either harmful to HIV-1 viability or fitness-neutral, were introduced into HIV-1 NL4-3 by site-directed mutagenesis, and the replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = -0.74, p = 3.6×10⁻⁶) are strongly correlated, and this was further strengthened in the regularized Ising model (r = -0.83, p = 3.7×10⁻¹²). Performance of the Potts model (r = -0.73, p = 9.7×10⁻⁹) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion
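
    To make the model concrete, the sketch below evaluates a Potts-style energy E(s) = -Σᵢ hᵢ(sᵢ) - Σᵢ<ⱼ Jᵢⱼ(sᵢ, sⱼ) for a short sequence; under the usual convention, lower energy corresponds to higher predicted fitness. The fields h and couplings J here are random placeholders, whereas in the study they are inferred from HIV-1 Gag sequence data with regularization; the Ising model is the two-state special case of this form.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L, Q = 5, 4          # hypothetical: 5 sites, 4 amino-acid states per site
    h = rng.normal(0, 0.5, (L, Q))            # field terms (illustrative values)
    J = rng.normal(0, 0.1, (L, L, Q, Q))      # coupling terms (illustrative values)

    def potts_energy(seq, h, J):
        """E(s) = -sum_i h_i(s_i) - sum_{i<j} J_ij(s_i, s_j); lower E ~ fitter."""
        e = -sum(h[i, s] for i, s in enumerate(seq))
        for i in range(len(seq)):
            for j in range(i + 1, len(seq)):
                e -= J[i, j, seq[i], seq[j]]
        return e

    wild_type = [0, 0, 0, 0, 0]
    mutant = [0, 2, 0, 0, 1]   # a double mutant with specific amino-acid identities
    print(potts_energy(wild_type, h, J), potts_energy(mutant, h, J))
    ```

    The energy difference between mutant and wild type is the quantity correlated against measured replication capacity in the study.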

  19. Similarity Analysis for Reactor Flow Distribution Test and Its Validation

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Soon Joon; Ha, Jung Hui [Heungdeok IT Valley, Yongin (Korea, Republic of); Lee, Taehoo; Han, Ji Woong [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    The newly derived dimensionless groups are slightly different from Hetsroni's. The Reynolds number, relative wall roughness, and Euler number do not appear; instead, the friction factor appears. In order to conserve the friction factor, the Reynolds number and relative wall roughness should be conserved. Since the effect of the Reynolds number is small in the high range, and since the scaled model is far smaller than the prototype, conservation of the friction factor is easily obtained by simply making the model wall smooth. This makes the test design much easier to implement than Hetsroni's, because the Reynolds number and relative wall roughness do not appear explicitly. In case there is no free surface within the domain of interest in the reactor, gravity is of secondary importance; in this case the pressure drops should be compensated in order to compare them between prototype and model. The gravity-head-compensated pressure drop is directly equal to the value measured by a differential pressure transmitter. In order to conserve the gravity effect, the Froude number should be conserved. In a pool-type SFR (sodium-cooled fast reactor) there exists a liquid level difference, and if the level difference is to be conserved, the Froude number should be conserved. The Euler number, which represents the pressure terms in the momentum equation, should be well conserved according to Hetsroni's approach. This is not a wrong statement, but it should be noted that the Euler number is NOT an independent variable BUT a dependent variable, according to Hong et al. This means that if the geometrical similarity and all the dimensionless numbers are conserved, the Euler number is automatically conserved, so the Euler number need not be considered when perfect geometrical similarity is kept. However, even when geometrical similarity is not conserved, it is possible to conserve the velocity field similarity by conserving the Euler number alone. This gives tolerance to the engineer who designs the test
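
    As a minimal illustration of the groups involved, the sketch below computes Reynolds, Froude and Euler numbers and shows Froude-number scaling of velocity between a prototype and a reduced-scale model. All numbers are hypothetical and the sketch does not reproduce the paper's derivation; it only makes the conservation arguments concrete (note that Re is not conserved under Froude scaling, which is why the friction-factor argument matters).

    ```python
    import math

    def reynolds(rho, v, length, mu):
        """Re = rho*v*L/mu: inertial vs. viscous forces."""
        return rho * v * length / mu

    def froude(v, length, g=9.81):
        """Fr = v/sqrt(g*L): inertial vs. gravitational forces."""
        return v / math.sqrt(g * length)

    def euler(dp, rho, v):
        """Eu = dp/(rho*v^2): a dependent number once the other
        similarity conditions are satisfied (per Hong et al.)."""
        return dp / (rho * v ** 2)

    # Hypothetical prototype vs. 1/5-scale model, conserving the Froude number
    v_proto, L_proto = 2.0, 0.5
    L_model = L_proto / 5.0
    v_model = v_proto * math.sqrt(L_model / L_proto)   # Froude scaling of velocity
    print(froude(v_proto, L_proto), froude(v_model, L_model))  # equal by construction
    print(reynolds(1000.0, v_model, L_model, 1e-3))    # Re is NOT conserved
    ```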

  20. An Alternative Approach to the Extended Drude Model

    Science.gov (United States)

    Gantzler, N. J.; Dordevic, S. V.

    2018-05-01

    The original Drude model, proposed over a hundred years ago, is still used today for the analysis of optical properties of solids. Within this model, both the plasma frequency and quasiparticle scattering rate are constant, which makes the model rather inflexible. In order to circumvent this problem, the so-called extended Drude model was proposed, which allowed for the frequency dependence of both the quasiparticle scattering rate and the effective mass. In this work we will explore an alternative approach to the extended Drude model. Here, one also assumes that the quasiparticle scattering rate is frequency dependent; however, instead of the effective mass, the plasma frequency becomes frequency-dependent. This alternative model is applied to the high Tc superconductor Bi2Sr2CaCu2O8+δ (Bi2212) with Tc = 92 K, and the results are compared and contrasted with the ones obtained from the conventional extended Drude model. The results point to several advantages of this alternative approach to the extended Drude model.
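
    For orientation, the sketch below implements the conventional extended Drude analysis, extracting a frequency-dependent scattering rate and mass enhancement from a complex optical conductivity; the paper's alternative instead holds the mass fixed and lets the plasma frequency vary, a variant whose exact expressions are not given in the abstract and are therefore not reproduced here. The formulas below follow commonly quoted Gaussian-unit conventions and should be checked against one's own sign and unit choices.

    ```python
    import numpy as np

    def extended_drude(omega, sigma, omega_p):
        """Conventional extended Drude analysis (assumed Gaussian-unit
        conventions): 1/tau(w) = (w_p^2/4pi) Re[1/sigma(w)],
        m*(w)/m_b = -(w_p^2/(4pi w)) Im[1/sigma(w)]."""
        inv_sigma = 1.0 / sigma
        scatt = (omega_p ** 2 / (4 * np.pi)) * inv_sigma.real
        mass = -(omega_p ** 2 / (4 * np.pi * omega)) * inv_sigma.imag
        return scatt, mass

    # Consistency check: a plain Drude conductivity must return a constant
    # scattering rate and a mass ratio of one.
    omega = np.linspace(10, 1000, 100)            # cm^-1, hypothetical grid
    omega_p, inv_tau = 10000.0, 100.0             # hypothetical parameters
    sigma = (omega_p ** 2 / (4 * np.pi)) / (inv_tau - 1j * omega)
    scatt, mass = extended_drude(omega, sigma, omega_p)
    print(scatt[:3], mass[:3])                    # ~100 everywhere; ratio ~1
    ```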

  1. Similarity-based recommendation of new concepts to a terminology

    NARCIS (Netherlands)

    Chandar, Praveen; Yaman, Anil; Hoxha, Julia; He, Zhe; Weng, Chunhua

    2015-01-01

    Terminologies can suffer from poor concept coverage due to delays in addition of new concepts. This study tests a similarity-based approach to recommending concepts from a text corpus to a terminology. Our approach involves extraction of candidate concepts from a given text corpus, which are

  2. Interactive exploration of the vulnerability of the human infrastructure: an approach using simultaneous display of similar locations

    Science.gov (United States)

    Ceré, Raphaël; Kaiser, Christian

    2015-04-01

    digital elevation models (DEM) or individual building vector layers. Morphological properties can be calculated for different scales using different moving window sizes. Multi-scale measures such as the fractal dimension or lacunarity can be integrated into the analysis. Other properties such as different densities and ratios are also easy to calculate and include. Based on a rather extensive set of properties or features, a feature selection or extraction method such as Principal Component Analysis can be used to obtain a subset of relevant properties. In a second step, an unsupervised classification algorithm such as Self-Organizing Maps can be used to group similar locations together, and criteria such as the intra-group distance and geographic distribution can be used for selecting relevant locations to be displayed in an interactive data exploration interface along with a given main location. A case study for a part of Switzerland illustrates the presented approach within a working interactive tool, showing the feasibility and allowing for an investigation of the usefulness of our method.

  3. Vertex labeling and routing in self-similar outerplanar unclustered graphs modeling complex networks

    International Nuclear Information System (INIS)

    Comellas, Francesc; Miralles, Alicia

    2009-01-01

    This paper introduces a labeling and optimal routing algorithm for a family of modular, self-similar, small-world graphs with clustering zero. Many properties of this family are comparable to those of networks associated with technological and biological systems with low clustering, such as the power grid, some electronic circuits and protein networks. For these systems, the existence of models with an efficient routing protocol is of interest to design practical communication algorithms in relation to dynamical processes (including synchronization) and also to understand the underlying mechanisms that have shaped their particular structure.

  4. An object-oriented approach to energy-economic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wise, M.A.; Fox, J.A.; Sands, R.D.

    1993-12-01

    In this paper, the authors discuss their experience in creating an object-oriented economic model of the U.S. energy and agriculture markets. After a discussion of some central concepts, they provide an overview of the model, focusing on the methodology of designing an object-oriented class hierarchy specification based on standard microeconomic production functions. The evolution of the model from the class definition stage to programming it in C++, a standard object-oriented programming language, is detailed. The authors then discuss the main differences between writing the object-oriented program and a procedure-oriented program of the same model. Finally, they conclude with a discussion of the advantages and limitations of the object-oriented approach based on their experience in building energy-economic models with procedure-oriented approaches and languages.
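
    The paper's implementation is in C++; for illustration, the sketch below expresses the same design idea in Python: an abstract production-function interface, a concrete Cobb-Douglas technology, and sectors that own a technology object. All class and parameter names are hypothetical, not taken from the paper.

    ```python
    from abc import ABC, abstractmethod

    class ProductionFunction(ABC):
        """Base class: all technologies share an interface for output."""
        @abstractmethod
        def output(self, inputs: dict) -> float: ...

    class CobbDouglas(ProductionFunction):
        """Q = A * prod_i x_i^alpha_i, the standard Cobb-Douglas form."""
        def __init__(self, scale: float, exponents: dict):
            self.scale, self.exponents = scale, exponents
        def output(self, inputs: dict) -> float:
            q = self.scale
            for name, alpha in self.exponents.items():
                q *= inputs[name] ** alpha
            return q

    class Sector:
        """A market sector owns a technology object; subclasses (energy,
        agriculture, ...) can override behavior without changing callers."""
        def __init__(self, name: str, technology: ProductionFunction):
            self.name, self.technology = name, technology
        def supply(self, inputs: dict) -> float:
            return self.technology.output(inputs)

    farm = Sector("agriculture", CobbDouglas(1.2, {"labor": 0.6, "energy": 0.4}))
    print(farm.supply({"labor": 100.0, "energy": 50.0}))
    ```

    Swapping in a different ProductionFunction subclass changes a sector's technology without touching the market-clearing code, which is the modularity argument the paper makes for the object-oriented design.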

  5. Personalized mortality prediction driven by electronic medical data and a patient similarity metric.

    Science.gov (United States)

    Lee, Joon; Maslove, David M; Dubin, Joel A

    2015-01-01

    Clinical outcome prediction normally employs static, one-size-fits-all models that perform well for the average patient but are sub-optimal for individual patients with unique characteristics. In the era of digital healthcare, it is feasible to dynamically personalize decision support by identifying and analyzing similar past patients, in a way that is analogous to personalized product recommendation in e-commerce. Our objectives were: 1) to prove that analyzing only similar patients leads to better outcome prediction performance than analyzing all available patients, and 2) to characterize the trade-off between training data size and the degree of similarity between the training data and the index patient for whom prediction is to be made. We deployed a cosine-similarity-based patient similarity metric (PSM) to an intensive care unit (ICU) database to identify patients that are most similar to each patient and subsequently to custom-build 30-day mortality prediction models. Rich clinical and administrative data from the first day in the ICU from 17,152 adult ICU admissions were analyzed. The results confirmed that using data from only a small subset of most similar patients for training improves predictive performance in comparison with using data from all available patients. The results also showed that when too few similar patients are used for training, predictive performance degrades due to the effects of small sample sizes. Our PSM-based approach outperformed well-known ICU severity of illness scores. Although the improved prediction performance is achieved at the cost of increased computational burden, Big Data technologies can help realize personalized data-driven decision support at the point of care. The present study provides crucial empirical evidence for the promising potential of personalized data-driven decision support systems. With the increasing adoption of electronic medical record (EMR) systems, our novel medical data analytics contributes to
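
    A minimal sketch of the PSM idea follows: rank all historical patients by cosine similarity to the index patient, keep the k most similar, and fit an outcome model on that subset only. Logistic regression, the feature set, and k are placeholders rather than necessarily the study's choices, and the data below are synthetic.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def cosine_sim(X, x):
        """Cosine similarity of every row of X to the index vector x."""
        Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
        return Xn @ (x / np.linalg.norm(x))

    def personalized_predict(X_train, y_train, x_index, k=1000):
        """Train a mortality model only on the k patients most similar
        to the index patient (cosine PSM), then predict for the index."""
        sims = cosine_sim(X_train, x_index)
        top = np.argsort(sims)[-k:]                  # k most similar patients
        clf = LogisticRegression(max_iter=1000).fit(X_train[top], y_train[top])
        return clf.predict_proba(x_index.reshape(1, -1))[0, 1]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(17152, 20))                 # synthetic day-1 ICU features
    y = (X[:, 0] + rng.normal(size=len(X)) > 1.5).astype(int)  # synthetic outcome
    print(personalized_predict(X, y, X[0], k=1000))
    ```

    Varying k trades training-set size against similarity to the index patient, which is exactly the trade-off characterized in the study.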

  6. Personalized mortality prediction driven by electronic medical data and a patient similarity metric.

    Directory of Open Access Journals (Sweden)

    Joon Lee

    Full Text Available Clinical outcome prediction normally employs static, one-size-fits-all models that perform well for the average patient but are sub-optimal for individual patients with unique characteristics. In the era of digital healthcare, it is feasible to dynamically personalize decision support by identifying and analyzing similar past patients, in a way that is analogous to personalized product recommendation in e-commerce. Our objectives were: 1) to prove that analyzing only similar patients leads to better outcome prediction performance than analyzing all available patients, and 2) to characterize the trade-off between training data size and the degree of similarity between the training data and the index patient for whom prediction is to be made. We deployed a cosine-similarity-based patient similarity metric (PSM) to an intensive care unit (ICU) database to identify patients that are most similar to each patient and subsequently to custom-build 30-day mortality prediction models. Rich clinical and administrative data from the first day in the ICU from 17,152 adult ICU admissions were analyzed. The results confirmed that using data from only a small subset of most similar patients for training improves predictive performance in comparison with using data from all available patients. The results also showed that when too few similar patients are used for training, predictive performance degrades due to the effects of small sample sizes. Our PSM-based approach outperformed well-known ICU severity of illness scores. Although the improved prediction performance is achieved at the cost of increased computational burden, Big Data technologies can help realize personalized data-driven decision support at the point of care. The present study provides crucial empirical evidence for the promising potential of personalized data-driven decision support systems. With the increasing adoption of electronic medical record (EMR) systems, our novel medical data analytics

  7. Multiscale approach to equilibrating model polymer melts

    DEFF Research Database (Denmark)

    Svaneborg, Carsten; Ali Karimi-Varzaneh, Hossein; Hojdis, Nils

    2016-01-01

    We present an effective and simple multiscale method for equilibrating Kremer Grest model polymer melts of varying stiffness. In our approach, we progressively equilibrate the melt structure above the tube scale, inside the tube and finally at the monomeric scale. We make use of models designed...

  8. Similarity-based pattern analysis and recognition

    CERN Document Server

    Pelillo, Marcello

    2013-01-01

    This accessible text/reference presents a coherent overview of the emerging field of non-Euclidean similarity learning. The book presents a broad range of perspectives on similarity-based pattern analysis and recognition methods, from purely theoretical challenges to practical, real-world applications. The coverage includes both supervised and unsupervised learning paradigms, as well as generative and discriminative models. Topics and features: explores the origination and causes of non-Euclidean (dis)similarity measures, and how they influence the performance of traditional classification alg

  9. An efficient similarity measure technique for medical image registration

    Indian Academy of Sciences (India)

    In this paper, an efficient similarity measure technique is proposed for medical image registration. The proposed approach is based on the Gerschgorin circles theorem. In this approach, image registration is carried out by considering Gerschgorin bounds of a covariance matrix of two compared images with normalized ...
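
    The abstract does not spell out how the Gerschgorin bounds are turned into a similarity score, so the sketch below shows only the theorem itself: eigenvalue bounds of a covariance matrix built from two images, computed without an eigendecomposition. How the two bounds are compared across image pairs is left to the paper.

    ```python
    import numpy as np

    def gerschgorin_bounds(a):
        """Return (lower, upper) bounds on the eigenvalues of a square matrix
        from the union of its Gerschgorin discs: each eigenvalue lies in some
        disc centered at a[i, i] with radius sum_{j != i} |a[i, j]|."""
        centers = np.diag(a)
        radii = np.sum(np.abs(a), axis=1) - np.abs(centers)
        return (centers - radii).min(), (centers + radii).max()

    rng = np.random.default_rng(0)
    img1 = rng.random((32, 32))
    img2 = img1 + 0.05 * rng.random((32, 32))   # slightly perturbed version
    cov = np.cov(np.vstack([img1.ravel(), img2.ravel()]))  # 2x2 covariance
    lo, hi = gerschgorin_bounds(cov)
    print(lo, hi, np.linalg.eigvalsh(cov))      # true eigenvalues lie within bounds
    ```

    Because the bounds need only row sums, they are far cheaper than an eigendecomposition, which is presumably what makes the measure attractive for repeated evaluation during registration.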

  10. A novel approach to modeling and diagnosing the cardiovascular system

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States); Allen, P.A. [Life Link, Richland, WA (United States)

    1995-07-01

    A novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.

  11. Scaling mimesis: Morphometric and ecomorphological similarities in three sympatric plant-mimetic fish of the family Carangidae (Teleostei).

    Science.gov (United States)

    Queiroz, Alexya Cunha de; Vallinoto, Marcelo; Sakai, Yoichi; Giarrizzo, Tommaso; Barros, Breno

    2018-01-01

    The mimetic juveniles of a number of carangid fish species resemble plant parts floating near the water surface, such as leaves, seeds and other plant debris. The present study is the first to verify the morphological similarities and ecomorphological relationships between three carangids (Oligoplites saurus, Oligoplites palometa and Trachinotus falcatus) and their associated plant models. Behavioral observations were conducted in the estuary of Curuçá River, in northeastern Pará (Brazil) between August 2015 and July 2016. Individual fishes and associated floating objects (models) were sampled for comparative analysis using both geometric and morphometric approaches. While the mimetic fish and their models retain their own distinct, intrinsic morphological features, a high degree of morphological similarity was found between each fish species and its model. The morphometric analyses revealed a general tendency of isometric development in all three fish species, probably related to their pelagic habitats, during all ontogenetic stages.

  12. Comparative review of three cost-effectiveness models for rotavirus vaccines in national immunization programs; a generic approach applied to various regions in the world

    NARCIS (Netherlands)

    Postma, Maarten J.; Jit, Mark; Rozenbaum, Mark H.; Standaert, Baudouin; Tu, Hong-Anh; Hutubessy, Raymond C. W.

    2011-01-01

    Background: This study aims to critically review available cost-effectiveness models for rotavirus vaccination, compare their designs using a standardized approach and compare similarities and differences in cost-effectiveness outcomes using a uniform set of input parameters. Methods: We identified

  13. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    ... obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the ... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account

  14. Modeling flash floods in ungauged mountain catchments of China: A decision tree learning approach for parameter regionalization

    Science.gov (United States)

    Ragettli, S.; Zhou, J.; Wang, H.; Liu, C.; Guo, L.

    2017-12-01

    Flash floods in small mountain catchments are one of the most frequent causes of loss of life and property from natural hazards in China. Hydrological models can be a useful tool for the anticipation of these events and the issuing of timely warnings. One of the main challenges of setting up such a system is finding appropriate model parameter values for ungauged catchments. Previous studies have shown that the transfer of parameter sets from hydrologically similar gauged catchments is one of the best performing regionalization methods. However, a remaining key issue is the identification of suitable descriptors of similarity. In this study, we use decision tree learning to explore parameter set transferability in the full space of catchment descriptors. For this purpose, a semi-distributed rainfall-runoff model is set up for 35 catchments in ten Chinese provinces. Hourly runoff data from a total of 858 storm events are used to calibrate the model and to evaluate the performance of parameter set transfers between catchments. We then present a novel technique that uses the splitting rules of classification and regression trees (CART) for finding suitable donor catchments for ungauged target catchments. The ability of the model to detect flood events in assumed ungauged catchments is evaluated in a series of leave-one-out tests. We show that CART analysis increases the probability of detection of 10-year flood events by up to 20% in comparison to a conventional measure of physiographic-climatic similarity. Decision tree learning can outperform other regionalization approaches because it generates rules that optimally consider spatial proximity and physical similarity. Spatial proximity can be used as a selection criterion but is skipped in the case where no similar gauged catchments are in the vicinity. We conclude that the CART regionalization concept is particularly suitable for implementation in sparsely gauged and topographically complex environments where a proximity
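
    A minimal sketch of the regionalization idea: fit a classification tree on catchment descriptors, where each gauged catchment's label is the donor whose parameter set transferred best, then read the splitting rules to assign a donor to an ungauged catchment. Descriptors, labels, and tree depth here are hypothetical; in the study, the labels come from evaluating parameter transfers on the 858 calibrated storm events.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    # Hypothetical descriptors: area (km^2), mean slope (deg),
    # mean annual precipitation (mm), forest cover (%)
    X = rng.random((35, 4)) * [500, 40, 2000, 100]
    # Label: index of the gauged catchment whose parameter set transferred best
    best_donor = rng.integers(0, 5, size=35)

    tree = DecisionTreeClassifier(max_depth=3).fit(X, best_donor)
    print(export_text(tree, feature_names=["area", "slope", "precip", "forest"]))

    # For an ungauged catchment, the splitting rules pick a donor directly:
    ungauged = np.array([[120.0, 22.0, 900.0, 35.0]])
    print("donor catchment:", tree.predict(ungauged)[0])
    ```

    Because the tree's splits are explicit thresholds on descriptors, they double as interpretable similarity rules, which is the property the study exploits.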

  15. An electrophysiological signature of summed similarity in visual working memory.

    Science.gov (United States)

    van Vugt, Marieke K; Sekuler, Robert; Wilson, Hugh R; Kahana, Michael J

    2013-05-01

    Summed-similarity models of short-term item recognition posit that participants base their judgments of an item's prior occurrence on that item's summed similarity to the ensemble of items on the remembered list. We examined the neural predictions of these models in 3 short-term recognition memory experiments using electrocorticographic/depth electrode recordings and scalp electroencephalography. On each experimental trial, participants judged whether a test face had been among a small set of recently studied faces. Consistent with summed-similarity theory, participants' tendency to endorse a test item increased as a function of its summed similarity to the items on the just-studied list. To characterize this behavioral effect of summed similarity, we successfully fit a summed-similarity model to individual participant data from each experiment. Using the parameters determined from fitting the summed-similarity model to the behavioral data, we examined the relation between summed similarity and brain activity. We found that 4-9 Hz theta activity in the medial temporal lobe and 2-4 Hz delta activity recorded from frontal and parietal cortices increased with summed similarity. These findings demonstrate direct neural correlates of the similarity computations that form the foundation of several major cognitive theories of human recognition memory.
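
    The core computation is easy to state: a probe's summed similarity to the study list, mapped through a response rule, gives the probability of an "old" judgment. The exponential similarity kernel and logistic response below are common modeling choices used for illustration, not necessarily the exact functional forms fitted to the behavioral data.

    ```python
    import numpy as np

    def summed_similarity(probe, study_list, tau=1.0):
        """Summed similarity of a probe to the study list; similarity decays
        exponentially with distance in feature space (one common choice)."""
        d = np.linalg.norm(study_list - probe, axis=1)
        return np.exp(-tau * d).sum()

    def p_yes(probe, study_list, criterion=1.0, slope=5.0):
        """Probability of an 'old' response rises with summed similarity."""
        s = summed_similarity(probe, study_list)
        return 1.0 / (1.0 + np.exp(-slope * (s - criterion)))

    study = np.array([[0.2, 0.4], [0.5, 0.1], [0.9, 0.7]])  # hypothetical face features
    print(p_yes(np.array([0.21, 0.39]), study))  # near a studied item -> high
    print(p_yes(np.array([0.0, 1.0]), study))    # far from all items  -> low
    ```

    The per-participant fitted value of this summed-similarity quantity is what the study then regresses against theta- and delta-band activity.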

  16. A dual model approach to ground water recovery trench design

    International Nuclear Information System (INIS)

    Clodfelter, C.L.; Crouch, M.S.

    1992-01-01

    The design of trenches for contaminated ground water recovery must consider several variables. This paper presents a dual-model approach for effectively recovering contaminated ground water migrating toward a trench by advection. The approach involves an analytical model to determine the vertical influence of the trench and a numerical flow model to determine the capture zone within the trench and the surrounding aquifer. The analytical model is utilized by varying trench dimensions and head values to design a trench which meets the remediation criteria. The numerical flow model is utilized to select the type of backfill and location of sumps within the trench. The dual-model approach can be used to design a recovery trench which effectively captures advective migration of contaminants in the vertical and horizontal planes

  17. A Synergistic Approach for Evaluating Climate Model Output for Ecological Applications

    Directory of Open Access Journals (Sweden)

    Rachel D. Cavanagh

    2017-09-01

    Full Text Available Increasing concern about the impacts of climate change on ecosystems is prompting ecologists and ecosystem managers to seek reliable projections of physical drivers of change. The use of global climate models in ecology is growing, although drawing ecologically meaningful conclusions can be problematic. The expertise required to access and interpret output from climate and earth system models is hampering progress in utilizing them most effectively to determine the wider implications of climate change. To address this issue, we present a joint approach between climate scientists and ecologists that explores key challenges and opportunities for progress. As an exemplar, our focus is the Southern Ocean, notable for significant change with global implications, and on sea ice, given its crucial role in this dynamic ecosystem. We combined perspectives to evaluate the representation of sea ice in global climate models. With an emphasis on ecologically-relevant criteria (sea ice extent and seasonality), we selected a subset of eight models that reliably reproduce extant sea ice distributions. While the model subset shows a similar mean change to the full ensemble in sea ice extent (approximately 50% decline in winter and 30% decline in summer), there is a marked reduction in the range. This improved the precision of projected future sea ice distributions by approximately one third, and means they are more amenable to ecological interpretation. We conclude that careful multidisciplinary evaluation of climate models, in conjunction with ongoing modeling advances, should form an integral part of utilizing model output.

  18. An ontology-based approach for modelling architectural styles

    OpenAIRE

    Pahl, Claus; Giesecke, Simon; Hasselbring, Wilhelm

    2007-01-01

    The conceptual modelling of software architectures is of central importance for the quality of a software system. A rich modelling language is required to integrate the different aspects of architecture modelling, such as architectural styles, structural and behavioural modelling, into a coherent framework. We propose an ontological approach for architectural style modelling based on description logic as an abstract, meta-level modelling instrument. Architect...

  19. Footprint-weighted tile approach for a spruce forest and a nearby patchy clearing using the ACASA model

    Science.gov (United States)

    Gatzsche, Kathrin; Babel, Wolfgang; Falge, Eva; Pyles, Rex David; Tha Paw U, Kyaw; Raabe, Armin; Foken, Thomas

    2018-05-01

    The ACASA (Advanced Canopy-Atmosphere-Soil Algorithm) model, with a higher-order closure for tall vegetation, has already been successfully tested and validated for homogeneous spruce forests. The aim of this paper is to test the model using a footprint-weighted tile approach for a clearing with a heterogeneous structure of the underlying surface. The comparison with flux data shows good agreement for the footprint-aggregated tile approach of the model. However, the results of a comparison with a tile approach based on the mean land use classification of the clearing are not significantly different. It is assumed that the footprint model is not accurate enough to separate small-scale heterogeneities. All measured fluxes are corrected by forcing the energy balance closure of the test data, either by maintaining the measured Bowen ratio or by attributing the residual to the buoyancy flux according to the fractions of sensible and latent heat flux. The comparison with the model, in which the energy balance is closed, shows that the buoyancy correction fits the measured data better for Bowen ratios > 1.5. For lower Bowen ratios, the correction probably lies between the two methods, but the amount of available data was too small to draw a conclusion. Under an assumption of similarity between water and carbon dioxide fluxes, no correction of the net ecosystem exchange is necessary for Bowen ratios > 1.5.
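
    Of the two closure corrections mentioned, the Bowen-ratio method is fully determined by the abstract's description and is sketched below: the residual of Rn - G = H + LE is distributed so that the measured Bowen ratio H/LE is preserved. The buoyancy-flux attribution variant requires additional terms not given here and is omitted; flux values are hypothetical.

    ```python
    def close_energy_balance(h, le, rn, g):
        """Distribute the energy-balance residual while preserving the
        measured Bowen ratio B = H/LE (one of the two corrections discussed)."""
        residual = (rn - g) - (h + le)        # unclosed part of Rn - G = H + LE
        bowen = h / le
        h_corr = h + residual * bowen / (1.0 + bowen)
        le_corr = le + residual / (1.0 + bowen)
        return h_corr, le_corr

    # Hypothetical half-hourly fluxes in W m^-2 (Bowen ratio = 2.0)
    print(close_energy_balance(h=200.0, le=100.0, rn=400.0, g=30.0))
    # -> (246.67, 123.33): the corrected fluxes now sum to Rn - G = 370,
    #    and the ratio H/LE = 2.0 is unchanged.
    ```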

  20. Alternative policy impacts on US GHG emissions and energy security: A hybrid modeling approach

    International Nuclear Information System (INIS)

    Sarica, Kemal; Tyner, Wallace E.

    2013-01-01

    This study addresses the possible impacts of energy and climate policies, namely the corporate average fuel economy (CAFE) standard, renewable fuel standard (RFS) and clean energy standard (CES), and an economy-wide equivalent carbon tax, on GHG emissions in the US to the year 2045. Bottom–up and top–down modeling approaches find widespread use in energy economic modeling and policy analysis, where they differ mainly with respect to the emphasis placed on the technology of the energy system and/or the comprehensiveness of endogenous market adjustments. For this study, we use a hybrid energy modeling approach, MARKAL–Macro, that combines the characteristics of the two divergent approaches in order to investigate and quantify the cost of climate policies for the US and an equivalent carbon tax. The approach incorporates macroeconomic feedbacks through a single-sector neoclassical growth model while maintaining the sectoral and technological detail of the bottom–up optimization framework with endogenous aggregated energy demand. Our analysis is done for two important objectives of US energy policy: GHG reduction and increased energy security. Our results suggest that the emission tax achieves results quite similar to the CES policy but very different results in the transportation sector. The CAFE standard and RFS are more expensive than a carbon tax for emission reductions. However, the CAFE standard and RFS are much more efficient at achieving crude oil import reductions. The GDP losses are 2.0% and 1.2% relative to the base case for the policy case and carbon tax, respectively. That difference may be perceived as small given the increased energy security gained from the CAFE and RFS policy measures and the uncertainty inherent in this type of analysis. - Highlights: • Evaluates US impacts of three energy/climate policies and a carbon tax (CT) • Analysis done with the bottom–up MARKAL model coupled with a macro model • Electricity clean energy standard very close to

  1. An Approach to Enforcing Clark-Wilson Model in Role-based Access Control Model

    Institute of Scientific and Technical Information of China (English)

    LIANG Bin; SHI Wenchang; SUN Yufang; SUN Bo

    2004-01-01

    Using one security model to enforce another is a prospective solution to multi-policy support. In this paper, an approach to enforcing the Clark-Wilson data integrity model in the role-based access control (RBAC) model is proposed. An enforcement construction with great feasibility is presented. In this construction, a direct way to enforce the Clark-Wilson model is provided, the corresponding relations among users, transformation procedures, and constrained data items are strengthened, and the concepts of task and subtask are introduced to enhance support for least privilege. The proposed approach widens the applicability of RBAC. A theoretical foundation for adopting the Clark-Wilson model in an RBAC system at small cost is offered, meeting the requirements of multi-policy support and policy flexibility.
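
    A toy sketch of the mapping idea: Clark-Wilson access triples (user, transformation procedure, constrained data item) become RBAC role permissions, and access is granted only through a certified TP. Role, TP, and CDI names are hypothetical, and the paper's task/subtask refinement for least privilege is not modeled.

    ```python
    # Hypothetical policy tables: roles grant TPs; TPs are certified for CDIs.
    ROLE_TPS = {"clerk": {"post_entry"}, "auditor": {"verify_ledger"}}
    TP_CDIS = {"post_entry": {"ledger"}, "verify_ledger": {"ledger"}}
    USER_ROLES = {"alice": {"clerk"}, "bob": {"auditor"}}

    def allowed(user, tp, cdi):
        """Permit access only via a well-formed transaction: the user must
        hold a role granting the TP, and the TP must be certified for the CDI."""
        roles = USER_ROLES.get(user, set())
        has_tp = any(tp in ROLE_TPS.get(r, set()) for r in roles)
        return has_tp and cdi in TP_CDIS.get(tp, set())

    print(allowed("alice", "post_entry", "ledger"))     # True
    print(allowed("alice", "verify_ledger", "ledger"))  # False: separation of duty
    print(allowed("bob", "post_entry", "ledger"))       # False: role lacks the TP
    ```

    Direct manipulation of a CDI never appears as a permission, so every change to constrained data is forced through a certified TP, which is the Clark-Wilson well-formed-transaction requirement.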

  2. Similarity maps and hierarchical clustering for annotating FT-IR spectral images.

    Science.gov (United States)

    Zhong, Qiaoyong; Yang, Chen; Großerüschkamp, Frederik; Kallenbach-Thieltges, Angela; Serocka, Peter; Gerwert, Klaus; Mosig, Axel

    2013-11-20

    Unsupervised segmentation of multi-spectral images plays an important role in annotating infrared microscopic images and is an essential step in label-free spectral histopathology. In this context, diverse clustering approaches have been utilized and evaluated in order to achieve segmentations of Fourier Transform Infrared (FT-IR) microscopic images that agree with histopathological characterization. We introduce so-called interactive similarity maps as an alternative strategy for annotating infrared microscopic images. We demonstrate that segmentations obtained from interactive similarity maps are as accurate as segmentations obtained from the conventionally used hierarchical clustering approaches. In order to perform this comparison on quantitative grounds, we provide a scheme that allows the identification of non-horizontal cuts in dendrograms. This yields a validation scheme for hierarchical clustering approaches commonly used in infrared microscopy. We demonstrate that interactive similarity maps may identify more accurate segmentations than hierarchical clustering based approaches, and are thus a viable and, due to their interactive nature, attractive alternative to hierarchical clustering. Our validation scheme furthermore shows that the performance of hierarchical two-means is comparable to that of the traditionally used Ward's clustering. As the former is much more efficient in time and memory, our results suggest another, less resource-demanding alternative for annotating large spectral images.
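
    As a baseline for comparison, the sketch below segments a hypothetical FT-IR image with the traditional pipeline the paper benchmarks against: Ward's hierarchical clustering followed by a horizontal dendrogram cut. The paper's contributions, interactive similarity maps and a validation scheme admitting non-horizontal cuts, require machinery beyond this sketch.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    # Hypothetical FT-IR image: 20x20 pixels, 50 spectral channels per pixel
    spectra = rng.random((400, 50))

    Z = linkage(spectra, method="ward")              # Ward's hierarchical clustering
    labels = fcluster(Z, t=5, criterion="maxclust")  # horizontal cut into 5 clusters
    segmentation = labels.reshape(20, 20)            # label image for annotation
    print(np.unique(labels), segmentation.shape)
    ```

    A non-horizontal cut, by contrast, would select subtrees of Z at different merge heights, which is exactly what the paper's validation scheme makes identifiable.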

  3. Numerical modelling of diesel spray using the Eulerian multiphase approach

    International Nuclear Information System (INIS)

    Vujanović, Milan; Petranović, Zvonimir; Edelbauer, Wilfried; Baleta, Jakov; Duić, Neven

    2015-01-01

    Highlights: • Numerical model for fuel disintegration was presented. • Fuel liquid and vapour were calculated. • Good agreement with experimental data was shown for various combinations of injection and chamber pressure. - Abstract: This research investigates high pressure diesel fuel injection into the combustion chamber by performing computational simulations using the Euler–Eulerian multiphase approach. Six diesel-like conditions were simulated for which the liquid fuel jet was injected into a pressurised inert environment (100% N2) through a 205 μm nozzle hole. The analysis was focused on the liquid jet and vapour penetration, describing spatial and temporal spray evolution. For this purpose, an Eulerian multiphase model was implemented, variations of the sub-model coefficients were performed, and their impact on the spray formation was investigated. The final set of sub-model coefficients was applied to all operating points. Several simulations of high pressure diesel injections (50, 80, and 120 MPa) combined with different chamber pressures (5.4 and 7.2 MPa) were carried out and results were compared to the experimental data. The predicted results share a similar spray cloud shape for all conditions with different vapour and liquid penetration lengths. The liquid penetration is shortened with the increase in chamber pressure, whilst the vapour penetration is more pronounced by elevating the injection pressure. Finally, the results showed good agreement when compared to the measured data, and yielded the correct trends for both the liquid and vapour penetrations under different operating conditions

  4. The Generalised Ecosystem Modelling Approach in Radiological Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Klos, Richard

    2008-03-15

    An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI through its modelling team CLIMB commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment

  5. The Generalised Ecosystem Modelling Approach in Radiological Assessment

    International Nuclear Information System (INIS)

    Klos, Richard

    2008-03-01

    An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI through its modelling team CLIMB commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment

  6. Popularity Modeling for Mobile Apps: A Sequential Approach.

    Science.gov (United States)

    Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong

    2015-07-01

    The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enable better mobile App services. While the importance of popularity information is well recognized in the literature, its use for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on a hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity-based HMM (PHMM) to model the sequences of heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite-based method to precluster the popularity observations; this helps to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and is applicable to various mobile App services, such as trend-based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
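
    The PHMM itself handles heterogeneous popularity observations and is preceded by bipartite preclustering, neither of which is reproduced here; as a stand-in, the sketch below fits a plain Gaussian HMM (from the hmmlearn package) to synthetic per-App sequences of chart rank and rating, which conveys the sequential-modeling idea.

    ```python
    import numpy as np
    from hmmlearn import hmm   # assumed available; the PHMM is the paper's own model

    rng = np.random.default_rng(0)
    # Hypothetical daily observations per App: [chart rank, mean rating]
    app1 = np.column_stack([rng.integers(1, 200, 60), rng.uniform(3, 5, 60)]).astype(float)
    app2 = np.column_stack([rng.integers(1, 200, 90), rng.uniform(2, 5, 90)]).astype(float)
    X = np.vstack([app1, app2])
    lengths = [len(app1), len(app2)]             # one sequence per App

    model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)                        # EM over all App sequences
    states = model.predict(app1)                 # latent popularity states of app1
    print(states[:10])
    ```

    Decoded state sequences of this kind are what downstream services (trend-based recommendation, spam or fraud detection) would consume.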

  7. Modeling gene expression measurement error: a quasi-likelihood approach

    Directory of Open Access Journals (Sweden)

    Strimmer Korbinian

    2003-03-01

    Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also
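
    A small sketch of the key ingredient, the postulated quadratic variance structure: estimate Var(y) ≈ a + b·μ² from replicate means and variances, then use 1/V(μ) as quasi-likelihood weights downstream. The moment-based least-squares fit below is a simplification of the extended quasi-likelihood machinery, and the error-model parameters are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic replicated gene expression with a quadratic variance structure:
    # Var(y) = a + b * mu^2 (a two-component error model, hypothetical values)
    mu = rng.uniform(50, 5000, 500)              # true expression levels
    a_true, b_true = 400.0, 0.01
    y = rng.normal(mu[:, None], np.sqrt(a_true + b_true * mu[:, None] ** 2), (500, 4))

    m = y.mean(axis=1)                           # per-gene sample means
    v = y.var(axis=1, ddof=1)                    # per-gene sample variances

    # Fit the postulated variance function v = a + b*m^2 by least squares
    A = np.column_stack([np.ones_like(m), m ** 2])
    (a_hat, b_hat), *_ = np.linalg.lstsq(A, v, rcond=None)
    print(a_hat, b_hat)                          # rough recovery of (400, 0.01)

    # Quasi-likelihood-style weights for downstream regression: w = 1 / V(mu)
    w = 1.0 / (a_hat + b_hat * m ** 2)
    ```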

  8. Breastfeeding support for adolescent mothers: similarities and differences in the approach of midwives and qualified breastfeeding supporters

    Directory of Open Access Journals (Sweden)

    Burt Susan

    2006-11-01

    Background: The protection, promotion and support of breastfeeding are now major public health priorities. It is well established that skilled support, voluntary or professional, proactively offered to women who want to breastfeed, can increase the initiation and/or duration of breastfeeding. Low levels of breastfeeding uptake and continuation amongst adolescent mothers in industrialised countries suggest that this is a group in particular need of breastfeeding support. Using qualitative methods, the present study aimed to investigate the similarities and differences in the approaches of midwives and qualified breastfeeding supporters (the Breastfeeding Network, BfN) in supporting breastfeeding adolescent mothers. Methods: The study was conducted in the North West of England between September 2001 and October 2002. The supportive approaches of 12 midwives and 12 BfN supporters were evaluated using vignettes: short descriptions of an event designed to obtain specific information from participants about their knowledge, perceptions and attitudes to a particular situation. Responses to vignettes were analysed using thematic networks analysis, involving the extraction of basic themes by analysing each script line by line. The basic themes were then grouped to form organising themes and finally central global themes. Discussion and consensus was reached on the systematic development of the three levels of theme. Results: Five components of support were identified: emotional, esteem, instrumental, informational and network support. Whilst the supportive approaches of both groups incorporated elements of each of the five components of support, BfN supporters placed greater emphasis upon providing emotional and esteem support and highlighted the need to elicit the mothers' existing knowledge, checking understanding through use of open questions and utilising more tentative language. Midwives were more directive and gave more …

  9. A modular approach to numerical human body modeling

    NARCIS (Netherlands)

    Forbes, P.A.; Griotto, G.; Rooij, L. van

    2007-01-01

    The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body …

  10. From animal models to human disease: a genetic approach for personalized medicine in ALS.

    Science.gov (United States)

    Picher-Martel, Vincent; Valdmanis, Paul N; Gould, Peter V; Julien, Jean-Pierre; Dupré, Nicolas

    2016-07-11

    Amyotrophic Lateral Sclerosis (ALS) is the most frequent motor neuron disease in adults. Classical ALS is characterized by the death of upper and lower motor neurons leading to progressive paralysis. Approximately 10% of ALS patients have a familial form of the disease. Numerous gene mutations have been found in familial cases of ALS, such as mutations in superoxide dismutase 1 (SOD1), TAR DNA-binding protein 43 (TDP-43), fused in sarcoma (FUS), C9ORF72, ubiquilin-2 (UBQLN2), optineurin (OPTN) and others. Multiple animal models have been generated to mimic the disease and to test future treatments. However, no animal model fully replicates the spectrum of phenotypes in the human disease, and it is difficult to assess how well a therapeutic effect in disease models predicts efficacy in humans. Importantly, the genetic and phenotypic heterogeneity of ALS leads to a variety of responses to similar treatment regimens. From this has emerged the concept of personalized medicine (PM), a medical scheme that combines genetic, environmental and clinical diagnostic testing, including biomarkers, to individualize patient care. In this perspective, we use subgroups of specific ALS-linked gene mutations to review existing animal models and to provide a comprehensive profile of the differences and similarities between animal models of the disease and the human disease. Finally, we review the application of biomarkers and gene therapies relevant to the personalized medicine approach, for instance viral delivery of antisense oligonucleotides and small interfering RNA in SOD1, TDP-43 and C9orf72 mouse models. Promising gene therapies have raised the possibility of treating the major mutations in familial ALS cases in distinct, mutation-specific ways.

  11. Assessing the similarity of mental models of operating room team members and implications for patient safety: a prospective, replicated study.

    Science.gov (United States)

    Nakarada-Kordic, Ivana; Weller, Jennifer M; Webster, Craig S; Cumin, David; Frampton, Christopher; Boyd, Matt; Merry, Alan F

    2016-08-31

    Patient safety depends on effective teamwork. The similarity of team members' mental models (their shared understanding) of clinical tasks is likely to influence the effectiveness of teamwork. Mental models have not been measured in the complex, high-acuity environment of the operating room (OR), where professionals of different backgrounds must work together to achieve the best surgical outcome for each patient. Therefore, we aimed to explore the similarity of mental models of task sequence and of responsibility for tasks within multidisciplinary OR teams. We developed a computer-based card sorting tool (Momento) to capture information on mental models in 20 six-person surgical teams, each comprising three subteams (anaesthesia, surgery, and nursing), for two simulated laparotomies. Team members sorted 20 cards depicting key tasks according to when in the procedure each task should be performed, and which subteam was primarily responsible for each task. Within each OR team and subteam, we conducted pairwise comparisons of scores to arrive at mean similarity scores for each task. The mean similarity score for task sequence was 87% (range 57-97%). The mean score for responsibility for task was 70% (range 38-100%), but for half of the tasks it was only 51% (range 38-69%). Participants believed their own subteam was primarily responsible for approximately half the tasks in each procedure. We found differences in the mental models of some OR team members about responsibility for, and order of, certain tasks in an emergency laparotomy. Momento is a tool that could help elucidate and better align the mental models of OR team members about surgical procedures and thereby improve teamwork and outcomes for patients.
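
    The record does not spell out the scoring formula, but one plausible reading of "pairwise comparisons" of card sorts is sketched below (hypothetical helper names; agreement on relative card order and on subteam assignment, averaged over all member pairs):

```python
from itertools import combinations

def sequence_similarity(order_a, order_b):
    """Share of card pairs placed in the same relative order by two members."""
    pos_a = {card: i for i, card in enumerate(order_a)}
    pos_b = {card: i for i, card in enumerate(order_b)}
    pairs = list(combinations(order_a, 2))
    agree = sum((pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0
                for x, y in pairs)
    return agree / len(pairs)

def responsibility_similarity(assign_a, assign_b):
    """Share of cards assigned to the same subteam by two members."""
    return sum(assign_a[c] == assign_b[c] for c in assign_a) / len(assign_a)

def team_mean(sorts, sim):
    """Mean pairwise similarity across all members of a team."""
    scores = [sim(a, b) for a, b in combinations(sorts, 2)]
    return sum(scores) / len(scores)

# e.g. team_mean([member1_order, member2_order, member3_order],
#                sequence_similarity)
```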

  12. Size distribution of dust grains: A problem of self-similarity

    International Nuclear Information System (INIS)

    Henning, TH.; Dorschner, J.; Guertler, J.

    1989-01-01

    Distribution functions describing the results of natural processes frequently show the shape of power laws. It is an open question whether this behavior comes about simply through the chosen mathematical representation of the observational data or reflects a deep-seated principle of nature. The authors suppose the latter to be the case. Using a dust model consisting of silicate and graphite grains, Mathis et al. (1977) showed that the interstellar extinction curve can be represented by taking a grain radii distribution of power-law type, n(a) ∝ a^(-p) with 3.3 ≤ p ≤ 3.6 (example 1), as a basis. A different approach to understanding power laws like that in example 1 becomes possible through the theory of self-similar processes (scale invariance). The beta model of turbulence (Frisch et al., 1978) leads in an elementary way to the concept of the self-similarity dimension D, a special case of Mandelbrot's (1977) fractal dimension. In the frame of this beta model, it is supposed that at each stage of a cascade the system decays into N clumps and that only the portion beta·N remains active further on. An important feature of this model is that the active eddies become less and less space-filling. In the following, the authors assume that grain-grain collisions are such a scale-invariant process and that the remaining grains are the inactive (frozen) clumps of the cascade. In this way, a size distribution n(a) da ∝ a^(-(D+1)) da (example 2) results. It seems highly probable that the power-law character of the size distribution of interstellar dust grains is the result of a self-similar process. We cannot, however, exclude that the process leading to the interstellar grain size distribution is not fragmentation at all.
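
    For readers who want to reproduce the power-law bookkeeping, the standard maximum-likelihood estimate of the exponent in n(a) ∝ a^(-p) is only a few lines and is easy to check on synthetic grain radii (illustrative values, not the authors' data):

```python
import numpy as np

def power_law_exponent(a, a_min):
    """Maximum-likelihood estimate of p in n(a) ∝ a^(-p) for a >= a_min."""
    a = np.asarray(a)
    a = a[a >= a_min]
    return 1.0 + a.size / np.log(a / a_min).sum()

# Toy check: draw radii with p = 3.5 (inside the MRN range 3.3-3.6)
# by inverse-CDF sampling, then recover the exponent.
rng = np.random.default_rng(1)
p_true, a_min = 3.5, 0.005                       # illustrative values
u = rng.random(100_000)
radii = a_min * (1.0 - u) ** (-1.0 / (p_true - 1.0))
print(power_law_exponent(radii, a_min))          # prints ~3.5
```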

  13. Function Modelling Of The Market And Assessing The Degree Of Similarity Between Real Properties - Dependent Or Independent Procedures In The Process Of Office Property Valuation

    Directory of Open Access Journals (Sweden)

    Barańska Anna

    2015-09-01

    Referring to the two-stage algorithm for real estate valuation developed and presented in previous publications (e.g. Barańska 2011), this article addresses the problem of the relationship between the two stages of the algorithm. An essential part of the first stage is the multi-dimensional function modelling of the real estate market. As a result of selecting the model best fitted to the market data, in which the dependent variable is always the price of a real property, a set of market attributes is obtained, which in this model are considered to be price-determining. In the second stage, from the collection of real estate which served as a database in the process of estimating the model parameters, the objects most similar to the one subject to valuation are selected and form the basis for predicting the final value of the property being valued. Assessing the degree of similarity between real properties can be carried out based on the full spectrum of real estate attributes that potentially affect their value and about which it is possible to gather information, or only on the basis of those attributes which were considered to be price-determining in the function modelling. It can also be performed by various methods. This article has examined the effect of various approaches on the final value of the property obtained using the two-stage prediction. In order to fulfill the study aim as precisely as possible, the results of each calculation step of the algorithm have been investigated in detail. Each of them points to the independence of the two procedures.

  14. A systemic approach to modelling of radiobiological effects

    International Nuclear Information System (INIS)

    Obaturov, G.M.

    1988-01-01

    Basic principles of the systemic approach to modelling of the radiobiological effects at different levels of cell organization have been formulated. The methodology is proposed for theoretical modelling of the effects at these levels

  15. Block generators for the similarity renormalization group

    Energy Technology Data Exchange (ETDEWEB)

    Huether, Thomas; Roth, Robert [TU Darmstadt (Germany)]

    2016-07-01

    The Similarity Renormalization Group (SRG) is a powerful tool to improve the convergence behavior of many-body calculations using NN and 3N interactions from chiral effective field theory. The SRG method decouples high- and low-energy physics through a continuous unitary transformation implemented via a flow equation approach. The flow is determined by a generator of choice. This generator governs the decoupling pattern and, thus, the improvement of convergence, but it also induces many-body interactions. Through the design of the generator we can optimize the balance between convergence and induced forces. We explore a new class of block generators that restrict the decoupling to the high-energy sector and leave the diagonalization in the low-energy sector to the many-body method. In this way one expects a suppression of induced forces. We analyze the induced many-body forces and the convergence behavior in light and medium-mass nuclei in No-Core Shell Model and In-Medium SRG calculations.

  16. Making the most of what we have: application of extrapolation approaches in wildlife transfer models

    Energy Technology Data Exchange (ETDEWEB)

    Beresford, Nicholas A.; Barnett, Catherine L.; Wells, Claire [NERC Centre for Ecology and Hydrology, Lancaster Environment Center, Library Av., Bailrigg, Lancaster, LA1 4AP (United Kingdom); School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom)]; Wood, Michael D. [School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom)]; Vives i Batlle, Jordi [Belgian Nuclear Research Centre, Boeretang 200, 2400 Mol (Belgium)]; Brown, Justin E.; Hosseini, Ali [Norwegian Radiation Protection Authority, P.O. Box 55, N-1332 Oesteraas (Norway)]; Yankovich, Tamara L. [International Atomic Energy Agency, Vienna International Centre, 1400, Vienna (Austria)]; Bradshaw, Clare [Department of Ecology, Environment and Plant Sciences, Stockholm University, SE-10691 (Sweden)]; Willey, Neil [Centre for Research in Biosciences, University of the West of England, Coldharbour Lane, Frenchay, Bristol BS16 1QY (United Kingdom)]

    2014-07-01

    Radiological environmental protection models need to predict the transfer of many radionuclides to a large number of organisms. There has been considerable development of transfer (predominantly concentration ratio) databases over the last decade. However, in reality it is unlikely we will ever have empirical data for all the species-radionuclide combinations which may need to be included in assessments. To provide default values for a number of existing models/frameworks, various extrapolation approaches have been suggested (e.g. using data for a similar organism or element). This paper presents recent developments in two such extrapolation approaches, namely phylogeny and allometry. An evaluation of how extrapolation approaches have performed and the potential application of Bayesian statistics to make best use of available data will also be given. Using a Residual Maximum Likelihood (REML) mixed-model regression we initially analysed a dataset comprising 597 entries for 53 freshwater fish species from 67 sites to investigate whether phylogenetic variation in transfer could be identified. The REML analysis generated an estimated mean value for each species on a common scale after taking account of the effect of the inter-site variation. Using an independent dataset, we tested the hypothesis that the REML model outputs could be used to predict radionuclide activity concentrations in other species from the results of a species which had been sampled at a specific site. The outputs of the REML analysis accurately predicted {sup 137}Cs activity concentrations in different species of fish from 27 lakes. Although initially investigated as an extrapolation approach, the output of this work is a potential alternative to the highly site dependent concentration ratio model. We are currently applying this approach to a wider range of organism types and different ecosystems. An initial analysis of these results will be presented. The application of allometric, or mass …

  17. Initial virtual flight test for a dynamically similar aircraft model with control augmentation system

    Directory of Open Access Journals (Sweden)

    Linliang Guo

    2017-04-01

    To satisfy the validation requirements of flight control laws for advanced aircraft, wind-tunnel-based virtual flight testing has been implemented in a low-speed wind tunnel. A 3-degree-of-freedom gimbal, ventrally installed in the model, was used in conjunction with an actively controlled, dynamically similar model of the aircraft, which was equipped with an inertial measurement unit, an attitude and heading reference system, an embedded computer and servo-actuators. The model, which could be rotated freely around its center of gravity by the aerodynamic moments, together with the flow field, the operator and the real-time control system, made up the closed-loop testing circuit. The model is statically unstable in the longitudinal direction, but it can fly stably in the wind tunnel with the control augmentation of the flight control laws. The experimental results indicate that the model responds well to the operator's instructions. The response of the model in the tests shows reasonable agreement with the simulation results; the difference in the angle-of-attack response is less than 0.5°. The effects of the stability augmentation and attitude control laws were validated in the tests, and the feasibility of the virtual flight test technique as a preliminary evaluation tool for advanced flight vehicle configuration research was also verified.

  18. Algorithmic prediction of inter-song similarity in Western popular music

    NARCIS (Netherlands)

    Novello, A.; Par, van de S.L.J.D.E.; McKinney, M.F.; Kohlrausch, A.G.

    2013-01-01

    We investigate a method for automatic extraction of inter-song similarity for songs selected from several genres of Western popular music. The specific purpose of this approach is to evaluate the predictive power of different feature extraction sets based on human perception of music similarity and …

  19. Dynamics and control of quadcopter using linear model predictive control approach

    Science.gov (United States)

    Islam, M.; Okasha, M.; Idres, M. M.

    2017-12-01

    This paper investigates the dynamics and control of a quadcopter using the Model Predictive Control (MPC) approach. The dynamic model is of high fidelity and nonlinear, with six degrees of freedom, and includes disturbances and model uncertainties. The control approach is developed based on MPC to track different reference trajectories, ranging from simple circular ones to complex helical trajectories. In this control technique, a linearized model is derived and the receding horizon method is applied to generate the optimal control sequence. Although MPC is computationally expensive, it is highly effective in dealing with different types of nonlinearities and constraints, such as actuator saturation and model uncertainties. The MPC parameters (control and prediction horizons) are selected by a trial-and-error approach. Several simulation scenarios are performed to examine and evaluate the performance of the proposed control approach in the MATLAB and Simulink environment. Simulation results show that this control approach is highly effective in tracking a given reference trajectory.
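
    A minimal linear-MPC sketch in the spirit of the abstract (linearized model, receding horizon, actuator saturation) is given below. The double-integrator altitude model, weights and horizon are assumptions for illustration, and the problem is posed with the cvxpy modeling library rather than MATLAB/Simulink:

```python
import numpy as np
import cvxpy as cp

# Illustrative linearized vertical dynamics (double integrator), dt = 0.1 s.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
N = 20                                  # prediction horizon (trial and error)
x0 = np.array([0.0, 0.0])               # start at rest
x_ref = np.array([1.0, 0.0])            # climb to 1 m and hover

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k + 1] - x_ref) + 0.1 * cp.sum_squares(u[:, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= 5.0]      # actuator saturation
cp.Problem(cp.Minimize(cost), constraints).solve()

u_apply = u.value[:, 0]   # receding horizon: apply first input, then re-solve
```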

  20. Model-centric approaches for the development of health information systems.

    Science.gov (United States)

    Tuomainen, Mika; Mykkänen, Juha; Luostarinen, Heli; Pöyhölä, Assi; Paakkanen, Esa

    2007-01-01

    Modeling is used increasingly in healthcare to increase shared knowledge, to improve processes, and to document the requirements of solutions related to health information systems (HIS). There are numerous modeling approaches which aim to support these goals, but a careful assessment of their strengths, weaknesses and deficiencies is needed. In this paper, we compare three model-centric approaches in the context of HIS development: the Model-Driven Architecture, Business Process Modeling with BPMN and BPEL, and the HL7 Development Framework. The comparison reveals that all these approaches are viable candidates for the development of HIS. However, they have distinct strengths and abstraction levels, they require local and project-specific adaptation, and they offer varying levels of automation. In addition, the illustration of the solutions to end users must be improved.

  1. Risk Modelling for Passages in Approach Channel

    Directory of Open Access Journals (Sweden)

    Leszek Smolarek

    2013-01-01

    Methods of multivariate statistics, stochastic processes, and simulation methods are used to identify and assess the risk measures. This paper presents the use of generalized linear models and Markov models to study risks to ships along the approach channel. These models combined with simulation testing are used to determine the time required for continuous monitoring of endangered objects or period at which the level of risk should be verified.

  2. Feedback structure based entropy approach for multiple-model estimation

    Institute of Scientific and Technical Information of China (English)

    Shen-tu Han; Xue Anke; Guo Yunfei

    2013-01-01

    The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach for handling problems with mode uncertainties. Model sequence set adaptation (MSA) is the key to designing a better VSMM. However, MSA methods in the literature leave considerable room for improvement, both theoretically and practically. To this end, we propose a feedback-structure-based entropy approach that can find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, full Markov chains are used to achieve optimal solutions. Secondly, the myopic method, together with a particle filter (PF) and the challenge match algorithm, is used to achieve sub-optimal solutions, a trade-off between practicability and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.

  3. Structure modulates similarity-based interference in sluicing: An eye tracking study.

    Directory of Open Access Journals (Sweden)

    Jesse A. Harris

    2015-12-01

    In cue-based content-addressable approaches to memory, a target and its competitors are retrieved in parallel from memory via a fast, associative cue-matching procedure under a severely limited focus of attention. Such a parallel matching procedure could in principle ignore the serial order or hierarchical structure characteristic of linguistic relations. I present an eye-tracking-while-reading experiment that investigates whether the sentential position of a potential antecedent modulates the strength of similarity-based interference, a well-studied effect in which increased similarity in features between a target and its competitors results in slower and less accurate retrieval overall. The manipulation trades on an independently established locality bias in sluiced structures to associate a wh-remnant (which ones) in clausal ellipsis with the most local correlate (some wines), as in "The tourists enjoyed some wines, but I don't know which ones." The findings generally support cue-based parsing models of sentence processing that are subject to similarity-based interference in retrieval, and provide additional support to the growing body of evidence that retrieval is sensitive to both the structural position of a target antecedent and its competitors, and the specificity of retrieval cues.

  4. Testing Self-Similarity Through Lamperti Transformations

    KAUST Repository

    Lee, Myoungji

    2016-07-14

    Self-similar processes have been widely used in modeling real-world phenomena occurring in environmetrics, network traffic, image processing, and stock pricing, to name but a few. The estimation of the degree of self-similarity has been studied extensively, while statistical tests for self-similarity are scarce and limited to processes indexed in one dimension. This paper proposes a statistical hypothesis test procedure for self-similarity of a stochastic process indexed in one dimension and multi-self-similarity for a random field indexed in higher dimensions. If self-similarity is not rejected, our test provides a set of estimated self-similarity indexes. The key is to test stationarity of the inverse Lamperti transformations of the process. The inverse Lamperti transformation of a self-similar process is a strongly stationary process, revealing a theoretical connection between the two processes. To demonstrate the capability of our test, we test self-similarity of fractional Brownian motions and sheets, their time deformations and mixtures with Gaussian white noise, and the generalized Cauchy family. We also apply the self-similarity test to real data: annual minimum water levels of the Nile River, network traffic records, and surface heights of food wrappings.
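
    The key identity behind the test is easy to demonstrate: if X is H-self-similar, its inverse Lamperti transform Y(s) = exp(-H·s)·X(exp(s)) is stationary. A toy check with Brownian motion (H = 0.5, for which Y is an Ornstein-Uhlenbeck process) is sketched below; the paper's formal stationarity test is not reproduced, and the crude two-half variance comparison stands in for it:

```python
import numpy as np

def inverse_lamperti(t, x, H, s):
    """Y(s) = exp(-H*s) * X(exp(s)); stationary iff X is H-self-similar.
    X is observed on the grid t; linear interpolation is used in between."""
    return np.exp(-H * s) * np.interp(np.exp(s), t, x)

# Toy check with Brownian motion (H = 0.5): Y is then an
# Ornstein-Uhlenbeck process, hence stationary with unit variance.
rng = np.random.default_rng(2)
t = np.linspace(1e-4, 50.0, 500_000)
x = np.cumsum(rng.normal(0.0, np.sqrt(np.diff(t, prepend=0.0))))
s = np.linspace(0.0, np.log(50.0), 2_000)
y = inverse_lamperti(t, x, H=0.5, s=s)
print(y[:1000].var(), y[1000:].var())   # both should be close to 1
```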

  5. A Measure of Similarity Between Trajectories of Vessels

    Directory of Open Access Journals (Sweden)

    Le QI

    2016-03-01

    The measurement of similarity between the trajectories of vessels is one of the kernel problems that must be addressed to promote the development of maritime intelligent traffic systems (ITS). In this study, a new model of trajectory similarity measurement was established to improve data processing efficiency in dynamic applications and to reflect the actual sailing behaviors of vessels. In this model, a feature point detection algorithm was proposed to extract feature points, reduce data storage space and save computational resources. A new synthesized distance algorithm was also created to measure the similarity between trajectories using the extracted feature points. An experiment was conducted to measure the similarity between real trajectories of vessels, with measurements conducted over different voyages as the trajectories grew. The results show that the similarity measurement between the vessel trajectories is efficient and correct. Comparison of the synthesized distance with the sailing behaviors of vessels proves that the results are consistent with actual situations. The experimental results demonstrate the promising application of the proposed model in studying vessel traffic and in supplying reliable data for the development of maritime ITS.
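
    The paper's feature-point detector and "synthesized distance" are its own; as a hedged stand-in that shows the same two-step shape, the sketch below uses the classical Douglas-Peucker reduction for feature points and a symmetric mean closest-point distance between the reduced trajectories:

```python
import numpy as np

def _dist_to_chord(p, a, b):
    """Perpendicular distance from point p to the chord through a and b."""
    ab, ap = b - a, p - a
    denom = np.linalg.norm(ab)
    cross = ab[0] * ap[1] - ab[1] * ap[0]          # 2-D cross product
    return abs(cross) / denom if denom > 0 else np.linalg.norm(ap)

def douglas_peucker(traj, eps):
    """Keep feature points whose removal would distort the path by > eps."""
    if len(traj) < 3:
        return [traj[0], traj[-1]]
    d = np.array([_dist_to_chord(p, traj[0], traj[-1]) for p in traj[1:-1]])
    if d.max() <= eps:
        return [traj[0], traj[-1]]
    i = int(d.argmax()) + 1
    left = douglas_peucker(traj[: i + 1], eps)
    return left[:-1] + douglas_peucker(traj[i:], eps)

def trajectory_distance(ta, tb):
    """Symmetric mean closest-point distance between two (n, 2) arrays."""
    d = np.linalg.norm(ta[:, None, :] - tb[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# usage: fa = np.array(douglas_peucker(track_a, eps=0.01))
#        fb = np.array(douglas_peucker(track_b, eps=0.01))
#        print(trajectory_distance(fa, fb))
```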

  6. Forecasting conditional climate-change using a hybrid approach

    Science.gov (United States)

    Esfahani, Akbar Akbari; Friedel, Michael J.

    2014-01-01

    A novel approach is proposed to forecast the likelihood of climate-change across spatial landscape gradients. This hybrid approach involves reconstructing past precipitation and temperature using the self-organizing map technique; determining quantile trends in the climate-change variables by quantile regression modeling; and computing conditional forecasts of climate-change variables based on self-similarity in quantile trends using the fractionally differenced auto-regressive integrated moving average (ARFIMA) technique. The proposed modeling approach is applied to states (Arizona, California, Colorado, Nevada, New Mexico, and Utah) in the southwestern U.S., where conditional forecasts of climate-change variables are evaluated against recent (2012) observations, evaluated at a future time period (2030), and evaluated as future trends (2009–2059). These results have broad economic, political, and social implications because they quantify uncertainty in climate-change forecasts affecting various sectors of society. Another benefit of the proposed hybrid approach is that it can be extended to any spatiotemporal scale provided self-similarity exists.
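
    Of the three ingredients, the fractional differencing step is the easiest to make concrete. The sketch below computes the binomial weights of (1 - B)^d and applies them; it is a generic ARFIMA building block, not the authors' calibrated pipeline:

```python
import numpy as np

def frac_diff_weights(d, n_lags):
    """Binomial weights of the fractional difference operator (1 - B)^d."""
    w = np.empty(n_lags + 1)
    w[0] = 1.0
    for k in range(1, n_lags + 1):
        w[k] = w[k - 1] * (k - 1.0 - d) / k
    return w

def frac_diff(y, d):
    """Apply (1 - B)^d to a series (expanding window, no warm-up cut)."""
    y = np.asarray(y, dtype=float)
    w = frac_diff_weights(d, len(y) - 1)
    return np.array([w[: t + 1][::-1] @ y[: t + 1] for t in range(len(y))])

# 0 < d < 0.5 yields a stationary long-memory (self-similar) series; an
# ARFIMA(p, d, q) forecaster models the fractionally differenced series
# and integrates the forecasts back.
```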

  7. Qualitative modelling for the Caeté Mangrove Estuary (North Brazil): a preliminary approach to an integrated eco-social analysis

    Science.gov (United States)

    Ortiz, Marco; Wolff, Matthias

    2004-10-01

    The sustainability of different integrated management regimes for the mangrove ecosystem of the Caeté Estuary (North Brazil) was assessed using a holistic theoretical framework. As a way to demonstrate that the behaviour and trajectory of complex whole systems are not epiphenomenal to the properties of their small parts, a set of conceptual models, ranging from more reductionistic to more holistic, was formulated. These models integrate the scientific information published to date for this mangrove ecosystem. The sustainability of different management scenarios (forestry and fishery) was assessed. Since the exploitation of mangrove trees is not allowed under Brazilian law, forestry was included only for simulation purposes. The model simulations revealed that sustainability predictions of reductionistic models should not be extrapolated to holistic approaches. Forestry and fishery activities seem to be sustainable only if they are self-damped. The exploitation of the two mangrove species Rhizophora mangle and Avicennia germinans does not appear to be sustainable, and thus a rotation harvest is recommended. A similar conclusion holds for the exploitation of invertebrate species. Our results suggest that more studies should focus on the estimation of maximum sustainable yield based on a multispecies approach. Any reference to holistic sustainability based on reductionistic approaches may distort our understanding of natural complex ecosystems.

  8. Banking Crisis Early Warning Model based on a Bayesian Model Averaging Approach

    Directory of Open Access Journals (Sweden)

    Taha Zaghdoudi

    2016-08-01

    The succession of banking crises, most of which have resulted in huge economic and financial losses, has prompted several authors to study their determinants. These authors constructed early warning models to anticipate the occurrence of such crises. It is in this vein that our study takes its inspiration. In particular, we have developed an early warning model of banking crises based on a Bayesian model averaging approach. The results of this approach have allowed us to identify the involvement of declining bank profitability, deterioration of the competitiveness of traditional intermediation, banking concentration and higher real interest rates in triggering banking crises.
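
    A common way to operationalize such a model is BIC-approximated Bayesian model averaging over logistic regressions. The sketch below is an assumption-laden illustration (the record gives no formulas; predictor names merely mirror the abstract's findings; statsmodels does the fits) that returns posterior inclusion probabilities rather than a single "best" model:

```python
import numpy as np
from itertools import combinations
import statsmodels.api as sm

def bma_inclusion_probs(X, y, names):
    """BIC-weighted model averaging over all logistic-regression subsets."""
    k = X.shape[1]
    bics, subsets = [], []
    for r in range(1, k + 1):
        for idx in combinations(range(k), r):
            fit = sm.Logit(y, sm.add_constant(X[:, idx])).fit(disp=0)
            bics.append(fit.bic)
            subsets.append(set(idx))
    w = np.exp(-0.5 * (np.array(bics) - min(bics)))
    w /= w.sum()                                  # posterior model weights
    return {names[j]: w[[j in s for s in subsets]].sum() for j in range(k)}

# Hypothetical crisis indicators mirroring the abstract:
# names = ["bank_profitability", "intermediation_competitiveness",
#          "bank_concentration", "real_interest_rate"]
```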

  9. Identifying mechanistic similarities in drug responses

    KAUST Repository

    Zhao, C.

    2012-05-15

    Motivation: In early drug development, it would be beneficial to be able to identify those dynamic patterns of gene response that indicate that drugs targeting a particular gene will be likely or not to elicit the desired response. One approach would be to quantitate the degree of similarity between the responses that cells show when exposed to drugs, so that consistencies in the regulation of cellular response processes that produce success or failure can be more readily identified. Results: We track drug response using fluorescent proteins as transcription activity reporters. Our basic assumption is that drugs inducing very similar alterations in transcriptional regulation will produce similar temporal trajectories on many of the reporter proteins and hence be identified as having similarities in their mechanisms of action (MOA). The main body of this work is devoted to characterizing similarity in temporal trajectories/signals. To do so, we must first identify the key points that determine mechanistic similarity between two drug responses. Directly comparing points on the two signals is unrealistic, as it cannot handle delays and speed variations on the time axis. Hence, to capture the similarities between reporter responses, we develop an alignment algorithm that is robust to noise and time delays and is able to find all the contiguous parts of signals centered about a core alignment (reflecting a core mechanism in drug response). Applying the proposed algorithm to a range of real drug experiments shows that the results agree well with prior drug MOA knowledge.
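
    The authors' alignment algorithm (local, contiguous parts around a core alignment) is not published in this record. As a simpler point of reference, classic dynamic time warping below exhibits the same tolerance of delays and speed variation when comparing two reporter trajectories:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D reporter trajectories.
    Warping the time axis absorbs delays and speed variations."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Two responses with the same shape but different speed align closely:
t = np.linspace(0.0, 1.0, 100)
print(dtw_distance(np.sin(2 * np.pi * t), np.sin(2 * np.pi * t ** 1.3)))
```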

  10. Hypercompetitive Environments: An Agent-based model approach

    Science.gov (United States)

    Dias, Manuel; Araújo, Tanya

    Information technology (IT) environments are characterized by complex changes and rapid evolution. Globalization and the spread of technological innovation have increased the need for new strategic information resources, both from individual firms and management environments. Improvements in multidisciplinary methods and, particularly, the availability of powerful computational tools are giving researchers an increasing opportunity to investigate management environments in their true complex nature. The adoption of a complex systems approach allows for modeling business strategies from a bottom-up perspective (understood as resulting from repeated and local interaction of economic agents) without disregarding the consequences of the business strategies themselves for the individual behavior of enterprises, the emergence of interaction patterns between firms, and management environments. Agent-based models are the leading approach in this attempt.

  11. Selling addictions: Similarities in approaches between …

    Directory of Open Access Journals (Sweden)

    Laura Bond

    2010-06-01

    The findings of this study have implications for advancing public health measures for the control of alcohol by confirming the parallels between tobacco and alcohol industry operations and strategies to delay public health advances.

  12. Comparison of Two Grid Refinement Approaches for High Resolution Regional Climate Modeling: MPAS vs WRF

    Science.gov (United States)

    Leung, L.; Hagos, S. M.; Rauscher, S.; Ringler, T.

    2012-12-01

    This study compares two grid refinement approaches, a global variable-resolution model and nesting, for high-resolution regional climate modeling. The global variable-resolution model, the Model for Prediction Across Scales (MPAS), and the limited area model, the Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context with a focus on the spatial and temporal characteristics of tropical precipitation simulated by the models using the same physics package from the Community Atmosphere Model (CAM4). For MPAS, simulations have been performed with a quasi-uniform resolution global domain at coarse (1 degree) and high (0.25 degree) resolution, and a variable resolution domain with a high-resolution region at 0.25 degree configured inside a coarse resolution global domain at 1 degree resolution. Similarly, WRF has been configured to run on a coarse (1 degree) and high (0.25 degree) resolution tropical channel domain as well as a nested domain with a high-resolution region at 0.25 degree nested two-way inside the coarse resolution (1 degree) tropical channel. The variable resolution or nested simulations are compared against the high-resolution simulations that serve as the virtual reality. Both MPAS and WRF simulate 20-day Kelvin waves propagating through the high-resolution domains fairly unaffected by the change in resolution. In addition, both models respond to increased resolution with enhanced precipitation. Grid refinement induces zonal asymmetry in precipitation (heating), accompanied by anomalous zonal Walker-like circulations and standing Rossby wave signals. However, there are important differences between the anomalous patterns in MPAS and WRF due to differences in the grid refinement approaches and the sensitivity of the model physics to grid resolution. This study highlights the need for "scale aware" parameterizations in variable resolution and nested regional models.

  13. A Review of Accident Modelling Approaches for Complex Critical Sociotechnical Systems

    National Research Council Canada - National Science Library

    Qureshi, Zahid H

    2008-01-01

    This report provides a review of key traditional accident modelling approaches and their limitations, and describes new system-theoretic approaches to the modelling and analysis of accidents in safety-critical systems …

  14. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration and high computational cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
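
    To make the screening step concrete, here is a crude one-at-a-time Morris-style sketch in plain NumPy (radial design; mu* is the mean absolute elementary effect used to rank parameters). It illustrates the idea only and is not the paper's RSMSobol implementation:

```python
import numpy as np

def morris_mu_star(model, bounds, r=50, delta=0.1, seed=0):
    """mu* = mean |elementary effect| per parameter (radial OAT design)."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    k = len(bounds)
    ee = np.zeros((r, k))
    for t in range(r):
        x = lo + rng.random(k) * (hi - lo) * (1.0 - delta)  # leave step room
        f0 = model(x)
        for i in range(k):
            xi = x.copy()
            xi[i] += delta * (hi[i] - lo[i])        # one-at-a-time step
            ee[t, i] = (model(xi) - f0) / delta
    return np.abs(ee).mean(axis=0)                  # rank parameters by mu*

# e.g. mu_star = morris_mu_star(my_hydrological_model, [(0, 1)] * 14)
```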

  15. State impulsive control strategies for a two-languages competitive model with bilingualism and interlinguistic similarity

    Science.gov (United States)

    Nie, Lin-Fei; Teng, Zhi-Dong; Nieto, Juan J.; Jung, Il Hyo

    2015-07-01

    To help preserve endangered languages, we propose, in this paper, a novel two-languages competitive model with bilingualism and interlinguistic similarity, where state-dependent impulsive control strategies are introduced. The novel control model includes two control threshold values, which differ from those of previous state-dependent impulsive differential equations. Using a qualitative analysis method, we show that the control model exhibits two stable positive order-1 periodic solutions under some general conditions. Moreover, numerical simulations clearly illustrate the main theoretical results and the feasibility of the state-dependent impulsive control strategies. The numerical simulations also show that the state-dependent impulsive control strategy can be applied to other general two-languages competitive models with the desired result. The results indicate that the fractions of the two competing languages can be kept within a reasonable level under almost any circumstances. A theoretical basis is thus offered for finding new control measures to protect endangered languages.

  16. Automated pattern analysis in gesture research: similarity measuring in 3D motion capture models of communicative action

    NARCIS (Netherlands)

    Schueller, D.; Beecks, C.; Hassani, M.; Hinnell, J.; Brenger, B.; Seidl, T.; Mittelberg, I.

    2017-01-01

    The question of how to model similarity between gestures plays an important role in current studies in the domain of human communication. Most research into recurrent patterns in co-verbal gestures – manual communicative movements emerging spontaneously during conversation – is driven by qualitative …

  17. Numeric, Agent-based or System dynamics model? Which modeling approach is the best for vast population simulation?

    Science.gov (United States)

    Cimler, Richard; Tomaskova, Hana; Kuhnova, Jitka; Dolezal, Ondrej; Pscheidl, Pavel; Kuca, Kamil

    2018-02-01

    Alzheimer's disease is one of the most common mental illnesses. It is posited that more than 25% of the population is affected by some mental disease during their lifetime. Treatment of each patient draws resources from the economy concerned. Therefore, it is important to quantify the potential economic impact. Agent-based, system dynamics and numerical approaches to dynamic modeling of the population of the European Union and its patients with Alzheimer's disease are presented in this article. Simulations, their characteristics, and the results from the different modeling tools are compared. The results of these approaches are compared with EU population growth predictions from Eurostat, the statistical office of the EU. The methodology used to create the models is described and all three modeling approaches are compared. The suitability of each modeling approach for population modeling is discussed. In this case study, all three approaches gave results corresponding with the EU population prediction. Moreover, we were able to predict the number of patients with AD and, depending on the modeling method, we were also able to monitor different characteristics of the population.

  18. Calibrating a multi-model approach to defect production in high energy collision cascades

    International Nuclear Information System (INIS)

    Heinisch, H.L.; Singh, B.N.; Diaz de la Rubia, T.

    1994-01-01

    A multi-model approach to simulating defect production processes at the atomic scale is described that incorporates molecular dynamics (MD), binary collision approximation (BCA) calculations and stochastic annealing simulations. The central hypothesis is that the simple, fast computer codes capable of simulating large numbers of high energy cascades (e.g., BCA codes) can be made to yield the correct defect configurations when their parameters are calibrated using the results of the more physically realistic MD simulations. The calibration procedure is investigated using results of MD simulations of 25 keV cascades in copper. The configurations of point defects are extracted from the MD cascade simulations at the end of the collisional phase, thus providing information similar to that obtained with a binary collision model. The MD collisional phase defect configurations are used as input to the ALSOME annealing simulation code, and values of the ALSOME quenching parameters are determined that yield the best fit to the post-quenching defect configurations of the MD simulations.

  19. Simple Heuristic Approach to Introduction of the Black-Scholes Model

    Science.gov (United States)

    Yalamova, Rossitsa

    2010-01-01

    A heuristic approach to explaining the Black-Scholes option pricing model in undergraduate classes is described. The approach draws upon the method of protocol analysis to encourage students to "think aloud" so that their mental models can be surfaced. It also relies upon extensive visualizations to communicate relationships that are…
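
    For reference in such a class, the closed-form call price the article builds intuition for is only a few lines (scipy supplies the normal CDF; the parameter values are an illustrative example, not from the article):

```python
from math import exp, log, sqrt
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# At-the-money example: S = K = 100, one year, r = 5%, sigma = 20%.
print(black_scholes_call(100, 100, 1.0, 0.05, 0.2))   # ~10.45
```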

  1. A Knowledge Model Sharing Based Approach to Privacy-Preserving Data Mining

    OpenAIRE

    Hongwei Tian; Weining Zhang; Shouhuai Xu; Patrick Sharkey

    2012-01-01

    Privacy-preserving data mining (PPDM) is an important problem and is currently studied via three approaches: the cryptographic approach, data publishing, and model publishing. However, each of these approaches has some problems. The cryptographic approach does not protect the privacy of learned knowledge models and may have performance and scalability issues. Data publishing, although popular, may suffer from too much utility loss for certain types of data mining applications. …

  2. A new approach to Naturalness in SUSY models

    CERN Document Server

    Ghilencea, D M

    2013-01-01

    We review recent results that provide a new approach to the old problem of naturalness in supersymmetric models, without relying on subjective definitions for the fine-tuning associated with fixing the EW scale (to its measured value) in the presence of quantum corrections. The approach can address in a model-independent way many questions related to this problem. The results show that naturalness and its measure (fine-tuning) are an intrinsic part of the likelihood to fit the data that includes the EW scale. One important consequence is that the additional constraint of fixing the EW scale, usually not imposed in the data fits of the models, impacts on their overall likelihood to fit the data (or chi^2/ndf, ndf: number of degrees of freedom). This has negative implications for the viability of currently popular supersymmetric extensions of the Standard Model.

  3. Similarity analyses of chromatographic herbal fingerprints: a review.

    Science.gov (United States)

    Goodarzi, Mohammad; Russell, Paul J; Vander Heyden, Yvan

    2013-12-04

    Herbal medicines are becoming popular again in the developed countries because they are "natural", and people thus often assume that they are inherently safe. Herbs have also been used worldwide for many centuries in traditional medicines. Concern about their safety and efficacy has grown with increasing Western interest. Herbal materials and their extracts are very complex, often including hundreds of compounds. A thorough understanding of their chemical composition is essential for conducting a safety risk assessment. However, herbal material can show considerable variability. The chemical constituents and their amounts in a herb can differ due to growing conditions, such as climate and soil, the drying process, the harvest season, etc. Among the analytical methods, chromatographic fingerprinting has been recommended as a potential and reliable methodology for the identification and quality control of herbal medicines. Identification is needed to avoid fraud and adulteration. Currently, analyzing chromatographic herbal fingerprint data sets has become one of the most applied tools in the quality assessment of herbal materials. Mostly, the entire chromatographic profiles are used to identify or to evaluate the quality of the herbs investigated; occasionally only a limited number of compounds are considered. One approach to the safety risk assessment is to determine whether the herbal material is substantially equivalent to material that is either readily consumed in the diet, has a history of application, or has earlier been commercialized, i.e. to what is considered reference material. In order to help determine substantial equivalence using fingerprint approaches, a quantitative measurement of similarity is required. In this paper, different (dis)similarity approaches, such as (dis)similarity metrics or exploratory analysis approaches applied to herbal medicinal fingerprints, are discussed and illustrated with several case studies.
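
    Two of the similarity measures most often quoted in this literature, the correlation coefficient and the congruence (cosine) coefficient, are one-liners once the chromatograms sit on a common time axis. Peak alignment itself (e.g. by correlation-optimized warping) is the hard part and is not shown here:

```python
import numpy as np

def correlation_coefficient(x, y):
    """Pearson correlation between two aligned chromatographic profiles."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

def congruence_coefficient(x, y):
    """Cosine (congruence) coefficient; unlike the correlation, it is
    computed on the raw profiles and so is sensitive to baseline offsets."""
    return (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
```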

  4. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set theoretic methods to conceptualize, develop, and empirically derive maturity models and provide a demonstration …

  5. An integrated modeling approach to age invariant face recognition

    Science.gov (United States)

    Alvi, Fahad Bashir; Pears, Russel

    2015-03-01

    This research study proposes a novel method for face recognition based on anthropometric features, making use of an integrated approach comprising global and personalized models. The system is aimed at situations where lighting, illumination, and pose variations cause problems in face recognition. A personalized model covers the individual aging patterns, while a global model captures the general aging patterns in the database. We introduced a de-aging factor that de-ages each individual in the database test and training sets. We used the k-nearest-neighbor approach for building the personalized and global models, and regression analysis was applied to build the models. During the test phase, we resort to voting on different features. We used the FG-Net database for checking the results of our technique and achieved a 65 percent rank-1 identification rate.

  6. Variational approach to chiral quark models

    Energy Technology Data Exchange (ETDEWEB)

    Futami, Yasuhiko; Odajima, Yasuhiko; Suzuki, Akira

    1987-03-01

    A variational approach is applied to a chiral quark model to test the validity of the perturbative treatment of the pion-quark interaction based on the chiral symmetry principle. Whether the pion-quark interaction can be regarded as a perturbation is closely related to the chiral symmetry breaking radius.

  7. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  8. Learning the Task Management Space of an Aircraft Approach Model

    Science.gov (United States)

    Krall, Joseph; Menzies, Tim; Davies, Misty

    2014-01-01

    Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE) to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss future work necessary to enable uncertainty quantification.

  9. Gray-box modelling approach for description of storage tunnel

    DEFF Research Database (Denmark)

    Harremoës, Poul; Carstensen, Jacob

    1999-01-01

    The dynamics of a storage tunnel are examined using a model based on on-line measured data and a combination of simple deterministic and black-box stochastic elements. This approach, called gray-box modeling, is a new and promising methodology for giving an on-line state description of sewer systems … of the water in the overflow structures. The capacity of a pump draining the storage tunnel is estimated for two different rain events, revealing that the pump was malfunctioning during the first rain event. The proposed modeling approach can be used in automated online surveillance and control and implemented …

  10. A study of multidimensional modeling approaches for data warehouse

    Science.gov (United States)

    Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani

    2016-08-01

    A data warehouse system is used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Thus, various approaches have been proposed to overcome the difficulty. This study surveys and compares the approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is a state-of-the-art account of multidimensional modeling design.

  11. Personalized Mortality Prediction Driven by Electronic Medical Data and a Patient Similarity Metric

    Science.gov (United States)

    Lee, Joon; Maslove, David M.; Dubin, Joel A.

    2015-01-01

    Background: Clinical outcome prediction normally employs static, one-size-fits-all models that perform well for the average patient but are sub-optimal for individual patients with unique characteristics. In the era of digital healthcare, it is feasible to dynamically personalize decision support by identifying and analyzing similar past patients, in a way that is analogous to personalized product recommendation in e-commerce. Our objectives were: 1) to prove that analyzing only similar patients leads to better outcome prediction performance than analyzing all available patients, and 2) to characterize the trade-off between training data size and the degree of similarity between the training data and the index patient for whom prediction is to be made. Methods and Findings: We deployed a cosine-similarity-based patient similarity metric (PSM) to an intensive care unit (ICU) database to identify the patients most similar to each patient and subsequently to custom-build 30-day mortality prediction models. Rich clinical and administrative data from the first day in the ICU from 17,152 adult ICU admissions were analyzed. The results confirmed that using data from only a small subset of the most similar patients for training improves predictive performance in comparison with using data from all available patients. The results also showed that when too few similar patients are used for training, predictive performance degrades due to the effects of small sample sizes. Our PSM-based approach outperformed well-known ICU severity of illness scores. Although the improved prediction performance is achieved at the cost of increased computational burden, Big Data technologies can help realize personalized data-driven decision support at the point of care. Conclusions: The present study provides crucial empirical evidence for the promising potential of personalized data-driven decision support systems. With the increasing adoption of electronic medical record (EMR) systems, our …
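
    A minimal sketch of the paper's idea (cosine PSM, then a model trained only on the k most similar patients) using scikit-learn; the logistic model, k, and the data layout are illustrative assumptions, not the study's exact pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

def personalized_mortality_risk(X, y, x_index, k=1000):
    """Train on the k patients most similar to the index patient
    (cosine PSM) and return the predicted 30-day mortality risk."""
    sims = cosine_similarity(X, x_index.reshape(1, -1)).ravel()
    top = np.argsort(sims)[-k:]            # indices of the k most similar
    # (assumes both outcomes occur among the k neighbours)
    clf = LogisticRegression(max_iter=1000).fit(X[top], y[top])
    return clf.predict_proba(x_index.reshape(1, -1))[0, 1]

# k embodies the paper's trade-off: small k means high similarity but a
# small sample; large k approaches the one-size-fits-all model.
```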

  12. Intercomparison of hydrological model structures and calibration approaches in climate scenario impact projections

    Science.gov (United States)

    Vansteenkiste, Thomas; Tavakoli, Mohsen; Ntegeka, Victor; De Smedt, Florimond; Batelaan, Okke; Pereira, Fernando; Willems, Patrick

    2014-11-01

    The objective of this paper is to investigate the effects of hydrological model structure and calibration on climate change impact results in hydrology. The uncertainty in the hydrological impact results is assessed by the relative change in runoff volumes and peak and low flow extremes between historical and future climate conditions. The effect of the hydrological model structure is examined through the use of five hydrological models with different spatial resolutions and process descriptions, applied to a medium-sized catchment in Belgium. The models range from the lumped conceptual NAM, PDM and VHM models, through the intermediately detailed and distributed WetSpa model, to the fully distributed MIKE SHE model. The latter accounts for 3D groundwater processes and interacts bi-directionally with a full hydrodynamic MIKE 11 river model. After careful manual calibration of these models, accounting for the accuracy of the peak and low flow extremes and runoff subflows, and the changes in these extremes under changing rainfall conditions, the five models respond in a similar way to the climate scenarios over Belgium. Future projections of peak flows are highly uncertain, with increases as well as decreases expected depending on the climate scenario. The projections of future low flows are more uniform; low flows decrease (by up to 60%) for all models and all climate scenarios. However, the uncertainties in the impact projections are high, mainly in the dry season. With respect to model structural uncertainty, the PDM model simulates significantly higher runoff peak flows under future wet scenarios, which is explained by its specific model structure. For the low flow extremes, the MIKE SHE model projects significantly lower low flows under dry scenario conditions in comparison with the other models, probably due to its very different process descriptions for the groundwater component and the groundwater-river interactions. The effect of the model …

  13. Modelling efficient innovative work: integration of economic and social psychological approaches

    Directory of Open Access Journals (Sweden)

    Babanova Yulia

    2017-01-01

The article deals with the relevance of integrating economic and social psychological approaches to the problem of enhancing the efficiency of innovation management. The content, features and specifics of the modelling methods within each of the approaches are unfolded and options for integration are considered. The economic approach lies in the generation of the integrated matrix concept of management of innovative development of an enterprise in line with the stages of innovative work and the use of the integrated vector method for the evaluation of the innovative enterprise development level. The social psychological approach lies in the development of a system of psychodiagnostic indexes of activity resources within the scope of psychological innovative audit of enterprise management and the development of modelling methods for the balance of activity trends. Modelling the activity resources is based on a system of equations accounting for the interaction type of psychodiagnostic indexes. Integration of the two approaches includes a methodological level, a level of empirical studies and modelling methods. Options are suggested for integrating the economic and psychological approaches to analyze the available material and non-material resources of enterprises’ innovative work and to forecast an optimal development option based on the implemented modelling methods.

  14. Mechanics of ultra-stretchable self-similar serpentine interconnects

    International Nuclear Information System (INIS)

    Zhang, Yihui; Fu, Haoran; Su, Yewang; Xu, Sheng

    2013-01-01

Electrical interconnects that adopt self-similar, serpentine layouts offer exceptional levels of stretchability in systems that consist of collections of small, non-stretchable active devices in the so-called island–bridge design. This paper develops analytical models of flexibility and elastic stretchability for such structures, and establishes recursive formulae at different orders of self-similarity. The analytic solutions agree very well with finite element analyses, with both demonstrating that the elastic stretchability more than doubles when the order of the self-similar structure increases by one. Design optimization yields 90% and 50% elastic stretchability for systems with surface filling ratios of 50% and 70% of active devices, respectively. The analytic models are useful for the development of stretchable electronics that simultaneously demand large coverage of active devices, such as stretchable photovoltaics and electronic eye-ball cameras.

  15. Applications of Bayesian approach in modelling risk of malaria-related hospital mortality

    Directory of Open Access Journals (Sweden)

    Simbeye Jupiter S

    2008-02-01

Background Malaria is a major public health problem in Malawi; however, quantifying its burden in a population is a challenge. Routine hospital data provide a proxy for measuring the incidence of severe malaria and for crudely estimating morbidity rates. Using such data, this paper proposes a method to describe trends, patterns and factors associated with in-hospital mortality attributed to the disease. Methods We develop semiparametric regression models which allow joint analysis of nonlinear effects of calendar time and continuous covariates, spatially structured variation, unstructured heterogeneity, and other fixed covariates. Modelling and inference use the fully Bayesian approach via Markov Chain Monte Carlo (MCMC) simulation techniques. The methodology is applied to analyse data arising from paediatric wards in Zomba district, Malawi, between 2002 and 2003. Results and Conclusion We observe that the risk of dying in hospital is lower in the dry season, and for children who travel a distance of less than 5 km to the hospital, but increases for those who are referred to the hospital. The results also indicate significant differences in both structured and unstructured spatial effects, and the health facility effects reveal considerable differences by type of facility or practice. More importantly, our approach shows non-linearities in the effect of metrical covariates on the probability of dying in hospital. The study emphasizes that the methodological framework used provides a useful tool for analysing the data at hand and data of similar structure.

  16. Intelligent Transportation and Evacuation Planning A Modeling-Based Approach

    CERN Document Server

    Naser, Arab

    2012-01-01

Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricanes Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...

  17. Benchmarking novel approaches for modelling species range dynamics.

    Science.gov (United States)

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results affirm the clear merit of using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches

  18. The role of visual similarity and memory in body model distortions.

    Science.gov (United States)

    Saulton, Aurelie; Longo, Matthew R; Wong, Hong Yu; Bülthoff, Heinrich H; de la Rosa, Stephan

    2016-02-01

    Several studies have shown that the perception of one's own hand size is distorted in proprioceptive localization tasks. It has been suggested that those distortions mirror somatosensory anisotropies. Recent research suggests that non-corporeal items also show some spatial distortions. In order to investigate the psychological processes underlying the localization task, we investigated the influences of visual similarity and memory on distortions observed on corporeal and non-corporeal items. In experiment 1, participants indicated the location of landmarks on: their own hand, a rubber hand (rated as most similar to the real hand), and a rake (rated as least similar to the real hand). Results show no significant differences between rake and rubber hand distortions but both items were significantly less distorted than the hand. Experiments 2 and 3 explored the role of memory in spatial distance judgments of the hand, the rake and the rubber hand. Spatial representations of items measured in experiments 2 and 3 were also distorted but showed the tendency to be smaller than in localization tasks. While memory and visual similarity seem to contribute to explain qualitative similarities in distortions between the hand and non-corporeal items, those factors cannot explain the larger magnitude observed in hand distortions. Copyright © 2015. Published by Elsevier B.V.

  19. Distinguishing Features and Similarities Between Descriptive Phenomenological and Qualitative Description Research.

    Science.gov (United States)

    Willis, Danny G; Sullivan-Bolyai, Susan; Knafl, Kathleen; Cohen, Marlene Z

    2016-09-01

    Scholars who research phenomena of concern to the discipline of nursing are challenged with making wise choices about different qualitative research approaches. Ultimately, they want to choose an approach that is best suited to answer their research questions. Such choices are predicated on having made distinctions between qualitative methodology, methods, and analytic frames. In this article, we distinguish two qualitative research approaches widely used for descriptive studies: descriptive phenomenological and qualitative description. Providing a clear basis that highlights the distinguishing features and similarities between descriptive phenomenological and qualitative description research will help students and researchers make more informed choices in deciding upon the most appropriate methodology in qualitative research. We orient the reader to distinguishing features and similarities associated with each approach and the kinds of research questions descriptive phenomenological and qualitative description research address. © The Author(s) 2016.

  20. Quasirelativistic quark model in quasipotential approach

    CERN Document Server

    Matveev, V A; Savrin, V I; Sissakian, A N

    2002-01-01

The relativistic particle interaction is described within the framework of the quasipotential approach. The presentation is based on the so-called covariant simultaneous formulation of quantum field theory, whereby the theory is considered on a space-like three-dimensional hypersurface in Minkowski space. Special attention is paid to the methods of constructing various quasipotentials as well as to the applications of the quasipotential approach to describing the characteristics of relativistic particle interactions in quark models, namely: elastic hadron scattering amplitudes, the mass spectra and widths of meson decays, and the cross sections of deep inelastic lepton scattering on hadrons

  1. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

Their developments, however, are largely due to experiment-based trial and error approaches and while they do not require validation, they can be time-consuming and resource intensive. Also, one may ask, can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply a model-based synthesis method to systematically generate and evaluate alternatives in the first stage and an experiment-model based validation in the second stage. In this way, the search for alternatives is done very quickly, reliably and systematically over a wide range, while resources are preserved for focused validation of only the promising candidates in the second stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based...

  2. Spherically symmetric self-similar universe

    Energy Technology Data Exchange (ETDEWEB)

    Dyer, C C [Toronto Univ., Ontario (Canada)

    1979-10-01

    A spherically symmetric self-similar dust-filled universe is considered as a simple model of a hierarchical universe. Observable differences between the model in parabolic expansion and the corresponding homogeneous Einstein-de Sitter model are considered in detail. It is found that an observer at the centre of the distribution has a maximum observable redshift and can in principle see arbitrarily large blueshifts. It is found to yield an observed density-distance law different from that suggested by the observations of de Vaucouleurs. The use of these solutions as central objects for Swiss-cheese vacuoles is discussed.

  3. Risk-based technical specifications: Development and application of an approach to the generation of a plant specific real-time risk model

    International Nuclear Information System (INIS)

    Puglia, B.; Gallagher, D.; Amico, P.; Atefi, B.

    1992-10-01

This report describes a process developed to convert an existing PRA into a model amenable to real-time, risk-based technical specification calculations. In earlier studies (culminating in NUREG/CR-5742), several risk-based approaches to technical specifications were evaluated. A real-time approach using a plant-specific PRA capable of modeling plant configurations as they change was identified as the most comprehensive approach to control plant risk. A master fault tree logic model representative of all of the core damage sequences was developed. Portions of the system fault trees were modularized, and supercomponents comprised of component failures with similar effects were developed to reduce the size of the model and quantification times. Modifications to the master fault tree logic were made to properly model the effect of maintenance and recovery actions. Fault trees representing several actuation systems not modeled in detail in the existing PRA were added to the master fault tree logic. This process was applied to the Surry NUREG-1150 Level 1 PRA. The master logic model was confirmed. The model was then used to evaluate the frequency associated with several plant configurations using the IRRAS code. For all cases analyzed, computational time was less than three minutes. This document, Volume 2, contains appendices A, B, and C. These provide, respectively: the Surry Technical Specifications Model Database, the Surry Technical Specifications Model, and a list of supercomponents used in the Surry Technical Specifications Model

  4. General relativistic self-similar waves that induce an anomalous acceleration into the standard model of cosmology

    CERN Document Server

    Smoller, Joel

    2012-01-01

We prove that the Einstein equations in Standard Schwarzschild Coordinates close to form a system of three ordinary differential equations for a family of spherically symmetric, self-similar expansion waves, and the critical ($k=0$) Friedmann universe associated with the pure radiation phase of the Standard Model of Cosmology (FRW) is embedded as a single point in this family. Removing a scaling law and imposing regularity at the center, we prove that the family reduces to an implicitly defined one-parameter family of distinct spacetimes determined by the value of a new {\it acceleration parameter} $a$, such that $a=1$ corresponds to FRW. We prove that all self-similar spacetimes in the family are distinct from the non-critical $k\

  5. A model-based prognostic approach to predict interconnect failure using impedance analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Dae Il; Yoon, Jeong Ah [Dept. of System Design and Control Engineering. Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2016-10-15

The reliability of electronic assemblies is largely affected by the health of interconnects, such as solder joints, which provide mechanical, electrical and thermal connections between circuit components. Under field lifecycle conditions, interconnects are often subjected to a DC open circuit, one of the most common interconnect failure modes, caused by cracking. An interconnect damaged by cracking is sometimes extremely hard to detect when it is part of a daisy-chain structure, neighboring other healthy interconnects that have not yet cracked. This cracked interconnect may seem to provide a good electrical contact due to the compressive load applied by the neighboring healthy interconnects, but it can cause the occasional loss of electrical continuity under operational and environmental loading conditions in field applications. Thus, cracked interconnects can lead to the intermittent failure of electronic assemblies and eventually to permanent failure of the product or the system. This paper introduces a model-based prognostic approach to quantitatively detect and predict interconnect failure using impedance analysis and particle filtering. Impedance analysis was previously reported as a sensitive means of detecting incipient changes at the surface of interconnects, such as cracking, based on the continuous monitoring of RF impedance. To predict the time to failure, particle filtering was used as a prognostic approach, with the Paris model addressing the fatigue crack growth. To validate this approach, mechanical fatigue tests were conducted with continuous monitoring of RF impedance while degrading the solder joints under test by fatigue cracking. The test results showed that the RF impedance consistently increased as the solder joints degraded due to the growth of cracks, and particle filtering predicted the time to failure of the interconnects close to their actual times-to-failure based on the early sensitivity of RF impedance.
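
    A loosely reconstructed sketch of the prognostic step, assuming Paris-law crack growth da/dN = C·(ΔK)^m with ΔK = Δσ·√(πa), and assuming the calibrated RF-impedance signal can be treated as a noisy proxy for crack size. All constants, noise levels and the failure threshold are illustrative, not values from the study.

```python
# Particle filter over crack size: propagate with the Paris model, weight by
# a Gaussian measurement likelihood, resample, then extrapolate to failure.
import numpy as np

rng = np.random.default_rng(1)
C, m, ds = 7e-10, 3.0, 80.0           # Paris-law constants, stress range (assumed)
n_particles, a_fail = 2000, 5e-3      # failure at 5 mm crack length (assumed)

def paris_step(a, cycles=1000):
    dK = ds * np.sqrt(np.pi * a)      # stress intensity factor range
    return a + cycles * C * dK**m     # Paris law: da/dN = C * dK^m

particles = rng.uniform(1e-4, 5e-4, n_particles)      # prior crack sizes [m]
for z in np.linspace(3e-4, 1.5e-3, 20):               # stand-in impedance-derived
    particles = paris_step(particles) * rng.lognormal(0.0, 0.05, n_particles)
    w = np.exp(-0.5 * ((z - particles) / 1e-4) ** 2)  # measurement likelihood
    particles = rng.choice(particles, n_particles, p=w / w.sum())  # resample

rul = np.zeros(n_particles)                           # propagate to failure
a = particles.copy()
while (alive := a < a_fail).any():
    a[alive] = paris_step(a[alive])
    rul[alive] += 1000                                # cycles per step
print(f"Median predicted RUL: {np.median(rul):.0f} cycles")
```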

  6. A novel approach of modeling continuous dark hydrogen fermentation.

    Science.gov (United States)

    Alexandropoulou, Maria; Antonopoulou, Georgia; Lyberatos, Gerasimos

    2018-02-01

    In this study a novel modeling approach for describing fermentative hydrogen production in a continuous stirred tank reactor (CSTR) was developed, using the Aquasim modeling platform. This model accounts for the key metabolic reactions taking place in a fermentative hydrogen producing reactor, using fixed stoichiometry but different reaction rates. Biomass yields are determined based on bioenergetics. The model is capable of describing very well the variation in the distribution of metabolic products for a wide range of hydraulic retention times (HRT). The modeling approach is demonstrated using the experimental data obtained from a CSTR, fed with food industry waste (FIW), operating at different HRTs. The kinetic parameters were estimated through fitting to the experimental results. Hydrogen and total biogas production rates were predicted very well by the model, validating the basic assumptions regarding the implicated stoichiometric biochemical reactions and their kinetic rates. Copyright © 2017 Elsevier Ltd. All rights reserved.
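
    A generic sketch of the kind of CSTR mass balance that underlies such models: one Monod-limited biomass growing on substrate, with hydrogen produced at a fixed stoichiometric yield. All parameter values and the single-reaction simplification are illustrative; the actual model uses several metabolic reactions with fixed stoichiometry and bioenergetics-based biomass yields.

```python
# Steady-state behaviour of a fermentative CSTR across hydraulic retention
# times (HRT), integrating biomass and substrate balances to steady state.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks, Y_xs, Y_h2 = 0.3, 2.0, 0.1, 1.5   # 1/h, g/L, gX/gS, L H2/gS (assumed)
S_in = 30.0                                    # feed substrate, g/L (assumed)

def cstr(t, state, D):
    X, S = state
    mu = mu_max * S / (Ks + S)                 # Monod kinetics
    dX = (mu - D) * X                          # biomass balance
    dS = D * (S_in - S) - mu * X / Y_xs        # substrate balance
    return [dX, dS]

for hrt in [4.0, 8.0, 12.0]:                   # hydraulic retention times, h
    D = 1.0 / hrt                              # dilution rate = 1/HRT
    sol = solve_ivp(cstr, (0, 500), [0.1, S_in], args=(D,), rtol=1e-8)
    X, S = sol.y[:, -1]
    q_h2 = Y_h2 * (mu_max * S / (Ks + S)) * X / Y_xs   # L H2 / (L reactor * h)
    print(f"HRT {hrt:4.1f} h: X={X:.2f} g/L, S={S:.2f} g/L, H2 rate={q_h2:.2f}")
```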

  7. An interdisciplinary approach for earthquake modelling and forecasting

    Science.gov (United States)

    Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.

    2016-12-01

Earthquakes are among the most serious disasters, and may cause heavy casualties and economic losses. Especially in the past two decades, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) has become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performances of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, the precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the event at present is controlled by the event itself (self-exciting) and all the external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson process with rate λ(t), which is composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows us a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
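
    A minimal sketch of the combined conditional intensity described above: a background rate plus a self-exciting term over past earthquakes and a mutually exciting term over past non-seismic observations, here both with exponential kernels. All parameter values and event times are illustrative.

```python
# lambda(t) = mu + sum over past quakes + sum over past external observations
import numpy as np

def intensity(t, quakes, anomalies, mu=0.2, alpha=0.8, beta=1.2,
              gamma=0.5, delta=0.7):
    """Conditional intensity: background + self-exciting + external term."""
    quakes = np.asarray(quakes)
    anomalies = np.asarray(anomalies)
    self_term = alpha * np.exp(-beta * (t - quakes[quakes < t])).sum()
    ext_term = gamma * np.exp(-delta * (t - anomalies[anomalies < t])).sum()
    return mu + self_term + ext_term

quake_times = [1.0, 2.5, 2.7]          # past seismic events (days)
anomaly_times = [0.5, 2.6]             # e.g. past non-seismic anomalies (days)
for t in [2.8, 3.5, 6.0]:
    print(f"t={t:.1f}: lambda={intensity(t, quake_times, anomaly_times):.3f}")
```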

  8. A review of analogue modelling of geodynamic processes: Approaches, scaling, materials and quantification, with an application to subduction experiments

    Science.gov (United States)

    Schellart, Wouter P.; Strak, Vincent

    2016-10-01

    . In the external approach, all deformation in the system is driven by the externally imposed condition, while in the combined approach, part of the deformation is driven by buoyancy forces internal to the system. In the internal approach, all deformation is driven by buoyancy forces internal to the system and so the system is closed and no energy is added during an experimental run. In the combined approach, the externally imposed force or added energy is generally not quantified nor compared to the internal buoyancy force or potential energy of the system, and so it is not known if these experiments are properly scaled with respect to nature. The scaling theory requires that analogue models are geometrically, kinematically and dynamically similar to the natural prototype. Direct scaling of topography in laboratory models indicates that it is often significantly exaggerated. This can be ascribed to (1) The lack of isostatic compensation, which causes topography to be too high. (2) The lack of erosion, which causes topography to be too high. (3) The incorrect scaling of topography when density contrasts are scaled (rather than densities); In isostatically supported models, scaling of density contrasts requires an adjustment of the scaled topography by applying a topographic correction factor. (4) The incorrect scaling of externally imposed boundary conditions in isostatically supported experiments using the combined approach; When externally imposed forces are too high, this creates topography that is too high. Other processes that also affect surface topography in laboratory models but not in nature (or only in a negligible way) include surface tension (for models using fluids) and shear zone dilatation (for models using granular material), but these will generally only affect the model surface topography on relatively short horizontal length scales of the order of several mm across material boundaries and shear zones, respectively.

  9. Constructing a justice model based on Sen's capability approach

    OpenAIRE

    Yüksel, Sevgi; Yuksel, Sevgi

    2008-01-01

    The thesis provides a possible justice model based on Sen's capability approach. For this goal, we first analyze the general structure of a theory of justice, identifying the main variables and issues. Furthermore, based on Sen (2006) and Kolm (1998), we look at 'transcendental' and 'comparative' approaches to justice and concentrate on the sufficiency condition for the comparative approach. Then, taking Rawls' theory of justice as a starting point, we present how Sen's capability approach em...

  10. Biotic interactions in the face of climate change: a comparison of three modelling approaches.

    Directory of Open Access Journals (Sweden)

    Anja Jaeschke

Climate change is expected to alter biotic interactions, and may lead to temporal and spatial mismatches of interacting species. Although the importance of interactions for climate change risk assessments is increasingly acknowledged in observational and experimental studies, biotic interactions are still rarely incorporated in species distribution models. We assessed the potential impacts of climate change on the obligate interaction between Aeshna viridis and its egg-laying plant Stratiotes aloides in Europe, based on an ensemble modelling technique. We compared three different approaches for incorporating biotic interactions in distribution models: (1) We separately modelled each species based on climatic information, and intersected the future range overlap ('overlap approach'). (2) We modelled the potential future distribution of A. viridis with the projected occurrence probability of S. aloides as a further predictor in addition to climate ('explanatory variable approach'). (3) We calibrated the model of A. viridis in the current range of S. aloides and multiplied the future occurrence probabilities of both species ('reference area approach'). Subsequently, all approaches were compared to a single-species model of A. viridis without interactions. All approaches projected a range expansion for A. viridis. Model performance on test data and the amount of range gain differed depending on the biotic interaction approach. All interaction approaches yielded lower range gains (up to 667% lower) than the model without interaction. Regarding the contribution of algorithm and approach to the overall uncertainty, the main part of the explained variation stems from the modelling algorithm, and only a small part is attributed to the modelling approach. The comparison of the no-interaction model with the three interaction approaches emphasizes the importance of including obligate biotic interactions in projective species distribution modelling. We recommend the use of
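
    A minimal sketch of how the combination steps could look, assuming p_av and p_sa are arrays of projected future occurrence probabilities for A. viridis and S. aloides on the same grid, with a fixed presence threshold. Thresholds, array names and the random stand-in data are illustrative.

```python
# Combining two species' projections: range intersection vs. multiplied
# occurrence probabilities.
import numpy as np

rng = np.random.default_rng(2)
p_av = rng.random(10000)          # stand-in future projections, A. viridis
p_sa = rng.random(10000)          # stand-in future projections, S. aloides
thr = 0.5

# (1) Overlap approach: intersect the two projected ranges.
overlap = (p_av >= thr) & (p_sa >= thr)

# (3) Reference-area approach: multiply the occurrence probabilities.
ref_area = (p_av * p_sa) >= thr

print(f"overlap cells: {overlap.sum()}, reference-area cells: {ref_area.sum()}")
# Approach (2) differs at the model-fitting stage: the projected occurrence
# probability of S. aloides enters the A. viridis model as a predictor
# alongside climate, so it cannot be reproduced from the outputs alone.
```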

  11. Top-down approach to unified supergravity models

    International Nuclear Information System (INIS)

    Hempfling, R.

    1994-03-01

    We introduce a new approach for studying unified supergravity models. In this approach all the parameters of the grand unified theory (GUT) are fixed by imposing the corresponding number of low energy observables. This determines the remaining particle spectrum whose dependence on the low energy observables can now be investigated. We also include some SUSY threshold corrections that have previously been neglected. In particular the SUSY threshold corrections to the fermion masses can have a significant impact on the Yukawa coupling unification. (orig.)

  12. Self-similar Langmuir collapse at critical dimension

    International Nuclear Information System (INIS)

    Berge, L.; Dousseau, Ph.; Pelletier, G.; Pesme, D.

    1991-01-01

Two spherically symmetric versions of a self-similar collapse are investigated within the framework of the Zakharov equations, namely, one relative to a vectorial electric field and the other corresponding to a scalar modelling of the Langmuir field. Singular solutions of both depend on a linear time contraction rate ξ(t) = V(t* − t), where t* and V = −ξ̇ denote, respectively, the collapse time and the constant collapse velocity. It is shown that under certain conditions, only the scalar model admits self-similar solutions, varying regularly as a function of the control parameter V from the subsonic (V < 1) to the supersonic (V > 1) regime. (author)

  13. A robust Bayesian approach to modeling epistemic uncertainty in common-cause failure models

    International Nuclear Information System (INIS)

    Troffaes, Matthias C.M.; Walter, Gero; Kelly, Dana

    2014-01-01

    In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model
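
    A minimal sketch of the set-of-priors update for one alpha-factor, following Walley's imprecise Dirichlet model as described above: prior epistemic uncertainty is a lower and an upper expectation plus a learning parameter s, and updating with observed counts shifts both bounds toward the data. The numbers used here are illustrative.

```python
# Imprecise Dirichlet model: each prior in the set has strength s and prior
# mean t in [t_lower, t_upper]; the posterior mean is (s*t + n_k)/(s + N).
def posterior_bounds(t_lower, t_upper, s, n_k, n_total):
    """Lower and upper posterior expectation of alpha_k under the IDM."""
    lower = (s * t_lower + n_k) / (s + n_total)
    upper = (s * t_upper + n_k) / (s + n_total)
    return lower, upper

# Prior: alpha_2 (two components failing together) between 2% and 10%;
# learning parameter s in the 1-10 range suggested in the record above.
for s in (1.0, 10.0):
    lo, up = posterior_bounds(0.02, 0.10, s, n_k=3, n_total=50)
    print(f"s={s:4.1f}: E[alpha_2] in [{lo:.3f}, {up:.3f}]")
```

    Larger s keeps the posterior interval wider for the same data, which is exactly the role of the learning parameter: it determines how quickly the model learns from observed counts.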

  14. Dynamics based alignment of proteins: an alternative approach to quantify dynamic similarity

    Directory of Open Access Journals (Sweden)

    Lyngsø Rune

    2010-04-01

Background The dynamic motions of many proteins are central to their function. It therefore follows that the dynamic requirements of a protein are evolutionarily constrained. In order to assess and quantify this, one needs to compare the dynamic motions of different proteins. Comparing the dynamics of distinct proteins may also provide insight into how protein motions are modified by variations in sequence and, consequently, by structure. The optimal way of comparing complex molecular motions is, however, far from trivial. The majority of comparative molecular dynamics studies performed to date have relied upon prior sequence or structural alignment to define which residues are equivalent in 3-dimensional space. Results Here we discuss an alternative methodology for comparative molecular dynamics that does not require any prior alignment information. We show it is possible to align proteins based solely on their dynamics and that we can use these dynamics-based alignments to quantify the dynamic similarity of proteins. Our method was tested on 10 representative members of the PDZ domain family. Conclusions As a result of creating pair-wise dynamics-based alignments of PDZ domains, we have found evolutionarily conserved patterns in their backbone dynamics. The dynamic similarity of PDZ domains is highly correlated with their structural similarity as calculated with Dali. However, significant differences in their dynamics can be detected, indicating that sequence has a more refined role to play in protein dynamics than just dictating the overall fold. We suggest that the method should be generally applicable.

  15. Unraveling the Mechanisms of Manual Therapy: Modeling an Approach.

    Science.gov (United States)

    Bialosky, Joel E; Beneciuk, Jason M; Bishop, Mark D; Coronado, Rogelio A; Penza, Charles W; Simon, Corey B; George, Steven Z

    2018-01-01

    Synopsis Manual therapy interventions are popular among individual health care providers and their patients; however, systematic reviews do not strongly support their effectiveness. Small treatment effect sizes of manual therapy interventions may result from a "one-size-fits-all" approach to treatment. Mechanistic-based treatment approaches to manual therapy offer an intriguing alternative for identifying patients likely to respond to manual therapy. However, the current lack of knowledge of the mechanisms through which manual therapy interventions inhibit pain limits such an approach. The nature of manual therapy interventions further confounds such an approach, as the related mechanisms are likely a complex interaction of factors related to the patient, the provider, and the environment in which the intervention occurs. Therefore, a model to guide both study design and the interpretation of findings is necessary. We have previously proposed a model suggesting that the mechanical force from a manual therapy intervention results in systemic neurophysiological responses leading to pain inhibition. In this clinical commentary, we provide a narrative appraisal of the model and recommendations to advance the study of manual therapy mechanisms. J Orthop Sports Phys Ther 2018;48(1):8-18. doi:10.2519/jospt.2018.7476.

  16. Accuracy test for link prediction in terms of similarity index: The case of WS and BA models

    Science.gov (United States)

    Ahn, Min-Woo; Jung, Woo-Sung

    2015-07-01

    Link prediction is a technique that uses the topological information in a given network to infer the missing links in it. Since past research on link prediction has primarily focused on enhancing performance for given empirical systems, negligible attention has been devoted to link prediction with regard to network models. In this paper, we thus apply link prediction to two network models: The Watts-Strogatz (WS) model and Barabási-Albert (BA) model. We attempt to gain a better understanding of the relation between accuracy and each network parameter (mean degree, the number of nodes and the rewiring probability in the WS model) through network models. Six similarity indices are used, with precision and area under the ROC curve (AUC) value as the accuracy metrics. We observe a positive correlation between mean degree and accuracy, and size independence of the AUC value.
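
    A minimal sketch of the experiment described above: generate WS and BA networks with networkx, hide a random 10% of the edges, score the hidden edges against an equal number of non-edges with one similarity index (Jaccard, one of the six used), and measure accuracy by the AUC. Network sizes and parameters are illustrative.

```python
import random
import networkx as nx
from sklearn.metrics import roc_auc_score

random.seed(3)
graphs = {"WS (n=500, k=10, p=0.1)": nx.watts_strogatz_graph(500, 10, 0.1),
          "BA (n=500, m=5)": nx.barabasi_albert_graph(500, 5)}

for name, G in graphs.items():
    probe = random.sample(list(G.edges()), G.number_of_edges() // 10)
    G.remove_edges_from(probe)                    # hide the "missing" links
    non_edges = random.sample(list(nx.non_edges(G)), len(probe))

    pairs = probe + non_edges                     # score hidden links vs. non-links
    scores = [s for _, _, s in nx.jaccard_coefficient(G, pairs)]
    y_true = [1] * len(probe) + [0] * len(non_edges)
    print(f"{name}: AUC = {roc_auc_score(y_true, scores):.3f}")
```

    Sweeping the mean degree (k in WS, m in BA) or the rewiring probability in this setup is the kind of parameter study the record describes.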

  17. A Model-Driven Approach to e-Course Management

    Science.gov (United States)

    Savic, Goran; Segedinac, Milan; Milenkovic, Dušica; Hrin, Tamara; Segedinac, Mirjana

    2018-01-01

    This paper presents research on using a model-driven approach to the development and management of electronic courses. We propose a course management system which stores a course model represented as distinct machine-readable components containing domain knowledge of different course aspects. Based on this formally defined platform-independent…

  18. Modelling the Heat Consumption in District Heating Systems using a Grey-box approach

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Madsen, Henrik

    2006-01-01

identification of an overall model structure followed by data-based modelling, whereby the details of the model are identified. This approach is sometimes called grey-box modelling, but the specific approach used here does not require states to be specified. Overall, the paper demonstrates the power of the grey-box approach. (c) 2005 Elsevier B.V. All rights reserved.

  19. Different relationships between temporal phylogenetic turnover and phylogenetic similarity in two forests were detected by a new null model.

    Science.gov (United States)

    Huang, Jian-Xiong; Zhang, Jian; Shen, Yong; Lian, Ju-yu; Cao, Hong-lin; Ye, Wan-hui; Wu, Lin-fang; Bin, Yue

    2014-01-01

Ecologists have been monitoring community dynamics with the purpose of understanding the rates and causes of community change. However, there is a lack of monitoring of community dynamics from the perspective of phylogeny. We attempted to understand temporal phylogenetic turnover in a 50 ha tropical forest (Barro Colorado Island, BCI) and a 20 ha subtropical forest (Dinghushan in southern China, DHS). To obtain temporal phylogenetic turnover under random conditions, two null models were used. The first shuffled names of species, as is widely done in community phylogenetic analyses. The second simulated demographic processes with careful consideration of the variation in dispersal ability among species and the variations in mortality both among species and among size classes. With the two models, we tested the relationships between temporal phylogenetic turnover and phylogenetic similarity at different spatial scales in the two forests. Results were more consistent with previous findings using the second null model, suggesting that the second null model is more appropriate for our purposes. With the second null model, a significantly positive relationship was detected between phylogenetic turnover and phylogenetic similarity in BCI at a 10 m×10 m scale, potentially indicating phylogenetic density dependence. This relationship in DHS was significantly negative at three of five spatial scales. This could indicate abiotic filtering processes for community assembly. Using variation partitioning, we found phylogenetic similarity contributed to variation in temporal phylogenetic turnover in the DHS plot but not in the BCI plot. The mechanisms for community assembly in BCI and DHS thus differ from a phylogenetic perspective. Only the second null model detected this difference, indicating the importance of choosing a proper null model.

  20. Modeling energy fluxes in heterogeneous landscapes employing a mosaic approach

    Science.gov (United States)

    Klein, Christian; Thieme, Christoph; Priesack, Eckart

    2015-04-01

Recent studies show that uncertainties in regional and global climate and weather simulations are partly due to inadequate descriptions of the energy flux exchanges between the land surface and the atmosphere. One major shortcoming is the limitation of the grid-cell resolution, which is recommended to be at least about 3×3 km² in most models due to limitations in the model physics. To represent each individual grid cell, most models select one dominant soil type and one dominant land use type. This resolution, however, is often too coarse in regions where the spatial diversity of soil and land use types is high, e.g. in Central Europe. An elegant method to avoid the shortcoming of grid-cell resolution is the so-called mosaic approach. This approach is part of the recently developed ecosystem model framework Expert-N 5.0. The aim of this study was to analyze the impact of the characteristics of two managed fields, planted with winter wheat and potato, on the near-surface soil moistures and on the near-surface energy flux exchanges at the soil-plant-atmosphere interface. The simulated energy fluxes were compared with eddy flux tower measurements between the respective fields at the research farm Scheyern, north-west of Munich, Germany. To perform these simulations, we coupled the ecosystem model Expert-N 5.0 to an analytical footprint model. The coupled model system has the ability to calculate the mixing ratio of the surface energy fluxes at a given point within one grid cell (in this case at the flux tower between the two fields). This approach accounts for the differences between the two soil types, land use managements, and canopy properties due to footprint size dynamics. Our preliminary simulation results show that a mosaic approach can improve the modeling and analysis of energy fluxes when the land surface is heterogeneous. In this case our applied method is a promising approach for extending weather and climate models on the regional and global scales.
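
    A minimal sketch of the mosaic idea at the flux-tower point: each surface patch (here the wheat and the potato field) is simulated separately, and the modelled fluxes are mixed according to the footprint fraction contributed by each patch. The flux values and footprint weights are illustrative; in the study they come from Expert-N 5.0 and the analytical footprint model.

```python
# Footprint-weighted mixture of per-patch surface energy fluxes.
def mixed_flux(patch_fluxes, footprint_weights):
    """Return the footprint-weighted flux [W/m2] seen at the tower."""
    assert abs(sum(footprint_weights.values()) - 1.0) < 1e-9
    return sum(footprint_weights[p] * f for p, f in patch_fluxes.items())

latent_heat = {"winter_wheat": 310.0, "potato": 240.0}   # per-patch LE, W/m2
weights = {"winter_wheat": 0.65, "potato": 0.35}         # footprint fractions
print(f"LE at tower: {mixed_flux(latent_heat, weights):.1f} W/m2")
```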

  1. On self-similarity of crack layer

    Science.gov (United States)

    Botsis, J.; Kunin, B.

    1987-01-01

The crack layer (CL) theory of Chudnovsky (1986), based on principles of thermodynamics of irreversible processes, employs a crucial hypothesis of self-similarity. The self-similarity hypothesis states that the value of the damage density at a point x of the active zone at a time t coincides with that at the corresponding point in the initial (t = 0) configuration of the active zone, the correspondence being given by a time-dependent affine transformation of the space variables. In this paper, the implications of the self-similarity hypothesis for quasi-static CL propagation are investigated using polystyrene as a model material and examining the evolution of damage distribution along the trailing edge, which is approximated by a straight segment perpendicular to the crack path. The results support the self-similarity hypothesis adopted by the CL theory.

  2. Similarities between obesity in pets and children: the addiction model.

    Science.gov (United States)

    Pretlow, Robert A; Corbee, Ronald J

    2016-09-01

    Obesity in pets is a frustrating, major health problem. Obesity in human children is similar. Prevailing theories accounting for the rising obesity rates - for example, poor nutrition and sedentary activity - are being challenged. Obesity interventions in both pets and children have produced modest short-term but poor long-term results. New strategies are needed. A novel theory posits that obesity in pets and children is due to 'treats' and excessive meal amounts given by the 'pet-parent' and child-parent to obtain affection from the pet/child, which enables 'eating addiction' in the pet/child and results in parental 'co-dependence'. Pet-parents and child-parents may even become hostage to the treats/food to avoid the ire of the pet/child. Eating addiction in the pet/child also may be brought about by emotional factors such as stress, independent of parental co-dependence. An applicable treatment for child obesity has been trialled using classic addiction withdrawal/abstinence techniques, as well as behavioural addiction methods, with significant results. Both the child and the parent progress through withdrawal from specific 'problem foods', next from snacking (non-specific foods) and finally from excessive portions at meals (gradual reductions). This approach should adapt well for pets and pet-parents. Pet obesity is more 'pure' than child obesity, in that contributing factors and treatment points are essentially under the control of the pet-parent. Pet obesity might thus serve as an ideal test bed for the treatment and prevention of child obesity, with focus primarily on parental behaviours. Sharing information between the fields of pet and child obesity would be mutually beneficial.

  3. Semantic similarity from natural language and ontology analysis

    CERN Document Server

    Harispe, Sébastien; Janaqi, Stefan

    2015-01-01

    Artificial Intelligence federates numerous scientific fields in the aim of developing machines able to assist human operators performing complex treatments---most of which demand high cognitive skills (e.g. learning or decision processes). Central to this quest is to give machines the ability to estimate the likeness or similarity between things in the way human beings estimate the similarity between stimuli.In this context, this book focuses on semantic measures: approaches designed for comparing semantic entities such as units of language, e.g. words, sentences, or concepts and instances def

  4. Designing water demand management schemes using a socio-technical modelling approach.

    Science.gov (United States)

    Baki, Sotiria; Rozos, Evangelos; Makropoulos, Christos

    2018-05-01

    Although it is now widely acknowledged that urban water systems (UWSs) are complex socio-technical systems and that a shift towards a socio-technical approach is critical in achieving sustainable urban water management, still, more often than not, UWSs are designed using a segmented modelling approach. As such, either the analysis focuses on the description of the purely technical sub-system, without explicitly taking into account the system's dynamic socio-economic processes, or a more interdisciplinary approach is followed, but delivered through relatively coarse models, which often fail to provide a thorough representation of the urban water cycle and hence cannot deliver accurate estimations of the hydrosystem's responses. In this work we propose an integrated modelling approach for the study of the complete socio-technical UWS that also takes into account socio-economic and climatic variability. We have developed an integrated model, which is used to investigate the diffusion of household water conservation technologies and its effects on the UWS, under different socio-economic and climatic scenarios. The integrated model is formed by coupling a System Dynamics model that simulates the water technology adoption process, and the Urban Water Optioneering Tool (UWOT) for the detailed simulation of the urban water cycle. The model and approach are tested and demonstrated in an urban redevelopment area in Athens, Greece under different socio-economic scenarios and policy interventions. It is suggested that the proposed approach can establish quantifiable links between socio-economic change and UWS responses and therefore assist decision makers in designing more effective and resilient long-term strategies for water conservation. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. An analytic solution of a model of language competition with bilingualism and interlinguistic similarity

    Science.gov (United States)

    Otero-Espinar, M. V.; Seoane, L. F.; Nieto, J. J.; Mira, J.

    2013-12-01

An in-depth analytic study of a model of language dynamics is presented: a model which tackles the problem of the coexistence of two languages within a closed community of speakers, taking into account bilingualism and incorporating a parameter to measure the distance between languages. Previous numerical simulations of the model showed that coexistence might lead to the survival of both languages within monolingual speakers along with a bilingual community, or to the extinction of the weaker tongue, depending on the parameters. In this paper, that study is completed with thorough analytical calculations that settle the results in a robust way, and previous results are refined with some modifications. From the present analysis it is possible to almost completely determine the number and nature of the equilibrium points of the model, which depend on its parameters, as well as to build a phase space based on them. We also obtain conclusions on the way the languages evolve with time. Our rigorous considerations also suggest ways to further improve the model and facilitate the comparison of its consequences with those from other approaches or with real data.
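
    A runnable sketch in the spirit of the model discussed above: two monolingual groups (x, y) and a bilingual group z = 1 − x − y, with Abrams–Strogatz-style transition rates weighted by the status s of one language, and an interlinguistic similarity k that routes part of each switch through bilingualism. This is an illustrative reconstruction with made-up parameters, not the paper's exact system.

```python
import numpy as np
from scipy.integrate import solve_ivp

c, a, s, k = 1.0, 1.3, 0.6, 0.4   # rate, volatility, status of X, similarity

def rhs(t, u):
    x, y = u
    z = 1.0 - x - y                    # bilingual fraction
    to_x = c * s * (1 - y) ** a        # attraction of X (spoken by x + z)
    to_y = c * (1 - s) * (1 - x) ** a  # attraction of Y (spoken by y + z)
    # a fraction k of switching monolinguals become bilingual instead
    dx = (1 - k) * y * to_x + z * to_x - x * to_y
    dy = (1 - k) * x * to_y + z * to_y - y * to_x
    return [dx, dy]

sol = solve_ivp(rhs, (0, 50), [0.45, 0.45], rtol=1e-8)
x, y = sol.y[:, -1]
print(f"t=50: x={x:.3f}, y={y:.3f}, bilinguals={1 - x - y:.3f}")
```

    Scanning s and k in this setup reproduces the qualitative question the paper settles analytically: for which parameters do both languages (and a bilingual community) survive, and for which does the weaker one die out.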

  6. Rio De Janeiro and Medellin: Similar Challenges, Different Approaches

    Science.gov (United States)

    2016-03-01

Model rejects traditional policing relationship between police and citizens to ensure that crime ... pedagogical strategies to encourage citizen participation, a culture of respect for life, legality, self-regulation, matters pertaining to the Government, and

  7. The Importance of Being Hybrid for Spatial Epidemic Models:A Multi-Scale Approach

    Directory of Open Access Journals (Sweden)

    Arnaud Banos

    2015-11-01

This work addresses the spread of a disease within an urban system, defined as a network of interconnected cities. The first step consists of comparing two different approaches: a macroscopic one, based on a system of coupled Ordinary Differential Equation (ODE) Susceptible-Infected-Recovered (SIR) systems exploiting populations on nodes and flows on edges (a so-called metapopulational model), and a hybrid one, coupling ODE SIR systems on nodes and agents traveling on edges. Under homogeneous conditions (mean field approximation), this comparison leads to similar results on the outputs on which we focus (the maximum intensity of the epidemic, its duration and the time of the epidemic peak). However, when it comes to setting up epidemic control strategies, results rapidly diverge between the two approaches, and it appears that the full macroscopic model is not completely adapted to these questions. In this paper, we focus on some control strategies, which are quarantine, avoidance and risk culture, to explore the differences, advantages and disadvantages of the two models and discuss the importance of being hybrid when modeling and simulating epidemic spread at the level of a whole urban system.
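
    A minimal sketch of the macroscopic (metapopulational) variant described above: one SIR system per city (node), coupled by travel flows along the edges of the urban network. The 4-city network, travel rates and epidemic parameters are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 4
W = np.array([[0.00, 0.02, 0.01, 0.00],    # W[i, j]: travel rate city i -> j
              [0.02, 0.00, 0.02, 0.01],
              [0.01, 0.02, 0.00, 0.02],
              [0.00, 0.01, 0.02, 0.00]])
out_rate = W.sum(axis=1)
beta, gamma = 0.5, 0.2                     # infection and recovery rates

def rhs(t, u):
    S, I, R = u.reshape(3, n)
    N = S + I + R
    new_inf = beta * S * I / N
    dS = -new_inf + W.T @ S - out_rate * S            # mobility couples cities
    dI = new_inf - gamma * I + W.T @ I - out_rate * I
    dR = gamma * I + W.T @ R - out_rate * R
    return np.concatenate([dS, dI, dR])

S0 = np.full(n, 10000.0); I0 = np.zeros(n); R0 = np.zeros(n)
S0[0], I0[0] = 9990.0, 10.0                # seed the epidemic in city 0
sol = solve_ivp(rhs, (0, 200), np.concatenate([S0, I0, R0]),
                max_step=0.5, rtol=1e-8)
I = sol.y[n:2 * n]
print("peak infected per city:", I.max(axis=1).round(0))
print("time of peak per city :", sol.t[I.argmax(axis=1)].round(1))
```

    The printed quantities are the outputs the comparison focuses on (maximum intensity and time of the epidemic peak); the hybrid variant would replace the flow terms with discrete agents traveling on the edges.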

  8. A Constructive Neural-Network Approach to Modeling Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  9. Modular Modelling and Simulation Approach - Applied to Refrigeration Systems

    DEFF Research Database (Denmark)

    Sørensen, Kresten Kjær; Stoustrup, Jakob

    2008-01-01

    This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...

  10. Comparison of two novel approaches to model fibre reinforced concrete

    NARCIS (Netherlands)

    Radtke, F.K.F.; Simone, A.; Sluys, L.J.

    2009-01-01

    We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity

  11. Merits of a Scenario Approach in Dredge Plume Modelling

    DEFF Research Database (Denmark)

    Pedersen, Claus; Chu, Amy Ling Chu; Hjelmager Jensen, Jacob

    2011-01-01

Dredge plume modelling is a key tool for quantification of potential impacts to inform the EIA process. There are, however, significant uncertainties associated with the modelling at the EIA stage, when both dredging methodology and schedule are likely to be a guess at best as the dredging contractor would rarely have been appointed. Simulation of a few variations of an assumed full dredge period programme will generally not provide a good representation of the overall environmental risks associated with the programme. An alternative dredge plume modelling strategy that attempts to encapsulate uncertainties associated with preliminary dredging programmes by using a scenario-based modelling approach is presented. The approach establishes a set of representative and conservative scenarios for key factors controlling the spill and plume dispersion and simulates all combinations of e.g. dredge, climatic...

  12. Testing statistical self-similarity in the topology of river networks

    Science.gov (United States)

    Troutman, Brent M.; Mantilla, Ricardo; Gupta, Vijay K.

    2010-01-01

    Recent work has demonstrated that the topological properties of real river networks deviate significantly from predictions of Shreve's random model. At the same time the property of mean self-similarity postulated by Tokunaga's model is well supported by data. Recently, a new class of network model called random self-similar networks (RSN) that combines self-similarity and randomness has been introduced to replicate important topological features observed in real river networks. We investigate if the hypothesis of statistical self-similarity in the RSN model is supported by data on a set of 30 basins located across the continental United States that encompass a wide range of hydroclimatic variability. We demonstrate that the generators of the RSN model obey a geometric distribution, and self-similarity holds in a statistical sense in 26 of these 30 basins. The parameters describing the distribution of interior and exterior generators are tested to be statistically different and the difference is shown to produce the well-known Hack's law. The inter-basin variability of RSN parameters is found to be statistically significant. We also test generator dependence on two climatic indices, mean annual precipitation and radiative index of dryness. Some indication of climatic influence on the generators is detected, but this influence is not statistically significant with the sample size available. Finally, two key applications of the RSN model to hydrology and geomorphology are briefly discussed.
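
    A minimal sketch of the statistical test described above: fit a geometric distribution to observed RSN generator sizes by maximum likelihood and check the fit with a chi-square test. The observed counts are made up for illustration; in the study they come from generators extracted from the river networks of the 30 basins.

```python
import numpy as np
from scipy import stats

# counts[j] = number of generators with j side tributaries (illustrative)
counts = np.array([120, 61, 28, 16, 8, 4, 3])
j = np.arange(len(counts))
n = counts.sum()

p_hat = 1.0 / (1.0 + (counts * j).sum() / n)   # MLE for P(K=j) = p*(1-p)^j
expected = n * p_hat * (1 - p_hat) ** j
expected[-1] = n - expected[:-1].sum()         # fold the tail into the last bin

chi2 = ((counts - expected) ** 2 / expected).sum()
dof = len(counts) - 2                          # bins - 1 - one fitted parameter
print(f"p_hat={p_hat:.3f}, chi2={chi2:.2f}, "
      f"p-value={stats.chi2.sf(chi2, dof):.3f}")
```

    A large p-value means the geometric hypothesis is not rejected for that basin, which is the sense in which statistical self-similarity "holds" in 26 of the 30 basins.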

  13. An approach to multiscale modelling with graph grammars.

    Science.gov (United States)

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-09-01

    Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.

  14. Contrasting lexical similarity and formal definitions in SNOMED CT: consistency and implications.

    Science.gov (United States)

    Agrawal, Ankur; Elhanan, Gai

    2014-02-01

To quantify the presence of, and evaluate an approach for detection of, inconsistencies in the formal definitions of SNOMED CT (SCT) concepts utilizing a lexical method. Utilizing SCT's Procedure hierarchy, we algorithmically formulated similarity sets: groups of concepts with similar lexical structure of their fully specified names. We formulated five random samples, each with 50 similarity sets, based on the following parameters: number of parents, attributes, groups, all of the former, as well as a randomly selected control sample. All samples' sets were reviewed for types of formal definition inconsistencies: hierarchical, attribute assignment, attribute target values, groups, and definitional. For the Procedure hierarchy, 2111 similarity sets were formulated, covering 18.1% of eligible concepts. The evaluation revealed that 38% (Control) to 70% (Different relationships) of similarity sets within the samples exhibited significant inconsistencies. The rate of inconsistencies for the sample with different relationships was highly significant compared to Control, as were the numbers of attribute assignment and hierarchical inconsistencies within their respective samples. While, at this time of the HITECH initiative, the formal definitions of SCT are only a minor consideration, in the grand scheme of sophisticated, meaningful use of captured clinical data they are essential. However, a significant portion of the concepts in the most semantically complex hierarchy of SCT, the Procedure hierarchy, are modeled inconsistently in a manner that affects their computability. Lexical methods can efficiently identify such inconsistencies and possibly allow for their algorithmic resolution. Copyright © 2013 Elsevier Inc. All rights reserved.
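
    A minimal sketch of the lexical grouping step described above: collect concepts whose fully specified names share the same structure once one "variable" token position is abstracted away (here, the leading token). The grouping rule and the concept names below are invented for illustration; the real algorithm over the SCT Procedure hierarchy is more involved.

```python
from collections import defaultdict

concepts = [
    "Excision of lesion of stomach (procedure)",
    "Biopsy of lesion of stomach (procedure)",
    "Destruction of lesion of stomach (procedure)",
    "Repair of tendon of hand (procedure)",
    "Transfer of tendon of hand (procedure)",
]

similarity_sets = defaultdict(list)
for fsn in concepts:
    _, _, rest = fsn.partition(" ")     # abstract away the leading token
    similarity_sets[rest].append(fsn)

for key, members in similarity_sets.items():
    if len(members) > 1:                # a similarity set: its members'
        print(key, "->", members)       # formal definitions can be compared
```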

  15. The Edit Distance as a Measure of Perceived Rhythmic Similarity

    Directory of Open Access Journals (Sweden)

    Olaf Post

    2012-07-01

    The ‘edit distance’ (or ‘Levenshtein distance’) measure of distance between two data sets is defined as the minimum number of editing operations – insertions, deletions, and substitutions – that are required to transform one data set into the other (Orpen and Huron, 1992). This measure of distance has been applied frequently and successfully in music information retrieval, but rarely in predicting human perception of distance. In this study, we investigate the effectiveness of the edit distance as a predictor of perceived rhythmic dissimilarity under simple rhythmic alterations. Approaching rhythms as a set of pulses that are either onsets or silences, we study two types of alterations. The first experiment is designed to test the model’s accuracy for rhythms that are relatively similar; whether rhythmic variations with the same edit distance to a source rhythm are also perceived as relatively similar by human subjects. In addition, we observe whether the salience of an edit operation is affected by its metric placement in the rhythm. Instead of using a rhythm that regularly subdivides a 4/4 meter, our source rhythm is a syncopated 16-pulse rhythm, the son. Results show a high correlation between the predictions of the edit distance model and human similarity judgments (r = 0.87); a higher correlation than for the well-known generative theory of tonal music (r = 0.64). In the second experiment, we seek to assess the accuracy of the edit distance model in predicting relatively dissimilar rhythms. The stimuli used are random permutations of the son’s inter-onset intervals: 3-3-4-2-4. The results again indicate that the edit distance correlates well with the perceived rhythmic dissimilarity judgments of the subjects (r = 0.76). To gain insight into the relationships between the individual rhythms, the results are also presented by means of graphic phylogenetic trees.
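
    For readers unfamiliar with the measure, a minimal sketch of the edit distance on rhythms encoded as pulse strings follows. The son clave string matches the 3-3-4-2-4 inter-onset intervals quoted above; the variant rhythm is an invented example for demonstration.

```python
def edit_distance(a: str, b: str) -> int:
    """Minimum number of insertions, deletions and substitutions
    needed to transform string a into string b (Levenshtein)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

# Rhythms as pulse strings: '1' = onset, '0' = silence.
son = "1001001000101000"             # son clave, IOIs 3-3-4-2-4
variant = "1001001001001000"         # hypothetical variation
print(edit_distance(son, variant))   # -> 2
```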

  16. A fuzzy approach for modelling radionuclide in lake system

    International Nuclear Information System (INIS)

    Desai, H.K.; Christian, R.A.; Banerjee, J.; Patra, A.K.

    2013-01-01

    Radioactive liquid waste is generated during operation and maintenance of Pressurised Heavy Water Reactors (PHWRs). Generally, low-level liquid waste is diluted and then discharged into the nearby water body through the blowdown water discharge line, as per standard waste management practice. The effluents from nuclear installations are treated adequately and then released in a controlled manner under strict compliance with discharge criteria. An attempt was made to predict the concentration of ³H released from Kakrapar Atomic Power Station at Ratania Regulator, about 2.5 km away from the discharge point, where human exposure is expected. Scarcity of data and the complex geometry of the lake prompted the use of a heuristic approach. Under these conditions, a fuzzy rule-based approach was adopted to develop a model that could predict the ³H concentration at Ratania Regulator. Three hundred data points were generated for developing the fuzzy rules, in which the input parameters were water flow from the lake and ³H concentration at the discharge point; the output was the ³H concentration at Ratania Regulator. These data points were generated by multiple regression analysis of the original data. Using the same methodology, a further hundred data points were generated for validation of the model and compared against the predicted output of the fuzzy rule-based approach. The root mean square error of the model was 1.95, showing that the fuzzy model imitates the natural ecosystem well. -- Highlights: • Uncommon approach (fuzzy rule base) to modelling radionuclide dispersion in a lake. • Predicts ³H released from Kakrapar Atomic Power Station at a point of human exposure. • RMSE of the fuzzy model is 1.95, which means it imitates the natural ecosystem well.
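
    A toy Mamdani-style sketch of a fuzzy rule-based predictor in the spirit of the model described above; the membership functions, universes and rules below are invented placeholders, not the study's calibrated rule base.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with peak at b (requires a < b < c)."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def predict_downstream(flow, conc):
    """Toy Mamdani-style inference: inputs are lake water flow and tritium
    concentration at the discharge point; output is the concentration at
    the downstream regulator (all universes in arbitrary units)."""
    out = np.linspace(0.0, 10.0, 201)        # output universe
    flow_low = trimf(flow, -1, 0, 100)       # fuzzify the inputs
    flow_high = trimf(flow, 0, 100, 101)
    conc_low = trimf(conc, -1, 0, 10)
    conc_high = trimf(conc, 0, 10, 11)
    # Rule base (dilution logic): high discharge conc. and low flow -> high
    # downstream conc.; high conc. and high flow -> medium; low conc. -> low.
    w_high = min(conc_high, flow_low)
    w_med = min(conc_high, flow_high)
    w_low = conc_low
    agg = np.maximum.reduce([np.minimum(w_low, trimf(out, -1, 0, 4)),
                             np.minimum(w_med, trimf(out, 2, 5, 8)),
                             np.minimum(w_high, trimf(out, 6, 10, 11))])
    return float((agg * out).sum() / agg.sum())  # centroid defuzzification

print(round(predict_downstream(flow=30.0, conc=7.0), 2))
```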

  17. Data Analysis A Model Comparison Approach, Second Edition

    CERN Document Server

    Judd, Charles M; Ryan, Carey S

    2008-01-01

    This completely rewritten classic text features many new examples, insights and topics, including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques.

  18. Validation of an employee satisfaction model: A structural equation model approach

    OpenAIRE

    Ophillia Ledimo; Nico Martins

    2015-01-01

    The purpose of this study was to validate an employee satisfaction model and to determine the relationships between the different dimensions of the concept, using the structural equation modelling (SEM) approach. A cross-sectional quantitative survey design was used to collect data from a random sample (n = 759) of permanent employees of a parastatal organisation. Data were collected using the Employee Satisfaction Survey (ESS) to measure employee satisfaction dimensions. Following the steps of ...

  19. Data and Dynamics Driven Approaches for Modelling and Forecasting the Red Sea Chlorophyll

    KAUST Repository

    Dreano, Denis

    2017-01-01

    concentration and have practical applications for fisheries operation and harmful algae blooms monitoring. Modelling approaches can be divided between physics-driven (dynamical) approaches and data-driven (statistical) approaches. Dynamical models are based

  20. High dimensions - a new approach to fermionic lattice models

    International Nuclear Information System (INIS)

    Vollhardt, D.

    1991-01-01

    The limit of high spatial dimensions d, which is well-established in the theory of classical and localized spin models, is shown to be a fruitful approach also to itinerant fermion systems, such as the Hubbard model and the periodic Anderson model. Many investigations which are prohibitively difficult in finite dimensions become tractable in d=∞. At the same time, essential features of systems in d=3 and even lower dimensions are very well described by the results obtained in d=∞. A wide range of applications of this new concept (e.g., in perturbation theory, Fermi liquid theory, variational approaches, exact results, etc.) is discussed and the state of the art is reviewed. (orig.)

  1. Economic evaluation of nivolumab for the treatment of second-line advanced squamous NSCLC in Canada: a comparison of modeling approaches to estimate and extrapolate survival outcomes.

    Science.gov (United States)

    Goeree, Ron; Villeneuve, Julie; Goeree, Jeff; Penrod, John R; Orsini, Lucinda; Tahami Monfared, Amir Abbas

    2016-06-01

    Background: Lung cancer is the most common type of cancer in the world and is associated with significant mortality. Nivolumab demonstrated statistically significant improvements in progression-free survival (PFS) and overall survival (OS) for patients with advanced squamous non-small cell lung cancer (NSCLC) who were previously treated. The cost-effectiveness of nivolumab has not been assessed in Canada. A contentious component of projecting long-term cost and outcomes in cancer relates to the modeling approach adopted, with the two most common approaches being partitioned survival (PS) and Markov models. The objectives of this analysis were to estimate the cost-utility of nivolumab and to compare the results using these alternative modeling approaches. Methods: Both PS and Markov models were developed using docetaxel and erlotinib as comparators. A three-health-state model was used, consisting of progression-free, progressed disease, and death. Disease progression and time to progression were estimated by identifying best-fitting survival curves from the clinical trial data for PFS and OS. Expected costs and health outcomes were calculated by combining health-state occupancy with medical resource use and quality-of-life weights assigned to each of the three health states. The health outcomes included in the model were survival and quality-adjusted life-years (QALYs). Results: Nivolumab was found to have the highest expected per-patient cost, but also improved per-patient life-years (LYs) and QALYs. Nivolumab cost an additional $151,560 and $140,601 per QALY gained compared to docetaxel and erlotinib, respectively, using a PS model approach. The cost-utility estimates using a Markov model were very similar ($152,229 and $141,838, respectively, per QALY gained). Conclusions: Nivolumab was found to involve a trade-off between improved patient survival and QALYs, and increased cost. It was found that the use of a PS or Markov model produced very similar estimates of expected cost
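
    A minimal three-state Markov cohort sketch of the kind compared in the paper; the transition probabilities, costs, utilities and discount rate below are invented placeholders, not the study's fitted survival curves or Canadian cost inputs.

```python
import numpy as np

# States: 0 = progression-free, 1 = progressed, 2 = dead.
# Monthly transition matrices for two hypothetical arms (placeholder values).
P_new = np.array([[0.92, 0.05, 0.03],
                  [0.00, 0.90, 0.10],
                  [0.00, 0.00, 1.00]])
P_cmp = np.array([[0.85, 0.09, 0.06],
                  [0.00, 0.88, 0.12],
                  [0.00, 0.00, 1.00]])

cost = np.array([8000.0, 3000.0, 0.0])   # monthly cost per state
utility = np.array([0.75, 0.55, 0.0])    # quality-of-life weight per state

def run_cohort(P, cycles=120):
    """Trace a cohort through the model and accumulate discounted
    per-patient costs and QALYs (1.5%/year discount, monthly cycles)."""
    state = np.array([1.0, 0.0, 0.0])    # everyone starts progression-free
    disc = (1 + 0.015) ** (-np.arange(cycles) / 12)
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        total_cost += disc[t] * state @ cost
        total_qaly += disc[t] * state @ utility / 12  # QALYs accrued per month
        state = state @ P
    return total_cost, total_qaly

c1, q1 = run_cohort(P_new)
c2, q2 = run_cohort(P_cmp)
print(f"ICER: {(c1 - c2) / (q1 - q2):,.0f} per QALY gained")
```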

  2. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to get multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models accounting for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. This transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  3. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.

    Science.gov (United States)

    Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn

    2015-10-01

    Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of seventeen of these objects is represented with more than 80 site-specific parameters, about 22 of which are time-dependent, resulting in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding of the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown to be capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. In light of

  4. Synthesis of industrial applications of local approach to fracture models

    International Nuclear Information System (INIS)

    Eripret, C.

    1993-03-01

    This report gathers different applications of local-approach-to-fracture models to various industrial configurations, such as nuclear pressure vessel steel, cast duplex stainless steels, or primary circuit welds such as bimetallic welds. Because such models are developed on the basis of microstructural observations, damage mechanism analyses, and the fracture process itself, the local approach to fracture can solve problems where classical fracture mechanics concepts fail. The local approach therefore proves to be a powerful tool, which complements the standard fracture criteria used in the nuclear industry by exhibiting where and why those classical concepts become invalid. (author). 1 tab., 18 figs., 25 refs

  5. CFD Modeling of Wall Steam Condensation: Two-Phase Flow Approach versus Homogeneous Flow Approach

    International Nuclear Information System (INIS)

    Mimouni, S.; Mechitoua, N.; Foissac, A.; Hassanaly, M.; Ouraou, M.

    2011-01-01

    The present work is focused on the condensation heat transfer that plays a dominant role in many accident scenarios postulated to occur in the containment of nuclear reactors. The study compares a general multiphase approach implemented in NEPTUNE_CFD with a homogeneous model, of widespread use for engineering studies, implemented in Code_Saturne. The model implemented in NEPTUNE_CFD assumes that liquid droplets form along the wall within nucleation sites. Vapor condensation on droplets makes them grow. Once the droplet diameter reaches a critical value, gravitational forces compensate the surface tension force, and the droplets then slide over the wall and form a liquid film. This approach makes it possible to take into account simultaneously the mechanical drift between the droplets and the gas, the heat and mass transfer on droplets in the core of the flow, and the condensation/evaporation phenomena at the walls. As concerns the homogeneous approach, the motion of the liquid film due to gravitational forces is neglected, as is the volume occupied by the liquid. Both condensation models and compressible procedures are validated and compared to experimental data provided by the TOSQAN ISP47 experiment (IRSN Saclay). Computational results compare favorably with experimental data, particularly for the helium and steam volume fractions.

  6. Application of declarative modeling approaches for external events

    International Nuclear Information System (INIS)

    Anoba, R.C.

    2005-01-01

    Probabilistic Safety Assessments (PSAs) are increasingly being used as a tool for supporting the acceptability of design, procurement, construction, operation, and maintenance activities at nuclear power plants. Since the issuance of Generic Letter 88-20 and the subsequent IPE/IPEEE assessments, the NRC has issued several Regulatory Guides, such as RG 1.174, to describe the use of PSA in risk-informed regulation activities. Most PSAs have the capability to address internal events, including internal floods. As more demands are placed on the PSA to support risk-informed applications, there has been a growing need to integrate other external events (seismic, fire, etc.) into the logic models. Most external events involve spatial dependencies and usually impact the logic models at the component level. Therefore, manual insertion of external event impacts into a complex integrated fault tree model may be too cumbersome for routine uses of the PSA. Within the past year, a declarative modeling approach has been developed to automate the injection of external events into the PSA. The intent of this paper is to introduce the concept of declarative modeling in the context of external event applications. A declarative modeling approach involves the definition of rules for the injection of external event impacts into the fault tree logic. A software tool such as EPRI's XInit program can be used to interpret the pre-defined rules and automatically inject external event elements into the PSA. The injection process can easily be repeated, as required, to address plant changes, sensitivity issues, changes in boundary conditions, etc. External event elements may include fire initiating events, seismic initiating events, seismic fragilities, fire-induced hot short events, special human failure events, etc. This approach has been applied at a number of US nuclear power plants as well as a nuclear power plant in Romania. (authors)

  7. Neural Global Pattern Similarity Underlies True and False Memories.

    Science.gov (United States)

    Ye, Zhifang; Zhu, Bi; Zhuang, Liping; Lu, Zhonglin; Chen, Chuansheng; Xue, Gui

    2016-06-22

    The neural processes giving rise to human memory strength signals remain poorly understood. Inspired by formal computational models that posit a central role of global matching in memory strength, we tested a novel hypothesis that the strengths of both true and false memories arise from the global similarity of an item's neural activation pattern during retrieval to that of all the studied items during encoding (i.e., the encoding-retrieval neural global pattern similarity [ER-nGPS]). We revealed multiple ER-nGPS signals that carried distinct information and contributed differentially to true and false memories: Whereas the ER-nGPS in the parietal regions reflected semantic similarity and was scaled with the recognition strengths of both true and false memories, ER-nGPS in the visual cortex contributed solely to true memory. Moreover, ER-nGPS differences between the parietal and visual cortices were correlated with frontal monitoring processes. By combining computational and neuroimaging approaches, our results advance a mechanistic understanding of memory strength in recognition. What neural processes give rise to memory strength signals, and lead to our conscious feelings of familiarity? Using fMRI, we found that the memory strength of a given item depends not only on how it was encoded during learning, but also on the similarity of its neural representation with other studied items. The global neural matching signal, mainly in the parietal lobule, could account for the memory strengths of both studied and unstudied items. Interestingly, a different global matching signal, originating from the visual cortex, could distinguish true from false memories. The findings reveal multiple neural mechanisms underlying the memory strengths of events registered in the brain.
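
    A rough sketch of the ER-nGPS computation on toy data: the measure is simply the mean correlation between an item's retrieval pattern and all encoding patterns, so a studied probe typically yields a slightly higher value than an unrelated lure. Item counts, voxel counts and noise levels below are invented.

```python
import numpy as np

def er_ngps(retrieval_pattern, encoding_patterns):
    """Encoding-retrieval neural global pattern similarity: mean Pearson
    correlation between one item's retrieval activation pattern and the
    activation patterns of *all* studied items at encoding."""
    r = [np.corrcoef(retrieval_pattern, enc)[0, 1] for enc in encoding_patterns]
    return float(np.mean(r))

rng = np.random.default_rng(0)
encoding = rng.standard_normal((40, 500))              # 40 studied items x 500 voxels
probe = encoding[3] + 0.5 * rng.standard_normal(500)   # noisy "old" probe
lure = rng.standard_normal(500)                        # unrelated "new" probe
print(er_ngps(probe, encoding), er_ngps(lure, encoding))
```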

  8. An electrophysiological signature of summed similarity in visual working memory

    NARCIS (Netherlands)

    Van Vugt, Marieke K.; Sekuler, Robert; Wilson, Hugh R.; Kahana, Michael J.

    Summed-similarity models of short-term item recognition posit that participants base their judgments of an item's prior occurrence on that item's summed similarity to the ensemble of items on the remembered list. We examined the neural predictions of these models in 3 short-term recognition memory

  9. BioModels: expanding horizons to include more modelling approaches and formats.

    Science.gov (United States)

    Glont, Mihai; Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Malik-Sheriff, Rahuman S; Chelliah, Vijayalakshmi; Le Novère, Nicolas; Hermjakob, Henning

    2018-01-04

    BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing.

  10. Weighted similarity-based clustering of chemical structures and bioactivity data in early drug discovery.

    Science.gov (United States)

    Perualila-Tan, Nolen Joy; Shkedy, Ziv; Talloen, Willem; Göhlmann, Hinrich W H; Moerbeke, Marijke Van; Kasim, Adetayo

    2016-08-01

    The modern process of discovering candidate molecules in the early drug discovery phase includes a wide range of approaches to extract vital information from the intersection of biology and chemistry. A typical strategy in compound selection involves compound clustering based on chemical similarity to obtain representative, chemically diverse compounds (not incorporating potency information). In this paper, we propose an integrative clustering approach that makes use of both biological (compound efficacy) and chemical (structural features) data sources for the purpose of discovering a subset of compounds with aligned structural and biological properties. The datasets are integrated at the similarity level by assigning complementary weights to produce a weighted similarity matrix, which serves as a generic input to any clustering algorithm. This new analysis workflow is a semi-supervised method since, after the determination of clusters, a secondary analysis is performed to find differentially expressed genes associated with the derived integrated cluster(s) to further explain the compound-induced biological effects inside the cell. In this paper, datasets from two drug development oncology projects are used to illustrate the usefulness of the weighted similarity-based clustering approach to integrate multi-source high-dimensional information to aid drug discovery. Compounds that are structurally and biologically similar to the reference compounds are discovered using this proposed integrative approach.
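
    A minimal sketch of the similarity-level integration step, assuming precomputed chemical and biological similarity matrices; the toy matrices and the weight below are invented, and correlation is used as a stand-in for the paper's actual similarity measures (e.g. Tanimoto similarity of fingerprints).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def weighted_similarity(S_chem, S_bio, w=0.5):
    """Integrate chemical and biological similarity matrices at the
    similarity level with complementary weights (w and 1 - w)."""
    return w * S_chem + (1.0 - w) * S_bio

# Toy similarity matrices for 6 hypothetical compounds (values invented).
rng = np.random.default_rng(1)
X_chem, X_bio = rng.random((6, 16)), rng.random((6, 8))
S_chem = np.corrcoef(X_chem)   # stand-in for a chemical similarity matrix
S_bio = np.corrcoef(X_bio)     # stand-in for a bioactivity similarity matrix

S = weighted_similarity(S_chem, S_bio, w=0.4)
D = 1.0 - S                    # similarity -> dissimilarity
np.fill_diagonal(D, 0.0)
Z = linkage(squareform(D, checks=False), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))  # integrated cluster labels
```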

  11. Virtual screening by a new Clustering-based Weighted Similarity Extreme Learning Machine approach.

    Science.gov (United States)

    Pasupa, Kitsuchart; Kudisthalert, Wasu

    2018-01-01

    Machine learning techniques are becoming popular in virtual screening tasks. One of the powerful machine learning algorithms is the Extreme Learning Machine (ELM), which has been applied to many applications and has recently been applied to virtual screening. We propose the Weighted Similarity ELM (WS-ELM), which is based on a single-layer feed-forward neural network in conjunction with 16 different similarity coefficients as activation function in the hidden layer. It is known that the performance of conventional ELM is not robust due to random weight selection in the hidden layer. Thus, we propose a Clustering-based WS-ELM (CWS-ELM) that deterministically assigns weights by utilising clustering algorithms, i.e. k-means clustering and support vector clustering. The experiments were conducted on one of the most challenging datasets, the Maximum Unbiased Validation dataset, which contains 17 activity classes carefully selected from PubChem. The proposed algorithms were then compared with other machine learning techniques such as support vector machines, random forests, and similarity searching. The results show that CWS-ELM in conjunction with support vector clustering yields the best performance when utilised together with the Sokal/Sneath(1) coefficient. Furthermore, the ECFP_6 fingerprint presents the best results in our framework compared to the other types of fingerprints, namely ECFP_4, FCFP_4, and FCFP_6.

  12. Object-Oriented Approach to Modeling Units of Pneumatic Systems

    Directory of Open Access Journals (Sweden)

    Yu. V. Kyurdzhiev

    2014-01-01

    The article shows the relevance of object-oriented programming approaches when modeling pneumatic units (PU). Based on the analysis of the calculation schemes of pneumatic system aggregates, two basic objects, namely a flow cavity and a material point, were highlighted. Basic interactions of the objects are defined. Cavity-cavity interaction: exchange of matter and energy with mass flows. Cavity-point interaction: force interaction and exchange of energy in the form of work. Point-point interaction: force interaction, elastic interaction, inelastic interaction, and intervals of displacement. The authors have developed mathematical models of the basic objects and interactions, implemented using object-oriented programming. Mathematical models of the elements of the PU design scheme are implemented in classes derived from the base class. These classes implement the models of the flow cavity, piston, diaphragm, short channel, diaphragm opened by a given law, spring, bellows, elastic collision, inelastic collision, friction, PU stages with limited movement, etc. Numerical integration of the differential equations of the mathematical models of the PU design scheme elements is based on the fourth-order Runge-Kutta method. On request, each class performs one tact of integration, i.e. calculation of the method's coefficients. The paper presents an integration algorithm for the system of differential equations. All objects of the PU design scheme are placed in a unidirectional list. An iterator loop initiates the integration tact of all the objects in the list, and every fourth iteration makes the transition to the next step of integration. The calculation process stops when any object raises a shutdown flag. The proposed approach was tested in the calculation of a number of PU designs. Compared with traditional modeling approaches, the proposed method offers easy enhancement, code reuse, and high reliability
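
    A minimal sketch of the object structure and integration loop described above; the class names and the single-cavity physics are invented for illustration, and the original work elaborates many more derived classes.

```python
class SimObject:
    """Base class: every design-scheme element owns a state y and a
    derivative law, and may raise a shutdown flag to stop the run."""
    shutdown = False
    def derivative(self):
        raise NotImplementedError

class FlowCavity(SimObject):
    """Lumped cavity: pressure rises with net inflow (toy law)."""
    def __init__(self, pressure, inflow):
        self.y, self.inflow = pressure, inflow
    def derivative(self):
        return 0.1 * self.inflow - 0.01 * self.y

def rk4_step(objects, h):
    """Classic 4th-order Runge-Kutta over every object in the list;
    each pass of the inner loop is one 'tact' for all objects."""
    y0 = [o.y for o in objects]
    k = []
    for a in (0.0, 0.5, 0.5, 1.0):
        for i, o in enumerate(objects):
            o.y = y0[i] + a * h * (k[-1][i] if k else 0.0)
        k.append([o.derivative() for o in objects])
    for i, o in enumerate(objects):
        o.y = y0[i] + h / 6.0 * (k[0][i] + 2 * k[1][i] + 2 * k[2][i] + k[3][i])

units = [FlowCavity(1.0e5, 2.0), FlowCavity(2.0e5, -1.0)]
for step in range(100):
    if any(o.shutdown for o in units):
        break
    rk4_step(units, h=1e-3)
print([round(o.y, 1) for o in units])
```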

  13. Mathematical Modeling in Mathematics Education: Basic Concepts and Approaches

    Science.gov (United States)

    Erbas, Ayhan Kürsat; Kertil, Mahmut; Çetinkaya, Bülent; Çakiroglu, Erdinç; Alacaci, Cengiz; Bas, Sinem

    2014-01-01

    Mathematical modeling and its role in mathematics education have been receiving increasing attention in Turkey, as in many other countries. The growing body of literature on this topic reveals a variety of approaches to mathematical modeling and related concepts, along with differing perspectives on the use of mathematical modeling in teaching and…

  14. A little similarity goes a long way: the effects of peripheral but self-revealing similarities on improving and sustaining interracial relationships.

    Science.gov (United States)

    West, Tessa V; Magee, Joe C; Gordon, Sarah H; Gullett, Lindy

    2014-07-01

    Integrating theory on close relationships and intergroup relations, we construct a manipulation of similarity that we demonstrate can improve interracial interactions across different settings. We find that manipulating perceptions of similarity on self-revealing attributes that are peripheral to the interaction improves interactions in cross-race dyads and racially diverse task groups. In a getting-acquainted context, we demonstrate that the belief that one's different-race partner is similar to oneself on self-revealing, peripheral attributes leads to less anticipatory anxiety than the belief that one's partner is similar on peripheral, non-self-revealing attributes. In another dyadic context, we explore the range of benefits that perceptions of peripheral, self-revealing similarity can bring to different-race interaction partners and find (a) less anxiety during interaction, (b) greater interest in sustained contact with one's partner, and (c) stronger accuracy in perceptions of one's partner's relationship intentions. By contrast, participants in same-race interactions were largely unaffected by these manipulations of perceived similarity. Our final experiment shows that among small task groups composed of racially diverse individuals, those whose members perceive peripheral, self-revealing similarity perform better than those whose members perceive dissimilarity. Implications for using this approach to improve interracial interactions across different goal-driven contexts are discussed.

  15. Integrating UML, the Q-model and a Multi-Agent Approach in Process Specifications and Behavioural Models of Organisations

    Directory of Open Access Journals (Sweden)

    Raul Savimaa

    2005-08-01

    Efficient estimation and representation of an organisation's behaviour requires specification of business processes and modelling of actors' behaviour. Therefore, the existing classical approaches that concentrate only on planned processes are not suitable, and an approach that integrates process specifications with behavioural models of actors should be used instead. The present research indicates that a suitable approach should be based on interactive computing. This paper examines the integration of UML diagrams for process specifications, the Q-model specifications for modelling timing criteria of existing and planned processes, and a multi-agent approach for simulating the non-deterministic behaviour of human actors in an organisation. The corresponding original methodology is introduced and some of its applications are reviewed as case studies.

  16. A rule-based approach to model checking of UML state machines

    Science.gov (United States)

    Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz

    2016-12-01

    In the paper, a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases assurance that the implemented system meets the user-defined requirements.

  17. A comprehensive dynamic modeling approach for giant magnetostrictive material actuators

    International Nuclear Information System (INIS)

    Gu, Guo-Ying; Zhu, Li-Min; Li, Zhi; Su, Chun-Yi

    2013-01-01

    In this paper, a comprehensive modeling approach for a giant magnetostrictive material actuator (GMMA) is proposed based on the description of nonlinear electromagnetic behavior, the magnetostrictive effect and frequency response of the mechanical dynamics. It maps the relationships between current and magnetic flux at the electromagnetic part to force and displacement at the mechanical part in a lumped parameter form. Towards this modeling approach, the nonlinear hysteresis effect of the GMMA appearing only in the electrical part is separated from the linear dynamic plant in the mechanical part. Thus, a two-module dynamic model is developed to completely characterize the hysteresis nonlinearity and the dynamic behaviors of the GMMA. The first module is a static hysteresis model to describe the hysteresis nonlinearity, and the cascaded second module is a linear dynamic plant to represent the dynamic behavior. To validate the proposed dynamic model, an experimental platform is established. Then, the linear dynamic part and the nonlinear hysteresis part of the proposed model are identified in sequence. For the linear part, an approach based on axiomatic design theory is adopted. For the nonlinear part, a Prandtl–Ishlinskii model is introduced to describe the hysteresis nonlinearity and a constrained quadratic optimization method is utilized to identify its coefficients. Finally, experimental tests are conducted to demonstrate the effectiveness of the proposed dynamic model and the corresponding identification method. (paper)
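
    A minimal sketch of the nonlinear hysteresis module, modelled here as a discrete Prandtl-Ishlinskii superposition of play operators; the thresholds and weights below are illustrative, not the identified GMMA coefficients.

```python
import numpy as np

def prandtl_ishlinskii(u, radii, weights):
    """Discrete Prandtl-Ishlinskii hysteresis model: weighted sum of
    play (backlash) operators with thresholds given by radii."""
    z = np.zeros(len(radii))  # play operator states
    y = np.empty(len(u))
    for t, ut in enumerate(u):
        # Play operator: each state follows the input within a band +/- r.
        z = np.clip(z, ut - radii, ut + radii)
        y[t] = weights @ z
    return y

u = np.sin(np.linspace(0, 4 * np.pi, 400))   # input current (normalized)
radii = np.array([0.0, 0.1, 0.2, 0.4])       # illustrative thresholds
weights = np.array([0.5, 0.25, 0.15, 0.10])  # illustrative weights
y = prandtl_ishlinskii(u, radii, weights)    # hysteretic output
# In the two-module model, y would then drive the identified linear
# dynamic plant representing the mechanical part.
```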

  18. Bystander Approaches: Empowering Students to Model Ethical Sexual Behavior

    Science.gov (United States)

    Lynch, Annette; Fleming, Wm. Michael

    2005-01-01

    Sexual violence on college campuses is well documented. Prevention education has emerged as an alternative to victim-- and perpetrator--oriented approaches used in the past. One sexual violence prevention education approach focuses on educating and empowering the bystander to become a point of ethical intervention. In this model, bystanders to…

  19. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.

  20. Modeling the angular motion dynamics of spacecraft with a magnetic attitude control system based on experimental studies and dynamic similarity

    Science.gov (United States)

    Kulkov, V. M.; Medvedskii, A. L.; Terentyev, V. V.; Firsyuk, S. O.; Shemyakov, A. O.

    2017-12-01

    The problem of spacecraft attitude control using electromagnetic systems interacting with the Earth's magnetic field is considered. A set of dimensionless parameters has been formed to investigate the spacecraft orientation regimes based on dynamically similar models. The results of experimental studies of small spacecraft with a magnetic attitude control system can be extrapolated to the in-orbit spacecraft motion control regimes by using the methods of dimensional analysis and similarity theory.

  1. A Cognitive Modeling Approach to Strategy Formation in Dynamic Decision Making

    Directory of Open Access Journals (Sweden)

    Sabine Prezenski

    2017-08-01

    Decision-making is a high-level cognitive process based on cognitive processes like perception, attention, and memory. Real-life situations require series of decisions to be made, with each decision depending on previous feedback from a potentially changing environment. To gain a better understanding of the underlying processes of dynamic decision-making, we applied the method of cognitive modeling on a complex rule-based category learning task. Here, participants first needed to identify the conjunction of two rules that defined a target category and later adapt to a reversal of feedback contingencies. We developed an ACT-R model for the core aspects of this dynamic decision-making task. An important aim of our model was that it provides a general account of how such tasks are solved and, with minor changes, is applicable to other stimulus materials. The model was implemented as a mixture of an exemplar-based and a rule-based approach which incorporates perceptual-motor and metacognitive aspects as well. The model solves the categorization task by first trying out one-feature strategies and then, as a result of repeated negative feedback, switching to two-feature strategies. Overall, this model solves the task in a similar way as participants do, including generally successful initial learning as well as reversal learning after the change of feedback contingencies. Moreover, the fact that not all participants were successful in the two learning phases is also reflected in the modeling data. However, we found a larger variance and a lower overall performance of the modeling data as compared to the human data which may relate to perceptual preferences or additional knowledge and rules applied by the participants. In a next step, these aspects could be implemented in the model for a better overall fit. In view of the large interindividual differences in decision performance between participants, additional information about the underlying

  2. A Cognitive Modeling Approach to Strategy Formation in Dynamic Decision Making.

    Science.gov (United States)

    Prezenski, Sabine; Brechmann, André; Wolff, Susann; Russwinkel, Nele

    2017-01-01

    Decision-making is a high-level cognitive process based on cognitive processes like perception, attention, and memory. Real-life situations require series of decisions to be made, with each decision depending on previous feedback from a potentially changing environment. To gain a better understanding of the underlying processes of dynamic decision-making, we applied the method of cognitive modeling on a complex rule-based category learning task. Here, participants first needed to identify the conjunction of two rules that defined a target category and later adapt to a reversal of feedback contingencies. We developed an ACT-R model for the core aspects of this dynamic decision-making task. An important aim of our model was that it provides a general account of how such tasks are solved and, with minor changes, is applicable to other stimulus materials. The model was implemented as a mixture of an exemplar-based and a rule-based approach which incorporates perceptual-motor and metacognitive aspects as well. The model solves the categorization task by first trying out one-feature strategies and then, as a result of repeated negative feedback, switching to two-feature strategies. Overall, this model solves the task in a similar way as participants do, including generally successful initial learning as well as reversal learning after the change of feedback contingencies. Moreover, the fact that not all participants were successful in the two learning phases is also reflected in the modeling data. However, we found a larger variance and a lower overall performance of the modeling data as compared to the human data which may relate to perceptual preferences or additional knowledge and rules applied by the participants. In a next step, these aspects could be implemented in the model for a better overall fit. In view of the large interindividual differences in decision performance between participants, additional information about the underlying cognitive processes from

  3. A hybrid modeling approach for option pricing

    Science.gov (United States)

    Hajizadeh, Ehsan; Seifi, Abbas

    2011-11-01

    The complexity of option pricing has led many researchers to develop sophisticated models for such purposes. The commonly used Black-Scholes model suffers from a number of limitations. One of these is the assumption that the underlying probability distribution is lognormal, which is controversial. We propose a couple of hybrid models to reduce these limitations and enhance option pricing ability. The key input to an option pricing model is volatility. In this paper, we use three popular GARCH-type models for estimating volatility. Then, we develop two non-parametric models based on neural networks and neuro-fuzzy networks to price call options on the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and neuro-fuzzy network models outperform it. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options the neural network model performs better, while for both in-the-money and out-of-the-money options the neuro-fuzzy model provides better results.
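
    For reference, the Black-Scholes benchmark the hybrid models are compared against can be sketched as follows; in the hybrid setup, sigma would come from one of the fitted GARCH-type models rather than being a fixed constant. The numbers below are illustrative only.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price; sigma could be supplied by a
    fitted GARCH-type volatility model as in the hybrid approach above."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# e.g. an at-the-money call, 3 months to expiry, 20% annualized volatility
print(round(bs_call(S=1000.0, K=1000.0, T=0.25, r=0.02, sigma=0.20), 2))
```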

  4. Modelling of ductile and cleavage fracture by local approach

    International Nuclear Information System (INIS)

    Samal, M.K.; Dutta, B.K.; Kushwaha, H.S.

    2000-08-01

    This report describes the modelling of ductile and cleavage fracture processes by the local approach. It is now well known that the conventional fracture mechanics method based on single-parameter criteria is not adequate to model fracture processes, because of the effects of flaw size and geometry, loading type and loading rate on the fracture resistance behaviour of any structure. Hence, it is questionable to use the same fracture resistance curves as determined from standard tests in the analysis of real-life components. There is therefore a need for a method in which the parameters used for the analysis are true material properties, i.e. independent of geometry and size. One of the solutions to the above problem is the use of local approaches. These approaches have been extensively studied and applied to different materials (including SA33 Gr.6) in this report. Each method has been studied and reported in a separate section. This report is divided into five sections. Section-I gives a brief review of the fundamentals of the fracture process. Section-II deals with modelling of ductile fracture by locally uncoupled models. In this section, the critical cavity growth parameters of the different models have been determined for the primary heat transport (PHT) piping material of the Indian pressurised heavy water reactor (PHWR). A comparative study has been made among the different models. The dependency of the critical parameters on the stress triaxiality factor has also been studied. It is observed that Rice and Tracey's model is the most suitable one, but its parameters are not fully independent of the triaxiality factor. For this purpose, a modification to Rice and Tracey's model is suggested in Section-III. Section-IV deals with modelling of the ductile fracture process by locally coupled models. Section-V deals with modelling of the cleavage fracture process by Beremin's model, which is based on Weibull's

  5. Repetitive Identification of Structural Systems Using a Nonlinear Model Parameter Refinement Approach

    Directory of Open Access Journals (Sweden)

    Jeng-Wen Lin

    2009-01-01

    This paper proposes a statistical confidence-interval-based nonlinear model parameter refinement approach for the health monitoring of structural systems subjected to seismic excitations. The developed model refinement approach uses the 95% confidence interval of the estimated structural parameters to determine their statistical significance in a least-squares regression setting. When a parameter's confidence interval covers zero, it is statistically justifiable to truncate that parameter. The remaining parameters repetitively undergo this parameter-sifting process for model refinement until the parameters' statistical significance cannot be further improved. This newly developed model refinement approach is implemented for the series models of multivariable polynomial expansions: the linear, the Taylor series, and the power series model, leading to more accurate identification as well as a more controllable design for system vibration control. Because the statistical-regression-based model refinement approach is intrinsically used to process a “batch” of data and obtain an ensemble average estimation, such as the structural stiffness, the Kalman filter and one of its extended versions are introduced into the refined power series model for structural health monitoring.
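
    A minimal sketch of the confidence-interval-based sifting loop, here with ordinary least squares and statsmodels on invented data; the paper applies the idea to nonlinear structural models under seismic excitation, so this is only the core mechanism.

```python
import numpy as np
import statsmodels.api as sm

def refine(X, y, names):
    """Repeatedly drop regressors whose 95% confidence interval covers
    zero (keeping the intercept) until all remaining terms are significant."""
    keep = list(range(X.shape[1]))
    while True:
        fit = sm.OLS(y, sm.add_constant(X[:, keep])).fit()
        ci = fit.conf_int(alpha=0.05)[1:]  # skip the intercept row
        insignificant = [k for k, (lo, hi) in zip(keep, ci) if lo <= 0.0 <= hi]
        if not insignificant:
            return fit, [names[k] for k in keep]
        keep.remove(insignificant[0])      # sift one term per pass

# Toy data: y depends on x1 and x1**3 but not on x2 (values invented).
rng = np.random.default_rng(2)
x1, x2 = rng.standard_normal(200), rng.standard_normal(200)
y = 2.0 * x1 - 0.5 * x1**3 + 0.1 * rng.standard_normal(200)
X = np.column_stack([x1, x2, x1**3])
fit, kept = refine(X, y, ["x1", "x2", "x1^3"])
print(kept, np.round(fit.params[1:], 2))   # expected: ['x1', 'x1^3'] retained
```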

  6. Comparing Two Approaches for Engineering Education Development

    DEFF Research Database (Denmark)

    Edström, Kristina; Kolmos, Anette

    2012-01-01

    During the last decade there have been two dominating models for reforming engineering education: Problem/Project Based Learning (PBL) and the CDIO Initiative. The aim of this paper is to compare the PBL and CDIO approaches to engineering education reform, to identify and explain similarities...... and differences. CDIO and PBL will each be defined and compared in terms of the original need analysis, underlying educational philosophy and the essentials of the respective approaches to engineering education. In these respects we see many similarities. Circumstances that explain differences in history...... approaches have much in common and can be combined, and especially that the practitioners have much to learn from each other’s experiences through a dialogue between the communities. This structured comparison will potentially indicate specifically what an institution experienced in one of the communities...

  7. Phytomonas serpens: immunological similarities with the human trypanosomatid pathogens.

    Science.gov (United States)

    Santos, André L S; d'Avila-Levy, Claudia M; Elias, Camila G R; Vermelho, Alane B; Branquinha, Marta H

    2007-07-01

    The present review provides an overview of recent discoveries concerning the immunological similarities between Phytomonas serpens, a tomato parasite, and human trypanosomatid pathogens, with special emphasis on peptidases. Leishmania spp. and Trypanosoma cruzi express peptidases that are well-known virulence factors, named leishmanolysin and cruzipain. P. serpens synthesizes two distinct classes of proteolytic enzymes, metallo- and cysteine-type peptidases, that share common epitopes with leishmanolysin and cruzipain, respectively. The leishmanolysin-like and cruzipain-like molecules from P. serpens participate in several biological processes including cellular growth and adhesion to the salivary glands of Oncopeltus fasciatus, a phytophagous insect experimental model. Since previous reports demonstrated that immunization of mice with P. serpens induced a partial protective immune response against T. cruzi, this plant trypanosomatid may be a suitable candidate for vaccine studies. Moreover, comparative approaches in the Trypanosomatidae family may be useful to understand kinetoplastid biology, biochemistry and evolution.

  8. Evaluating gender similarities and differences using metasynthesis.

    Science.gov (United States)

    Zell, Ethan; Krizan, Zlatan; Teeter, Sabrina R

    2015-01-01

    Despite the common lay assumption that males and females are profoundly different, Hyde (2005) used data from 46 meta-analyses to demonstrate that males and females are highly similar. Nonetheless, the gender similarities hypothesis has remained controversial. Since Hyde's provocative report, there has been an explosion of meta-analytic interest in psychological gender differences. We utilized this enormous collection of 106 meta-analyses and 386 individual meta-analytic effects to reevaluate the gender similarities hypothesis. Furthermore, we employed a novel data-analytic approach called metasynthesis (Zell & Krizan, 2014) to estimate the average difference between males and females and to explore moderators of gender differences. The average, absolute difference between males and females across domains was relatively small (d = 0.21, SD = 0.14), with the majority of effects being either small (46%) or very small (39%). Magnitude of differences fluctuated somewhat as a function of the psychological domain (e.g., cognitive variables, social and personality variables, well-being), but remained largely constant across age, culture, and generations. These findings provide compelling support for the gender similarities hypothesis, but also underscore conditions under which gender differences are most pronounced.

  9. 3D Facial Similarity Measure Based on Geodesic Network and Curvatures

    Directory of Open Access Journals (Sweden)

    Junli Zhao

    2014-01-01

    Automated 3D facial similarity measure is a challenging and valuable research topic in anthropology and computer graphics. It is widely used in various fields, such as criminal investigation, kinship confirmation, and face recognition. This paper proposes a 3D facial similarity measure method based on a combination of geodesic and curvature features. Firstly, a geodesic network is generated for each face with geodesics and iso-geodesics determined, and these network points are adopted as the correspondence across face models. Then, four metrics associated with curvatures, that is, the mean curvature, Gaussian curvature, shape index, and curvedness, are computed for each network point by using a weighted average of its neighborhood points. Finally, correlation coefficients according to these metrics are computed, respectively, as the similarity measures between two 3D face models. Experiments on 3D facial models of different persons and on different 3D facial models of the same person were carried out and compared with a subjective face similarity study. The results show that the geodesic network plays an important role in 3D facial similarity measure. The similarity measure defined by the shape index is broadly consistent with human subjective evaluation, and it can measure 3D face similarity more objectively than the other indices.
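
    A minimal sketch of the final similarity step, assuming the geodesic network correspondence has already produced per-point values of the four curvature metrics for each face; the toy arrays below are invented stand-ins for real mesh data.

```python
import numpy as np
from scipy.stats import pearsonr

def facial_similarity(face_a, face_b):
    """Correlation-based similarity between two faces described by four
    curvature metrics (mean curvature, Gaussian curvature, shape index,
    curvedness) sampled at corresponding geodesic-network points.
    Inputs are (n_points x 4) arrays."""
    scores = {}
    for j, name in enumerate(["mean", "gaussian", "shape_index", "curvedness"]):
        r, _ = pearsonr(face_a[:, j], face_b[:, j])
        scores[name] = r
    return scores

rng = np.random.default_rng(3)
face_a = rng.standard_normal((300, 4))                  # 300 network points
face_b = face_a + 0.3 * rng.standard_normal((300, 4))   # a similar face
print({k: round(v, 2) for k, v in facial_similarity(face_a, face_b).items()})
```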

  10. Towards a Semantic E-Learning Theory by Using a Modelling Approach

    Science.gov (United States)

    Yli-Luoma, Pertti V. J.; Naeve, Ambjorn

    2006-01-01

    In the present study, a semantic perspective on e-learning theory is advanced and a modelling approach is used. This modelling approach towards the new learning theory is based on the four SECI phases of knowledge conversion: Socialisation, Externalisation, Combination and Internalisation, introduced by Nonaka in 1994, and involving two levels of…

  11. Mathematical models for therapeutic approaches to control HIV disease transmission

    CERN Document Server

    Roy, Priti Kumar

    2015-01-01

    The book discusses different therapeutic approaches based on different mathematical models to control HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to optimal control strategies with perfect drug adherence, and also tries to examine the same issue from different angles, ranging from various mathematical models to computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models of HIV/AIDS. It also guides research scientists working in the periphery of mathematical modeling, helping them to explore a hypothetical method by examining its consequences in the form of a mathematical model and making scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...

  12. A security modeling approach for web-service-based business processes

    DEFF Research Database (Denmark)

    Jensen, Meiko; Feja, Sven

    2009-01-01

    The rising need for security in SOA applications requires better support for management of non-functional properties in web-based business processes. Here, the model-driven approach may provide valuable benefits in terms of maintainability and deployment. Apart from modeling the pure functionality of a process, the consideration of security properties at the level of a process model is a promising approach. In this work-in-progress paper we present an extension to the ARIS SOA Architect that is capable of modeling security requirements as a separate security model view. Further we provide a transformation that automatically derives WS-SecurityPolicy-conformant security policies from the process model, which in conjunction with the generated WS-BPEL processes and WSDL documents provides the ability to deploy and run the complete security-enhanced process based on Web Service technology.

  13. Query Language for Location-Based Services: A Model Checking Approach

    Science.gov (United States)

    Hoareau, Christian; Satoh, Ichiro

    We present a model checking approach to the rationale, implementation, and applications of a query language for location-based services. Such query mechanisms are necessary so that users, objects, and/or services can effectively benefit from the location-awareness of their surrounding environment. The underlying data model is founded on a symbolic model of space organized in a tree structure. Once extended to a semantic model for modal logic, we regard location query processing as a model checking problem, and thus define location queries as hybrid logic-based formulas. Our approach is unique among existing research because it explores the connection between location models and query processing in ubiquitous computing systems, relies on a sound theoretical basis, and provides modal logic-based query mechanisms for expressive searches over a decentralized data structure. A prototype implementation is also presented and will be discussed.

  14. Box-wing model approach for solar radiation pressure modelling in a multi-GNSS scenario

    Science.gov (United States)

    Tobias, Guillermo; Jesús García, Adrián

    2016-04-01

    The solar radiation pressure force is the largest orbital perturbation after the gravitational effects and the major error source affecting GNSS satellites. A wide range of approaches have been developed over the years for the modelling of this non-gravitational effect as part of the orbit determination process. These approaches are commonly divided into empirical, semi-analytical and analytical, where their main difference lies in the amount of a-priori physical knowledge about the properties of the satellites (materials and geometry) and their attitude. It has been shown in the past that pre-launch analytical models fail to achieve the desired accuracy, mainly due to difficulties in extrapolating the in-orbit optical and thermal properties, perturbations in the nominal attitude law and the aging of the satellites' surfaces, whereas empirical models' accuracies strongly depend on the amount of tracking data used for deriving the models, and their performance degrades as the area-to-mass ratio of the GNSS satellites increases, as happens for upcoming constellations such as BeiDou and Galileo. This paper proposes to use a basic box-wing model for Galileo, complemented with empirical parameters, based on the limited available information about the Galileo satellites' geometry. The satellite is modelled as a box, representing the satellite bus, and a wing, representing the solar panel. The performance of the model is assessed for the GPS, GLONASS and Galileo constellations. The results of the proposed approach have been analyzed over a one-year period. In order to assess the results, two different SRP models have been used: firstly, the proposed box-wing model and, secondly, the new CODE empirical model, ECOM2. The orbit performance of both models is assessed using Satellite Laser Ranging (SLR) measurements, together with an evaluation of the orbit prediction accuracy. This comparison shows the advantages and disadvantages of
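
    A deliberately crude sketch of a box-wing SRP acceleration, ignoring incidence angles and the absorption/reflection decomposition that a real implementation needs; the areas, mass and radiation-pressure coefficients are placeholders, not Galileo metadata.

```python
import numpy as np

AU = 149_597_870_700.0   # m
P_SUN = 4.56e-6          # N/m^2, solar radiation pressure at 1 AU

def box_wing_srp(sun_unit, area_box, area_wing, mass,
                 cr_box=1.2, cr_wing=1.3, r_sun=AU):
    """Very simplified box-wing SRP acceleration: the solar panel ('wing')
    is assumed to face the Sun, and the bus ('box') face exposed to the Sun
    is lumped into one effective radiation-pressure coefficient."""
    scale = P_SUN * (AU / r_sun) ** 2 / mass
    a_wing = -scale * cr_wing * area_wing * sun_unit  # pushes away from Sun
    a_box = -scale * cr_box * area_box * sun_unit
    return a_wing + a_box

sun_unit = np.array([1.0, 0.0, 0.0])  # unit vector, satellite -> Sun
print(box_wing_srp(sun_unit, area_box=3.0, area_wing=11.0, mass=700.0))
```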

  15. Application of the relative activity factor approach in scaling from heterologously expressed cytochromes p450 to human liver microsomes: studies on amitriptyline as a model substrate.

    Science.gov (United States)

    Venkatakrishnan, K; von Moltke, L L; Greenblatt, D J

    2001-04-01

    The relative activity factor (RAF) approach is being increasingly used in the quantitative phenotyping of multienzyme drug biotransformations. Using lymphoblast-expressed cytochromes P450 (CYPs) and the tricyclic antidepressant amitriptyline as a model substrate, we have tested the hypothesis that the human liver microsomal rates of a biotransformation mediated by multiple CYP isoforms can be mathematically reconstructed from the rates of the biotransformation catalyzed by individual recombinant CYPs using the RAF approach, and that the RAF approach can be used for the in vitro-in vivo scaling of pharmacokinetic clearance from in vitro intrinsic clearance measurements in heterologous expression systems. In addition, we have compared the results of two widely used methods of quantitative reaction phenotyping, namely, chemical inhibition studies and the prediction of relative contributions of individual CYP isoforms using the RAF approach. For the pathways of N-demethylation (mediated by CYPs 1A2, 2B6, 2C8, 2C9, 2C19, 2D6, and 3A4) and E-10 hydroxylation (mediated by CYPs 2B6, 2D6, and 3A4), the model-predicted biotransformation rates in microsomes from a panel of 12 human livers, determined from enzyme kinetic parameters of the recombinant CYPs, were similar to, and correlated with, the observed rates. The model-predicted clearance via N-demethylation was 53% lower than the previously reported in vivo pharmacokinetic estimates. Model-predicted relative contributions of individual CYP isoforms to the net biotransformation rate were similar to, and correlated with, the fractional decrement in human liver microsomal reaction rates by chemical inhibitors of the respective CYPs, provided the chemical inhibitors used were specific to their target CYP isoforms.
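
    In outline, the RAF approach scales recombinant-enzyme rates to the microsomal level using isoform-specific activity ratios measured with probe substrates, then reconstructs the net microsomal rate as a RAF-weighted sum. A minimal sketch with invented numbers:

      # RAF_i = probe-substrate rate in human liver microsomes (HLM)
      #         / probe-substrate rate in recombinant CYP_i.
      probe_rate_hlm = {"CYP2D6": 120.0, "CYP3A4": 300.0}   # pmol/min/mg protein
      probe_rate_rcyp = {"CYP2D6": 40.0, "CYP3A4": 60.0}    # pmol/min/pmol CYP

      raf = {cyp: probe_rate_hlm[cyp] / probe_rate_rcyp[cyp] for cyp in probe_rate_hlm}

      # Recombinant-CYP rates for the drug of interest at one concentration:
      drug_rate_rcyp = {"CYP2D6": 5.0, "CYP3A4": 12.0}

      # Reconstructed microsomal rate and predicted relative contributions:
      predicted_hlm_rate = sum(raf[c] * drug_rate_rcyp[c] for c in raf)
      fractional = {c: raf[c] * drug_rate_rcyp[c] / predicted_hlm_rate for c in raf}

      print(predicted_hlm_rate)
      print(fractional)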

  16. A computational approach to compare regression modelling strategies in prediction research.

    Science.gov (United States)

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fitting, assessing and adjusting a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set, and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9% to 94.9%, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
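
    The wrapper idea, comparing candidate strategies by their out-of-sample likelihood on the development data before committing to one, can be sketched as follows; the two strategies shown (an effectively unpenalized fit versus a ridge-shrunken fit) are illustrative stand-ins for the five in the paper, and the data are synthetic:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      # Synthetic development data.
      rng = np.random.default_rng(0)
      n, p = 200, 10
      X = rng.normal(size=(n, p))
      beta = rng.normal(scale=0.5, size=p)
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta)))

      strategies = {
          "no shrinkage": LogisticRegression(C=1e6, max_iter=1000),
          "ridge shrinkage": LogisticRegression(C=1.0, max_iter=1000),
      }

      for name, model in strategies.items():
          # neg_log_loss = mean negative log-likelihood per observation;
          # higher (closer to zero) is better.
          ll = cross_val_score(model, X, y, cv=5, scoring="neg_log_loss").mean()
          print(f"{name}: mean CV log-likelihood = {ll:.4f}")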

  17. A Bootstrap Approach of Benchmarking Organizational Maturity Model of Software Product With Educational Maturity Model

    OpenAIRE

    R.Manjula; J.Vaideeswaran

    2012-01-01

    Software product line engineering is an inter-disciplinary concept. It spans the dimensions of business, architecture, process, and the organization. Similarly, education system engineering is an inter-disciplinary concept, spanning the dimensions of academics, infrastructure, facilities, administration etc. Some of the potential benefits of this approach include continuous improvements in system quality and adherence to global standards. The increasing competency in IT and Educati...

  18. Application of the principle of similarity to fluid mechanics

    International Nuclear Information System (INIS)

    Hendricks, R.C.; Sengers, J.V.

    1979-01-01

    Possible applications of the principle of similarity to fluid mechanics are described and illustrated. In correlating thermophysical properties of fluids, the similarity principle transcends the traditional corresponding-states principle. In fluid mechanics the similarity principle is useful in correlating flow processes that can be modeled adequately with one independent variable (i.e., one-dimensional flows). In this paper we explore the concept of transforming the conservation equations by combining similarity principles for thermophysical properties with those for fluid flow. We illustrate the usefulness of the procedure by applying such a transformation to calculate two-phase critical mass flow through a nozzle.
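
    As background for the property-correlation side of this idea, the classical corresponding-states scaling can be written compactly (a generic sketch, not the paper's specific transformation):

      % Reduced variables: properties scaled by their critical-point values,
      % so that different fluids collapse onto a common dimensionless surface.
      \[
        T_r = \frac{T}{T_c}, \qquad
        p_r = \frac{p}{p_c}, \qquad
        \rho_r = \frac{\rho}{\rho_c}, \qquad
        Z = \frac{p}{\rho R T} = f(T_r, \rho_r).
      \]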

  19. Numerical modeling of hydrodynamics and sediment transport—an integrated approach

    Science.gov (United States)

    Gic-Grusza, Gabriela; Dudkowska, Aleksandra

    2017-10-01

    Point measurement-based estimation of bedload transport in the coastal zone is very difficult. The only way to assess the magnitude and direction of bedload transport over larger areas, particularly those characterized by complex bottom topography and hydrodynamics, is to use a holistic approach. This requires modeling of waves, currents, the critical bed shear stress, and the bedload transport magnitude, with due consideration of the realistic bathymetry and the distribution of surface sediment types. Such a holistic approach is presented in this paper, which describes modeling of bedload transport in the Gulf of Gdańsk. Extreme storm conditions, defined on the basis of 138 years of NOAA data, were assumed. The SWAN model (Booij et al. 1999) was used to define wind-wave fields, wave-induced currents were calculated using the Kołodko and Gic-Grusza (2015) model, and the magnitude of bedload transport was estimated using the modified Meyer-Peter and Müller (1948) formula. The calculations were performed using a GIS model. The results obtained are novel, and the approach presented appears to be a valuable source of information on bedload transport in the coastal zone.
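
    For reference, the classical (unmodified) Meyer-Peter and Müller relation behind the transport step can be sketched in a few lines; the threshold, sediment and forcing values below are illustrative, and the study's modified variant is not reproduced here:

      import numpy as np

      RHO_W, RHO_S, G = 1025.0, 2650.0, 9.81   # kg/m^3, kg/m^3, m/s^2
      THETA_CR = 0.047                          # critical Shields parameter (MPM)

      def mpm_bedload(tau_b, d50):
          """Volumetric bedload transport rate per unit width, m^2/s."""
          theta = tau_b / ((RHO_S - RHO_W) * G * d50)   # Shields parameter
          if theta <= THETA_CR:
              return 0.0                                # below motion threshold
          phi = 8.0 * (theta - THETA_CR) ** 1.5         # dimensionless transport
          s = RHO_S / RHO_W                             # relative sediment density
          return phi * np.sqrt((s - 1.0) * G * d50**3)

      # Storm-level bed shear stress (Pa) acting on fine sand (m):
      print(mpm_bedload(tau_b=2.5, d50=0.25e-3))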

  20. A multiscale modeling approach for biomolecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Bowling, Alan, E-mail: bowling@uta.edu; Haghshenas-Jaryani, Mahdi, E-mail: mahdi.haghshenasjaryani@mavs.uta.edu [The University of Texas at Arlington, Department of Mechanical and Aerospace Engineering (United States)

    2015-04-15

    This paper presents a new multiscale molecular dynamics model for investigating the effects of external interactions, such as contact and impact, during stepping and docking of motor proteins and other biomolecular systems. The model retains the mass properties, ensuring that the result satisfies Newton’s second law. This idea is presented using a simple particle model to facilitate discussion of the rigid body model; the particle model does, however, provide insights into particle dynamics at the nanoscale. The resulting three-dimensional model predicts a significant decrease in the effect of the random forces associated with Brownian motion. This conclusion runs contrary to the widely accepted notion that the motor protein’s movements are primarily the result of thermal effects. This work focuses on the mechanical aspects of protein locomotion; the effect of ATP hydrolysis is estimated as internal forces acting on the mechanical model. In addition, the proposed model can be numerically integrated in a reasonable amount of time. Herein, the differences between the motion predicted by the old and new modeling approaches are compared using a simplified model of myosin V.
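
    To make the particle-level setting concrete, a standard inertial Langevin integration (not the authors' multiscale scheme; all parameter values are invented) keeps Newton's second law while adding viscous drag and the Brownian random force:

      import numpy as np

      KB_T = 4.11e-21     # thermal energy at ~298 K, J
      mass = 1.0e-21      # particle mass, kg (illustrative)
      gamma = 1.0e-11     # drag coefficient, kg/s (illustrative)
      dt = 1.0e-12        # time step, s

      rng = np.random.default_rng(1)
      # Fluctuation-dissipation: std. dev. of the random force per step.
      sigma = np.sqrt(2.0 * gamma * KB_T / dt)

      x, v = 0.0, 0.0
      for _ in range(10_000):
          f_random = sigma * rng.normal()
          a = (f_random - gamma * v) / mass   # F = m a, with drag + noise
          v += a * dt
          x += v * dt

      print(f"displacement after 10 ns: {x:.3e} m")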