WorldWideScience

Sample records for modelling approach similar

  1. Bianchi VI0 and III models: self-similar approach

    International Nuclear Information System (INIS)

    Belinchon, Jose Antonio

    2009-01-01

We study several cosmological models with Bianchi VI0 and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive for the equation of state. We also study a perfect fluid model with time-varying constants, G and Λ. As in other studied models, we find that the behaviour of G and Λ are related: if G behaves as a growing time function then Λ is a positive decreasing time function, but if G is decreasing then Λ0 is negative. We end by studying a massive cosmic string model, putting special emphasis on calculating the numerical values of the equations of state. We show that there is no SS (self-similar) solution for a string model with time-varying constants.
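    As background, the self-similar (SS) approach referred to here is usually formalized through the existence of a homothetic vector field; the statement below is the standard definition of self-similarity of the first kind and is not quoted from the abstract itself.

```latex
% Self-similarity of the first kind: the metric g_{ab} admits a homothetic
% vector field \xi whose Lie derivative rescales the metric by a constant.
\mathcal{L}_{\xi}\, g_{ab} = 2\, g_{ab}
```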

  2. A Model-Based Approach to Constructing Music Similarity Functions

    Science.gov (United States)

    West, Kris; Lamere, Paul

    2006-12-01

    Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.
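    The projection step described above can be illustrated with a small sketch: train a classifier on cultural labels (genre here), use its class-probability outputs as the intermediate space, and measure distances there. This is only a schematic reconstruction with synthetic features and scikit-learn, not the authors' implementation.

```python
# Sketch of a "model-based" music similarity function: project audio features
# into a class-probability space learned from cultural labels, then measure
# distances there. Synthetic data stands in for real spectral/timbre features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import euclidean_distances

rng = np.random.default_rng(0)
n_tracks, n_features, n_genres = 300, 20, 4

X = rng.normal(size=(n_tracks, n_features))        # placeholder spectral/timbre features
genres = rng.integers(0, n_genres, size=n_tracks)  # placeholder cultural labels

clf = LogisticRegression(max_iter=1000).fit(X, genres)

def model_based_distances(features):
    """Distances in the intermediate (class-probability) space:
    smaller distance = more culturally similar."""
    anchor_space = clf.predict_proba(features)      # projection step
    return euclidean_distances(anchor_space)

print(np.round(model_based_distances(X[:5]), 3))
```

    Replacing the synthetic features with real spectral or rhythmic descriptors, and the labels with genre, style, or emotion tags, gives the kind of culturally informed similarity function the abstract describes.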

  3. A Model-Based Approach to Constructing Music Similarity Functions

    Directory of Open Access Journals (Sweden)

    Lamere Paul

    2007-01-01

    Full Text Available Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.

  4. Bianchi VI0 and III models: self-similar approach

    Energy Technology Data Exchange (ETDEWEB)

    Belinchon, Jose Antonio, E-mail: abelcal@ciccp.e [Departamento de Fisica, ETS Arquitectura, UPM, Av. Juan de Herrera 4, Madrid 28040 (Spain)

    2009-09-07

We study several cosmological models with Bianchi VI0 and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive for the equation of state. We also study a perfect fluid model with time-varying constants, G and Λ. As in other studied models, we find that the behaviour of G and Λ are related: if G behaves as a growing time function then Λ is a positive decreasing time function, but if G is decreasing then Λ0 is negative. We end by studying a massive cosmic string model, putting special emphasis on calculating the numerical values of the equations of state. We show that there is no SS (self-similar) solution for a string model with time-varying constants.

  5. Similarity and accuracy of mental models formed during nursing handovers: A concept mapping approach.

    Science.gov (United States)

    Drach-Zahavy, Anat; Broyer, Chaya; Dagan, Efrat

    2017-09-01

Shared mental models are crucial for constructing mutual understanding of the patient's condition during a clinical handover. Yet scant research, if any, has empirically explored the mental models of the parties involved in a clinical handover. This study aimed to examine the similarities among mental models of incoming and outgoing nurses, and to test their accuracy by comparing them with mental models of expert nurses. A cross-sectional study, exploring nurses' mental models via the concept mapping technique, covered 40 clinical handovers. Data were collected via concept mapping of the incoming, outgoing, and expert nurses' mental models (a total of 120 concept maps). Similarity and accuracy indexes for concepts and associations were calculated to compare the different maps. About one fifth of the concepts emerged in both outgoing and incoming nurses' concept maps (concept similarity=23%±10.6). Concept accuracy indexes were 35%±18.8 for incoming and 62%±19.6 for outgoing nurses' maps. Although incoming nurses absorbed fewer concepts and associations (23% and 12%, respectively), they partially closed the gap (35% and 22%, respectively) relative to expert nurses' maps. The correlations between concept similarity and both incoming and outgoing nurses' concept accuracy were significant (r=0.43, p<0.01; r=0.68, p<0.01, respectively). Finally, in 90% of the maps, outgoing nurses added information concerning the processes enacted during the shift, beyond the expert nurses' gold standard. Two seemingly contradicting processes in the handover were identified: "information loss", captured by the low similarity indexes between the mental models of incoming and outgoing nurses, and "information restoration", reflected in the accuracy indexes of the incoming nurses' mental models. Based on mental model theory, we propose possible explanations for these processes and derive implications for how to improve a clinical handover.
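    The concept-similarity index reported above can be illustrated with a toy calculation; the exact index definition used in the study may differ, so the overlap measure below is only an assumption.

```python
# Toy illustration of a concept-similarity index between two concept maps,
# assumed here to be the share of the union of concepts that appears in both maps.
outgoing = {"pain level", "IV fluids", "blood pressure", "family visit", "mobility"}
incoming = {"pain level", "blood pressure", "lab results"}

shared = outgoing & incoming
similarity = 100 * len(shared) / len(outgoing | incoming)
print(f"shared concepts: {sorted(shared)}, similarity = {similarity:.0f}%")
```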

  6. Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models

    Directory of Open Access Journals (Sweden)

    Jin Dai

    2014-01-01

Full Text Available The similarity between objects is a core research area of data mining. In order to reduce the interference of the uncertainty of natural language, a similarity measurement between normal cloud models is adopted for text classification research. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed. It can efficiently accomplish conversion between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the VSM model, the qualitative text concepts extracted from the same category are jumped up into a whole category concept. According to the cloud similarity between the test text and each category concept, the test text is assigned to the most similar category. Comparison among different text classifiers on different feature selection sets shows that CCJU-TC not only adapts well to different text features, but also outperforms the traditional classifiers in classification performance.

  7. Mathematical evaluation of similarity factor using various weighing approaches on aceclofenac marketed formulations by model-independent method.

    Science.gov (United States)

    Soni, T G; Desai, J U; Nagda, C D; Gandhi, T R; Chotai, N P

    2008-01-01

The US Food and Drug Administration's (FDA's) guidance for industry on dissolution testing of immediate-release solid oral dosage forms describes that drug dissolution may be the rate-limiting step for drug absorption in the case of low solubility/high permeability drugs (BCS class II drugs). US FDA guidance describes the model-independent mathematical approach proposed by Moore and Flanner for calculating a similarity factor (f2) of dissolution across a suitable time interval. In the present study, the similarity factor was calculated on dissolution data of two marketed aceclofenac tablets (a BCS class II drug) using various weighing approaches proposed by Gohel et al. The proposed approaches were compared with a conventional approach (W = 1). On the basis of consideration of variability, preference is given in the order approach 3 > approach 2 > approach 1, as approach 3 considers batch-to-batch as well as within-samples variability and shows the best similarity profile. Approach 2 considers batch-to-batch variability with higher specificity than approach 1.
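    The Moore and Flanner similarity factor mentioned above has a standard closed form; below is a short sketch of the conventional unweighted case (W = 1), with illustrative dissolution values rather than the study's data.

```python
# Similarity factor f2 (Moore & Flanner), conventional unweighted form:
# f2 = 50 * log10( 100 / sqrt(1 + mean((R_t - T_t)^2)) )
import numpy as np

def f2(reference, test):
    reference, test = np.asarray(reference, float), np.asarray(test, float)
    msd = np.mean((reference - test) ** 2)          # mean squared difference
    return 50 * np.log10(100 / np.sqrt(1 + msd))

# Illustrative % dissolved at common time points (not the study's data)
ref  = [25, 45, 65, 80, 90, 95]
test = [22, 42, 63, 79, 88, 94]
print(round(f2(ref, test), 1))   # f2 >= 50 is usually read as "similar" profiles
```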

  8. Similarity-based multi-model ensemble approach for 1-15-day advance prediction of monsoon rainfall over India

    Science.gov (United States)

    Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati

    2018-04-01

The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on a similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. This approach is based on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in the conventional MME approaches. In this approach, the training dataset is selected by matching the present-day conditions to the archived dataset; the days with the most similar conditions are identified and used for training the model. The coefficients thus generated are used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. the European Centre for Medium-Range Weather Forecasts (ECMWF), the United Kingdom Meteorological Office (UKMO), the National Centre for Environment Prediction (NCEP) and the China Meteorological Administration (CMA), have been used for developing the SMME forecasts. The forecasts for 1-5, 6-10 and 11-15 days were generated using the newly developed approach for each pentad of June-September during the years 2008-2013, and the skill of the model was analysed using verification scores, viz. equitable skill score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and the Nash-Sutcliffe model efficiency index. Statistical analysis of the SMME forecasts shows superior forecast skill compared to the conventional MME and the individual models for all the ranges, viz. 1-5, 6-10 and 11-15 days.
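    The core of the similarity-based training described above can be sketched as: find archived days whose model forecasts most resemble today's, and fit ensemble weights only on those days. The synthetic data, the linear ensemble, and the choice of 50 neighbours below are illustrative assumptions, not the authors' configuration.

```python
# Sketch of a similarity-based multi-model ensemble (SMME):
# train regression weights only on archived days most similar to today.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_days, n_models = 500, 4

# Archived GCM rainfall forecasts and matching observations (synthetic)
archive_fcst = rng.gamma(2.0, 5.0, size=(n_days, n_models))
archive_obs = archive_fcst @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(0, 2, n_days)

today_fcst = rng.gamma(2.0, 5.0, size=n_models)

# 1. Similarity criterion: distance between today's forecasts and each archived day
dist = np.linalg.norm(archive_fcst - today_fcst, axis=1)
similar_days = np.argsort(dist)[:50]              # the 50 most similar archived days

# 2. Fit ensemble coefficients on the similar days only (instead of sequential training)
ens = LinearRegression().fit(archive_fcst[similar_days], archive_obs[similar_days])

# 3. SMME rainfall prediction for today
print(round(float(ens.predict(today_fcst[None, :])[0]), 2))
```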

  9. SIMILARITIES BETWEEN THE KNOWLEDGE CREATION AND CONVERSION MODEL AND THE COMPETING VALUES FRAMEWORK: AN INTEGRATIVE APPROACH

    Directory of Open Access Journals (Sweden)

    PAULO COSTA

    2016-12-01

Full Text Available ABSTRACT Contemporaneously, and with the successive paradigmatic revolutions inherent to management since the XVII century, we are witnessing a new era marked by the structural rupture in the way organizations are perceived. Market globalization, cemented by quick technological evolutions and associated with economic, cultural, political and social transformations, characterizes a reality where uncertainty is the only certainty for organizations and managers. Knowledge management has been interpreted by managers and academics as a viable alternative in a logic of creation and conversation of sustainable competitive advantages. However, there are several barriers to the implementation and development of knowledge management programs in organizations, with organizational culture being one of the most preponderant. In this sense, in this article we will analyze and compare the Knowledge Creation and Conversion Model proposed by Nonaka and Takeuchi (1995) and Quinn and Rohrbaugh's Competing Values Framework (1983), since both have convergent conceptual lines that can assist managers in different sectors to guide their organization in a perspective of productivity, quality and market competitiveness.

  10. Modeling of similar economies

    Directory of Open Access Journals (Sweden)

    Sergey B. Kuznetsov

    2017-06-01

Full Text Available Objective: to obtain dimensionless criteria (economic indices) characterizing the national economy and not depending on its size. Methods: mathematical modeling, theory of dimensions, processing of statistical data. Results: basing on differential equations describing the national economy with account of economic environment resistance, two dimensionless criteria are obtained which allow comparing economies regardless of their sizes. Using the theory of dimensions, we show that the obtained indices are not accidental. We demonstrate the application of the obtained dimensionless criteria to the analysis of the behavior of certain countries' economies. Scientific novelty: dimensionless criteria (economic indices) are obtained which allow comparing economies regardless of their sizes and analyzing the dynamic changes in the economies with time. Practical significance: the obtained results can be used for dynamic and comparative analysis of different countries' economies regardless of their sizes.

  11. Self-similar cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Chao, W Z [Cambridge Univ. (UK). Dept. of Applied Mathematics and Theoretical Physics

    1981-07-01

    The kinematics and dynamics of self-similar cosmological models are discussed. The degrees of freedom of the solutions of Einstein's equations for different types of models are listed. The relation between kinematic quantities and the classifications of the self-similarity group is examined. All dust local rotational symmetry models have been found.

  12. Notions of similarity for systems biology models.

    Science.gov (United States)

    Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knüpfer, Christian; Liebermeister, Wolfram; Waltemath, Dagmar

    2018-01-01

Systems biology models are rapidly increasing in complexity, size and numbers. When building large models, researchers rely on software tools for the retrieval, comparison, combination and merging of models, as well as for version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of 'similarity' may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here we survey existing methods for the comparison of models, introduce quantitative measures for model similarity, and discuss potential applications of combined similarity measures. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on a combination of different model aspects. The six aspects that we define as potentially relevant for similarity are underlying encoding, references to biological entities, quantitative behaviour, qualitative behaviour, mathematical equations and parameters, and network structure. We argue that future similarity measures will benefit from combining these model aspects in flexible, problem-specific ways to mimic users' intuition about model similarity, and to support complex model searches in databases. © The Author 2016. Published by Oxford University Press.
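    A minimal sketch of the proposed combination of aspect similarities, as a weighted average; the aspect names follow the abstract, while the weights and scores are invented for illustration.

```python
# Sketch: combine per-aspect similarity scores (each in [0, 1]) into one
# problem-specific model similarity, as a weighted average.
aspects = ["encoding", "biological entities", "quantitative behaviour",
           "qualitative behaviour", "equations and parameters", "network structure"]

def combined_similarity(scores, weights):
    """Weighted average of aspect similarities; the weights encode the use case."""
    total = sum(weights.values())
    return sum(weights[a] * scores[a] for a in scores) / total

scores  = {a: s for a, s in zip(aspects, [0.9, 0.6, 0.3, 0.7, 0.5, 0.8])}  # invented
weights = {a: w for a, w in zip(aspects, [0.5, 2.0, 1.0, 1.0, 1.5, 2.0])}  # use-case specific
print(round(combined_similarity(scores, weights), 3))
```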

  13. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  14. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar; Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knuepfer, Christian; Liebermeister, Wolfram

    2016-01-01

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  15. A COMPARISON OF SEMANTIC SIMILARITY MODELS IN EVALUATING CONCEPT SIMILARITY

    Directory of Open Access Journals (Sweden)

    Q. X. Xu

    2012-08-01

Full Text Available The semantic similarities are important in concept definition, recognition, categorization, interpretation, and integration. Many semantic similarity models have been established to evaluate semantic similarities of objects or/and concepts. To find out the suitability and performance of different models in evaluating concept similarities, we compare four main types of models in this paper: the geometric model, the feature model, the network model, and the transformational model. Fundamental principles and main characteristics of these models are first introduced and compared. Land use and land cover concepts of NLCD92 are employed as examples in the case study. The results demonstrate that correlations between these models are very high, possibly because all these models are designed to simulate the similarity judgement of the human mind.

  16. Similarity and Modeling in Science and Engineering

    CERN Document Server

    Kuneš, Josef

    2012-01-01

The present text sets itself in relief to other titles on the subject in that it addresses the means and methodologies versus a narrow specific-task oriented approach. Concepts and their developments which evolved to meet the changing needs of applications are addressed. This approach provides the reader with a general tool-box to apply to their specific needs. Two important tools are presented: dimensional analysis and the similarity analysis methods. The fundamental point of view, enabling one to sort all models, is that of information flux between a model and an original, expressed by similarity and abstraction. Each chapter includes original examples and applications. In this respect, the models can be divided into several groups. The following models are dealt with separately by chapter: mathematical and physical models, physical analogues, deterministic, stochastic, and cybernetic computer models. The mathematical models are divided into asymptotic and phenomenological models. The phenomenological m...

  17. Lagrangian-similarity diffusion-deposition model

    International Nuclear Information System (INIS)

    Horst, T.W.

    1979-01-01

    A Lagrangian-similarity diffusion model has been incorporated into the surface-depletion deposition model. This model predicts vertical concentration profiles far downwind of the source that agree with those of a one-dimensional gradient-transfer model

  18. On self-similar Tolman models

    International Nuclear Information System (INIS)

    Maharaj, S.D.

    1988-01-01

    The self-similar spherically symmetric solutions of the Einstein field equation for the case of dust are identified. These form a subclass of the Tolman models. These self-similar models contain the solution recently presented by Chi [J. Math. Phys. 28, 1539 (1987)], thereby refuting the claim of having found a new solution to the Einstein field equations

  19. A Novel Hybrid Similarity Calculation Model

    Directory of Open Access Journals (Sweden)

    Xiaoping Fan

    2017-01-01

Full Text Available This paper addresses the problems of similarity calculation in the traditional recommendation algorithms of nearest-neighbor collaborative filtering, especially their failure in describing dynamic user preference. Proceeding from the perspective of solving the problem of user interest drift, a new hybrid similarity calculation model is proposed in this paper. This model consists of two parts: on the one hand, the model uses function fitting to describe users' rating behaviors and their rating preferences; on the other hand, it employs the Random Forest algorithm to take user attribute features into account. Furthermore, the paper combines the two parts to build a new hybrid similarity calculation model for user recommendation. Experimental results show that, for data sets of different size, the model's prediction precision is higher than that of the traditional recommendation algorithms.

  20. Similar words analysis based on POS-CBOW language model

    Directory of Open Access Journals (Sweden)

    Dongru RUAN

    2015-10-01

Full Text Available Similar words analysis is one of the important aspects in the field of natural language processing, and it has important research and application values in text classification, machine translation and information recommendation. Focusing on the features of Sina Weibo's short texts, this paper presents a language model named POS-CBOW, which is a kind of continuous bag-of-words language model with a filtering layer and a part-of-speech tagging layer. The proposed approach can adjust the word vectors' similarity according to the cosine similarity and the word vectors' part-of-speech metrics. It can also filter the similar word set on the basis of the statistical analysis model. The experimental results show that the similar words analysis algorithm based on the proposed POS-CBOW language model is better than that based on the traditional CBOW language model.
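    The part-of-speech adjustment described above can be sketched as follows; the penalty factor and the toy vectors are assumptions, not the POS-CBOW model itself.

```python
# Sketch: cosine similarity between word vectors, down-weighted when the
# parts of speech disagree (toy stand-in for the POS-CBOW adjustment).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def pos_adjusted_similarity(u, pos_u, v, pos_v, penalty=0.5):
    base = cosine(u, v)
    return base if pos_u == pos_v else penalty * base

u = np.array([0.2, 0.9, 0.1]); v = np.array([0.25, 0.85, 0.05])   # toy embeddings
print(round(pos_adjusted_similarity(u, "NOUN", v, "NOUN"), 3))
print(round(pos_adjusted_similarity(u, "NOUN", v, "VERB"), 3))
```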

  1. Modeling Timbre Similarity of Short Music Clips.

    Science.gov (United States)

    Siedenburg, Kai; Müllensiefen, Daniel

    2017-01-01

There is evidence from a number of recent studies that most listeners are able to extract information related to song identity, emotion, or genre from music excerpts with durations in the range of tenths of seconds. Because of these very short durations, timbre as a multifaceted auditory attribute appears as a plausible candidate for the type of features that listeners make use of when processing short music excerpts. However, the importance of timbre in listening tasks that involve short excerpts has not yet been demonstrated empirically. Hence, the goal of this study was to develop a method that makes it possible to explore to what degree similarity judgments of short music clips can be modeled with low-level acoustic features related to timbre. We utilized the similarity data from two large samples of participants: Sample I was obtained via an online survey, used 16 clips of 400 ms length, and contained responses of 137,339 participants. Sample II was collected in a lab environment, used 16 clips of 800 ms length, and contained responses from 648 participants. Our model used two sets of audio features which included commonly used timbre descriptors and the well-known Mel-frequency cepstral coefficients as well as their temporal derivatives. In order to predict pairwise similarities, the resulting distances between clips in terms of their audio features were used as predictor variables with partial least-squares regression. We found that a sparse selection of three to seven features from both descriptor sets (mainly encoding the coarse shape of the spectrum as well as spectrotemporal variability) best predicted similarities across the two sets of sounds. Notably, the inclusion of non-acoustic predictors of musical genre and record release date allowed much better generalization performance and explained up to 50% of shared variance (R2) between observations and model predictions. Overall, the results of this study empirically demonstrate that both acoustic features related
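    The modelling step (predicting pairwise similarity judgments from feature distances with partial least squares) can be sketched with synthetic data; the study itself used MFCCs and timbre descriptors extracted from the clips, not the random features below.

```python
# Sketch: predict pairwise similarity judgments from per-feature distances
# between clips using partial least-squares regression (synthetic data).
import numpy as np
from itertools import combinations
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
n_clips, n_audio_features = 16, 10
feats = rng.normal(size=(n_clips, n_audio_features))    # placeholder timbre descriptors

pairs = list(combinations(range(n_clips), 2))
X = np.array([np.abs(feats[i] - feats[j]) for i, j in pairs])   # per-feature distances
y = X[:, :3].sum(axis=1) + rng.normal(0, 0.1, len(pairs))       # synthetic "similarity" ratings

pls = PLSRegression(n_components=2).fit(X, y)
r2 = pls.score(X, y)                                            # shared variance, cf. R2 in the text
print(round(r2, 2))
```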

  2. Identifying Similarities in Cognitive Subtest Functional Requirements: An Empirical Approach

    Science.gov (United States)

    Frisby, Craig L.; Parkin, Jason R.

    2007-01-01

    In the cognitive test interpretation literature, a Rational/Intuitive, Indirect Empirical, or Combined approach is typically used to construct conceptual taxonomies of the functional (behavioral) similarities between subtests. To address shortcomings of these approaches, the functional requirements for 49 subtests from six individually…

  3. Phishing Detection: Analysis of Visual Similarity Based Approaches

    Directory of Open Access Journals (Sweden)

    Ankit Kumar Jain

    2017-01-01

Full Text Available Phishing is one of the major problems faced by the cyber-world and leads to financial losses for both industries and individuals. Detection of phishing attacks with high accuracy has always been a challenging issue. At present, visual similarity based techniques are very useful for detecting phishing websites efficiently. A phishing website looks very similar in appearance to its corresponding legitimate website to deceive users into believing that they are browsing the correct website. Visual similarity based phishing detection techniques utilise a feature set such as text content, text format, HTML tags, Cascading Style Sheets (CSS), images, and so forth, to make the decision. These approaches compare the suspicious website with the corresponding legitimate website by using various features, and if the similarity is greater than the predefined threshold value then it is declared phishing. This paper presents a comprehensive analysis of phishing attacks, their exploitation, some of the recent visual similarity based approaches for phishing detection, and a comparative study. Our survey provides a better understanding of the problem, the current solution space, and the scope of future research to deal with phishing attacks efficiently using visual similarity based approaches.
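    The threshold decision described above can be illustrated with a minimal sketch that compares two pages as plain feature sets via Jaccard similarity; real detectors use much richer features (text format, CSS, rendered images) and tuned thresholds.

```python
# Minimal sketch: declare "phishing" when a suspicious page is too similar
# to a protected legitimate page. Features here are just toy sets of
# HTML tags, resources and visible words.
def jaccard(a, b):
    return len(a & b) / len(a | b)

legitimate = {"form", "input", "logo.png", "Sign", "in", "to", "YourBank"}
suspicious = {"form", "input", "logo.png", "Sign", "in", "to", "YourBank", "verify"}

THRESHOLD = 0.8   # assumed predefined threshold
score = jaccard(legitimate, suspicious)
print(score, "phishing" if score > THRESHOLD else "benign")
```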

  4. Examining Similarity Structure: Multidimensional Scaling and Related Approaches in Neuroimaging

    Directory of Open Access Journals (Sweden)

    Svetlana V. Shinkareva

    2013-01-01

    Full Text Available This paper covers similarity analyses, a subset of multivariate pattern analysis techniques that are based on similarity spaces defined by multivariate patterns. These techniques offer several advantages and complement other methods for brain data analyses, as they allow for comparison of representational structure across individuals, brain regions, and data acquisition methods. Particular attention is paid to multidimensional scaling and related approaches that yield spatial representations or provide methods for characterizing individual differences. We highlight unique contributions of these methods by reviewing recent applications to functional magnetic resonance imaging data and emphasize areas of caution in applying and interpreting similarity analysis methods.

  5. A Quantum Approach to Subset-Sum and Similar Problems

    OpenAIRE

    Daskin, Ammar

    2017-01-01

In this paper, we study the subset-sum problem by using a quantum heuristic approach similar to the verification circuit of quantum Arthur-Merlin games. Under certain described assumptions, we show that the exact solution of the subset-sum problem may be obtained in polynomial time and that an exponential speed-up over the classical algorithms may be possible. We give a numerical example and discuss the complexity of the approach and its further application to the knapsack problem.

  6. Similarity search of business process models

    NARCIS (Netherlands)

    Dumas, M.; García-Bañuelos, L.; Dijkman, R.M.

    2009-01-01

    Similarity search is a general class of problems in which a given object, called a query object, is compared against a collection of objects in order to retrieve those that most closely resemble the query object. This paper reviews recent work on an instance of this class of problems, where the

  7. Measuring similarity between business process models

    NARCIS (Netherlands)

    Dongen, van B.F.; Dijkman, R.M.; Mendling, J.

    2007-01-01

    Quality aspects become increasingly important when business process modeling is used in a large-scale enterprise setting. In order to facilitate a storage without redundancy and an efficient retrieval of relevant process models in model databases it is required to develop a theoretical understanding

  8. Quasi-Similarity Model of Synthetic Jets

    Czech Academy of Sciences Publication Activity Database

    Tesař, Václav; Kordík, Jozef

    2009-01-01

Vol. 149, No. 2 (2009), pp. 255-265 ISSN 0924-4247 R&D Projects: GA AV ČR IAA200760705; GA ČR GA101/07/1499 Institutional research plan: CEZ:AV0Z20760514 Keywords : jets * synthetic jets * similarity solution Subject RIV: BK - Fluid Dynamics Impact factor: 1.674, year: 2009 http://www.sciencedirect.com

  9. Exploiting similarity in turbulent shear flows for turbulence modeling

    Science.gov (United States)

    Robinson, David F.; Harris, Julius E.; Hassan, H. A.

    1992-01-01

It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set and comparison with measured shear stress and velocity profiles yields the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for such an approach is that if we can correctly model the flow over a flat plate and its far wake, then we can have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence. This is because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.
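    For reference, the far-wake self-similarity invoked above is usually written in the following textbook form for a plane wake, which is what allows the governing equations to be reduced to total (ordinary) differential equations; the notation is standard and not taken from the paper itself.

```latex
% Self-similar plane far wake: the velocity defect collapses onto one profile f
% when scaled by the local defect scale u_s(x) and wake width l(x).
U_\infty - U(x,y) = u_s(x)\, f\!\left(\frac{y}{l(x)}\right),
\qquad u_s \propto x^{-1/2}, \qquad l \propto x^{1/2}
```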

  10. Exploiting similarity in turbulent shear flows for turbulence modeling

    Science.gov (United States)

    Robinson, David F.; Harris, Julius E.; Hassan, H. A.

    1992-12-01

It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set and comparison with measured shear stress and velocity profiles yields the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for such an approach is that if we can correctly model the flow over a flat plate and its far wake, then we can have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence. This is because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.

  11. Towards Modelling Variation in Music as Foundation for Similarity

    NARCIS (Netherlands)

    Volk, A.; de Haas, W.B.; van Kranenburg, P.; Cambouropoulos, E.; Tsougras, C.; Mavromatis, P.; Pastiadis, K.

    2012-01-01

    This paper investigates the concept of variation in music from the perspective of music similarity. Music similarity is a central concept in Music Information Retrieval (MIR), however there exists no comprehensive approach to music similarity yet. As a consequence, MIR faces the challenge on how to

  12. Vere-Jones' self-similar branching model

    International Nuclear Information System (INIS)

    Saichev, A.; Sornette, D.

    2005-01-01

Motivated by its potential application to earthquake statistics as well as by its intrinsic interest in the theory of branching processes, we study the exactly self-similar branching process introduced recently by Vere-Jones. This model extends the ETAS class of conditional self-excited branching point-processes of triggered seismicity by removing the problematic need for a minimum (as well as maximum) earthquake size. To make the theory convergent without the need for the usual ultraviolet and infrared cutoffs, the distribution of magnitudes m' of daughters of first generation of a mother of magnitude m has two branches, m' < m with exponent β − d and m' > m with exponent β + d, where β and d are two positive parameters. We investigate the condition and nature of the subcritical, critical, and supercritical regimes in this and in an extended version interpolating smoothly between several models. We predict that the distribution of magnitudes of events triggered by a mother of magnitude m over all generations also has two branches, m' < m with exponent β − h and m' > m with exponent β + h, with h = d√(1−s), where s is the fraction of triggered events. This corresponds to a renormalization of the exponent d into h by the hierarchy of successive generations of triggered events. For a significant part of the parameter space, the distribution of magnitudes over a full catalog summed over an average steady flow of spontaneous sources (immigrants) reproduces the distribution of the spontaneous sources with a single branch and is blind to the exponents β, d of the distribution of triggered events. Since the distribution of earthquake magnitudes is usually obtained with catalogs including many sequences, we conclude that the two branches of the distribution of aftershocks are not directly observable and the model is compatible with real seismic catalogs. In summary, the exactly self-similar Vere-Jones model provides an attractive new approach to model triggered seismicity, which alleviates delicate questions on the role of

  13. Similarity-Based Unification: A Multi-Adjoint Approach

    Czech Academy of Sciences Publication Activity Database

    Medina, J.; Ojeda-Aciego, M.; Vojtáš, Peter

    2004-01-01

Vol. 146, No. 1 (2004), pp. 43-62 ISSN 0165-0114 Source of funding: V - other public sources Keywords : similarity * fuzzy unification Subject RIV: BA - General Mathematics Impact factor: 0.734, year: 2004

  14. Similarity-based search of model organism, disease and drug effect phenotypes

    KAUST Repository

    Hoehndorf, Robert; Gruenberger, Michael; Gkoutos, Georgios V; Schofield, Paul N

    2015-01-01

    Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions

  15. Rio De Janeiro and Medellin: Similar Challenges, Different Approaches

    Science.gov (United States)

    2016-03-01

Model rejects traditional policing relationship between police and citizens to ensure that crime ... pedagogical strategies to encourage citizen participation, a culture of respect for life, legality, self-regulation, matters pertaining to the Government, and ...

  16. Similarity between neonatal profile and socioeconomic index: a spatial approach

    Directory of Open Access Journals (Sweden)

    d'Orsi Eleonora

    2005-01-01

    Full Text Available This study aims to compare neonatal characteristics and socioeconomic conditions in Rio de Janeiro city neighborhoods in order to identify priority areas for intervention. The study design was ecological. Two databases were used: the Brazilian Population Census and the Live Birth Information System, aggregated by neighborhoods. Spatial analysis, multivariate cluster classification, and Moran's I statistics for detection of spatial clustering were used. A similarity index was created to compare socioeconomic clusters with the neonatal profile in each neighborhood. The proportions of Apgar score above 8 and cesarean sections showed positive spatial correlation and high similarity with the socioeconomic index. The proportion of low birth weight infants showed a random spatial distribution, indicating that at this scale of analysis, birth weight is not sufficiently sensitive to discriminate subtler differences among population groups. The observed relationship between the neighborhoods' neonatal profile (particularly Apgar score and mode of delivery and socioeconomic conditions shows evidence of a change in infant health profile, where the possibility for intervention shifts to medical services and the Apgar score assumes growing significance as a risk indicator.

  17. Self-Similar Symmetry Model and Cosmic Microwave Background

    Directory of Open Access Journals (Sweden)

Tomohide Sonoda

    2016-05-01

Full Text Available In this paper, we present the self-similar symmetry (SSS) model that describes the hierarchical structure of the universe. The model is based on the concept of self-similarity, which explains the symmetry of the cosmic microwave background (CMB). The approximate length and time scales of the six hierarchies of the universe (grand unification, electroweak unification, the atom, the pulsar, the solar system, and the galactic system) are derived from the SSS model. In addition, the model implies that the electron mass and gravitational constant could vary with the CMB radiation temperature.

  18. Simulation and similarity using models to understand the world

    CERN Document Server

    Weisberg, Michael

    2013-01-01

    In the 1950s, John Reber convinced many Californians that the best way to solve the state's water shortage problem was to dam up the San Francisco Bay. Against massive political pressure, Reber's opponents persuaded lawmakers that doing so would lead to disaster. They did this not by empirical measurement alone, but also through the construction of a model. Simulation and Similarity explains why this was a good strategy while simultaneously providing an account of modeling and idealization in modern scientific practice. Michael Weisberg focuses on concrete, mathematical, and computational models in his consideration of the nature of models, the practice of modeling, and nature of the relationship between models and real-world phenomena. In addition to a careful analysis of physical, computational, and mathematical models, Simulation and Similarity offers a novel account of the model/world relationship. Breaking with the dominant tradition, which favors the analysis of this relation through logical notions suc...

  19. Embedding Term Similarity and Inverse Document Frequency into a Logical Model of Information Retrieval.

    Science.gov (United States)

    Losada, David E.; Barreiro, Alvaro

    2003-01-01

    Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…

  20. Inter Genre Similarity Modelling For Automatic Music Genre Classification

    OpenAIRE

    Bagci, Ulas; Erzin, Engin

    2009-01-01

Music genre classification is an essential tool for music information retrieval systems and it has been finding critical applications in various media platforms. Two important problems of the automatic music genre classification are feature extraction and classifier design. This paper investigates inter-genre similarity modelling (IGS) to improve the performance of automatic music genre classification. Inter-genre similarity information is extracted over the mis-classified feature population....

  1. Perception of similarity: a model for social network dynamics

    International Nuclear Information System (INIS)

    Javarone, Marco Alberto; Armano, Giuliano

    2013-01-01

    Some properties of social networks (e.g., the mixing patterns and the community structure) appear deeply influenced by the individual perception of people. In this work we map behaviors by considering similarity and popularity of people, also assuming that each person has his/her proper perception and interpretation of similarity. Although investigated in different ways (depending on the specific scientific framework), from a computational perspective similarity is typically calculated as a distance measure. In accordance with this view, to represent social network dynamics we developed an agent-based model on top of a hyperbolic space on which individual distance measures are calculated. Simulations, performed in accordance with the proposed model, generate small-world networks that exhibit a community structure. We deem this model to be valuable for analyzing the relevant properties of real social networks. (paper)

  2. Brazilian N2 laser similar to imported models

    International Nuclear Information System (INIS)

    Santos, P.A.M. dos; Tavares Junior, A.D.; Silva Reis, H. da; Tagliaferri, A.A.; Massone, C.A.

    1981-09-01

The development of a high-power N2 laser, similar to imported models but built entirely with Brazilian materials, is described. The prototype shows a pulse repetition rate that varies from 1 to 50 per second and has a peak power of 500 kW. (Author)

  3. Distributional and Knowledge-Based Approaches for Computing Portuguese Word Similarity

    Directory of Open Access Journals (Sweden)

    Hugo Gonçalo Oliveira

    2018-02-01

Full Text Available Identifying similar and related words is not only key in natural language understanding but also a suitable task for assessing the quality of computational resources that organise words and meanings of a language, compiled by different means. This paper, which aims to be a reference for those interested in computing word similarity in Portuguese, presents several approaches for this task and is motivated by the recent availability of state-of-the-art distributional models of Portuguese words, which add to several lexical knowledge bases (LKBs) for this language, available for a longer time. The previous resources were exploited to answer word similarity tests, which also became recently available for Portuguese. We conclude that there are several valid approaches for this task, but not one that outperforms all the others in every single test. Distributional models seem to capture relatedness better, while LKBs are better suited for computing genuine similarity, but, in general, better results are obtained when knowledge from different sources is combined.

  4. Morphological similarities between DBM and a microeconomic model of sprawl

    Science.gov (United States)

    Caruso, Geoffrey; Vuidel, Gilles; Cavailhès, Jean; Frankhauser, Pierre; Peeters, Dominique; Thomas, Isabelle

    2011-03-01

We present a model that simulates the growth of a metropolitan area on a 2D lattice. The model is dynamic and based on microeconomics. Households show preferences for nearby open spaces and neighbourhood density. They compete on the land market. They travel along a road network to access the CBD. A planner ensures the connectedness and maintenance of the road network. The spatial pattern of houses, green spaces and road network self-organises, emerging from the agents' individualistic decisions. We perform several simulations and vary residential preferences. Our results show morphologies and transition phases that are similar to Dielectric Breakdown Models (DBM). Such similarities were observed earlier by other authors, but we show here that they can be deduced from the functioning of the land market and thus explicitly connected to urban economic theory.

  5. Numerical study of similarity in prototype and model pumped turbines

    International Nuclear Information System (INIS)

    Li, Z J; Wang, Z W; Bi, H L

    2014-01-01

A similarity study of prototype and model pumped turbines is performed by numerical simulation, and the partial-discharge case is analysed in detail. It is found that in the RSI (rotor-stator interaction) region, where the flow is convectively accelerated with minor flow separation, a high level of similarity in flow patterns and pressure fluctuation appears, with the relative pressure fluctuation amplitude of the model turbine slightly higher than that of the prototype turbine. As for the condition in the runner, where the flow is convectively accelerated with severe separation, similarity fades substantially due to the different topology of flow separation and vortex formation brought about by the distinct Reynolds numbers of the two turbines. In the draft tube, where the flow is diffusively decelerated, similarity weakens owing to differences in vortex rope formation, also affected by the Reynolds number. It is noted that the pressure fluctuation amplitude and characteristic frequency of the model turbine are larger than those of the prototype turbine. The differences in pressure fluctuation characteristics are discussed theoretically through the dimensionless Navier-Stokes equation. The above conclusions are all based on simulation, without regard to the penstock response and resonance.

  6. a Two-Step Classification Approach to Distinguishing Similar Objects in Mobile LIDAR Point Clouds

    Science.gov (United States)

    He, H.; Khoshelham, K.; Fraser, C.

    2017-09-01

    Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of the object, local features recording the fractional details are more discriminative and are applicable for object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp post, street light and traffic sign are grouped as one category in the first-step classification for their inter similarity compared with tree and vehicle. A finer classification of the lamp post, street light and traffic sign based on the result of the first-step classification is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.
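    The two-step scheme can be sketched end to end with synthetic local descriptors: quantize them into bag-of-features histograms, run a coarse first-step classifier, then a finer classifier only within the pole-like group (lamp post / street light / traffic sign). All data, labels, and parameter choices below are placeholders, not the authors' pipeline.

```python
# Sketch of the two-step classification: bag-of-features histograms from local
# descriptors, a coarse classifier first, then a finer one for the pole-like group.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def bag_of_features(local_descriptors, codebook):
    """Histogram of visual-word assignments, normalized to sum to 1."""
    words = codebook.predict(local_descriptors)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / hist.sum()

# Synthetic local descriptors: 90 objects, 30 per coarse class
objects = [rng.normal(loc=c, size=(200, 8)) for c in (0.0, 1.0, 2.0) for _ in range(30)]
coarse_y = np.repeat([0, 1, 2], 30)           # 0 tree, 1 vehicle, 2 pole-like
fine_y = rng.integers(0, 3, size=30)          # lamp post / street light / traffic sign (placeholders)

codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(objects))
H = np.array([bag_of_features(o, codebook) for o in objects])

step1 = SVC().fit(H, coarse_y)                # step 1: tree vs vehicle vs pole-like
step2 = SVC().fit(H[coarse_y == 2], fine_y)   # step 2: within the pole-like group only

pred1 = step1.predict(H[:1])[0]
print(pred1, step2.predict(H[:1])[0] if pred1 == 2 else "no second step needed")
```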

  7. A TWO-STEP CLASSIFICATION APPROACH TO DISTINGUISHING SIMILAR OBJECTS IN MOBILE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    H. He

    2017-09-01

    Full Text Available Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of the object, local features recording the fractional details are more discriminative and are applicable for object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp post, street light and traffic sign are grouped as one category in the first-step classification for their inter similarity compared with tree and vehicle. A finer classification of the lamp post, street light and traffic sign based on the result of the first-step classification is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.

  8. APPLICABILITY OF SIMILARITY CONDITIONS TO ANALOGUE MODELLING OF TECTONIC STRUCTURES

    Directory of Open Access Journals (Sweden)

    Mikhail A. Goncharov

    2010-01-01

Full Text Available The publication is aimed at comparing concepts of V.V. Belousov and M.V. Gzovsky, outstanding researchers who established fundamentals of tectonophysics in Russia, specifically similarity conditions in application to tectonophysical modeling. Quotations from their publications illustrate differences in their views. In this respect, we can reckon V.V. Belousov as a «realist» as he supported «the liberal point of view» [Methods of modelling…, 1988, p. 21–22], whereas M.V. Gzovsky can be regarded as an «idealist» as he believed that similarity conditions should be mandatorily applied to ensure correctness of physical modeling of tectonic deformations and structures [Gzovsky, 1975, pp. 88 and 94]. Objectives of the present publication are (1) to be another reminder about desirability of compliance with similarity conditions in experimental tectonics; (2) to point out difficulties in ensuring such compliance; (3) to give examples which bring out the fact that similarity conditions are often met per se, i.e. automatically observed; and (4) to show that modeling can be simplified in some cases without compromising quantitative estimations of parameters of structure formation. (1) Physical modelling of tectonic deformations and structures should be conducted, if possible, in compliance with conditions of geometric and physical similarity between experimental models and corresponding natural objects. In any case, a researcher should have a clear vision of conditions applicable to each particular experiment. (2) Application of similarity conditions is often challenging due to unavoidable difficulties caused by the following: (a) imperfection of experimental equipment and technologies (Fig. 1 to 3); (b) uncertainties in estimating parameters of formation of natural structures, including main ones: structure size (Fig. 4), time of formation (Fig. 5), deformation properties of the medium wherein such structures are formed, including, first of all, viscosity (Fig. 6

  9. The continuous similarity model of bulk soil-water evaporation

    Science.gov (United States)

    Clapp, R. B.

    1983-01-01

    The continuous similarity model of evaporation is described. In it, evaporation is conceptualized as a two stage process. For an initially moist soil, evaporation is first climate limited, but later it becomes soil limited. During the latter stage, the evaporation rate is termed evaporability, and mathematically it is inversely proportional to the evaporation deficit. A functional approximation of the moisture distribution within the soil column is also included in the model. The model was tested using data from four experiments conducted near Phoenix, Arizona; and there was excellent agreement between the simulated and observed evaporation. The model also predicted the time of transition to the soil limited stage reasonably well. For one of the experiments, a third stage of evaporation, when vapor diffusion predominates, was observed. The occurrence of this stage was related to the decrease in moisture at the surface of the soil. The continuous similarity model does not account for vapor flow. The results show that climate, through the potential evaporation rate, has a strong influence on the time of transition to the soil limited stage. After this transition, however, bulk evaporation is independent of climate until the effects of vapor flow within the soil predominate.
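    One hedged reading of the statement that the stage-two rate is inversely proportional to the evaporation deficit is the two-stage form below; the symbols (potential rate E_p, cumulative deficit D, soil parameter beta) are assumptions for illustration, not the paper's exact formulation.

```latex
% Two-stage bulk evaporation: climate-limited, then soil-limited ("evaporability").
E(t) =
\begin{cases}
E_p(t), & \text{stage 1 (climate limited)}\\[4pt]
\beta / D(t), & \text{stage 2 (soil limited)}
\end{cases}
\qquad
D(t) = \int_{t_1}^{t} \bigl[E_p(\tau) - E(\tau)\bigr]\,\mathrm{d}\tau
```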

  10. Self-similar two-particle separation model

    DEFF Research Database (Denmark)

    Lüthi, Beat; Berg, Jacob; Ott, Søren

    2007-01-01

We present a new stochastic model for relative two-particle separation in turbulence. Inspired by material line stretching, we suggest that a similar process also occurs beyond the viscous range, with time scaling according to the longitudinal second-order structure function S2(r), e.g. in the inertial range as ε^(-1/3) r^(2/3). Particle separation is modeled as a Gaussian process without invoking information on Eulerian acceleration statistics or on precise shapes of Eulerian velocity distribution functions. The time scale is a function of S2(r) and thus of the Lagrangian evolving separation. The model predictions agree with numerical and experimental results for various initial particle separations. We present model results for fixed time and fixed scale statistics. We find that for the Richardson-Obukhov law, i.e., ⟨r²⟩ = g ε t³, to hold and to also be observed in experiments, high Reynolds...
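    The inertial-range relations invoked above are standard results and are written out here for reference, with Kolmogorov constant C_2 and Richardson constant g; they are background, not new content from the paper.

```latex
% Longitudinal second-order structure function, the associated time scale,
% and the Richardson-Obukhov law for mean-square pair separation.
S_2(r) = C_2\,(\varepsilon r)^{2/3},
\qquad
\tau(r) \sim \varepsilon^{-1/3} r^{2/3},
\qquad
\langle r^2(t) \rangle = g\, \varepsilon\, t^{3}
```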

  11. MAC/FAC: A Model of Similarity-Based Retrieval

    Science.gov (United States)

    1994-10-01

MAC/FAC: A Model of Similarity-Based Retrieval. Kenneth D. Forbus, Dedre Gentner, and Keith Law. Technical Report #59, The Institute for the Learning Sciences, Northwestern University, October 1994.

  12. In-Medium Similarity Renormalization Group Approach to the Nuclear Many-Body Problem

    Science.gov (United States)

    Hergert, Heiko; Bogner, Scott K.; Lietz, Justin G.; Morris, Titus D.; Novario, Samuel J.; Parzuchowski, Nathan M.; Yuan, Fei

We present a pedagogical discussion of Similarity Renormalization Group (SRG) methods, in particular the In-Medium SRG (IMSRG) approach for solving the nuclear many-body problem. These methods use continuous unitary transformations to evolve the nuclear Hamiltonian to a desired shape. The IMSRG, in particular, is used to decouple the ground state from all excitations and solve the many-body Schrödinger equation. We discuss the IMSRG formalism as well as its numerical implementation, and use the method to study the pairing model and infinite neutron matter. We compare our results with those of Coupled cluster theory (Chap. 8), Configuration-Interaction Monte Carlo (Chap. 9), and the Self-Consistent Green's Function approach discussed in Chap. 11. The chapter concludes with an expanded overview of current research directions, and a look ahead at upcoming developments.
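    The continuous unitary transformation mentioned above is generated by a flow equation; in standard SRG notation it reads as follows (this is textbook background, not a result specific to the chapter).

```latex
% SRG flow equation: H(s) evolves with the flow parameter s under an
% anti-Hermitian generator \eta(s) chosen to drive H toward the desired form.
\frac{\mathrm{d}H(s)}{\mathrm{d}s} = \bigl[\eta(s),\, H(s)\bigr],
\qquad H(s) = U(s)\, H(0)\, U^{\dagger}(s),
\qquad \eta(s) = \frac{\mathrm{d}U(s)}{\mathrm{d}s}\, U^{\dagger}(s)
```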

  13. A self-similar magnetohydrodynamic model for ball lightnings

    International Nuclear Information System (INIS)

    Tsui, K. H.

    2006-01-01

    Ball lightning is modeled by magnetohydrodynamic (MHD) equations in two-dimensional spherical geometry with azimuthal symmetry. Dynamic evolutions in the radial direction are described by the self-similar evolution function y(t). The plasma pressure, mass density, and magnetic fields are solved in terms of the radial label η. This model gives spherical MHD plasmoids with an axisymmetric force-free magnetic field, and spherically symmetric plasma pressure and mass density, which self-consistently determine the polytropic index γ. The spatially oscillating nature of the radial and meridional field structures indicates embedded regions of closed field lines. These regions are named secondary plasmoids, whereas the overall self-similar spherical structure is named the primary plasmoid. According to this model, the time evolution function allows the primary plasmoid to expand outward in two modes. The corresponding ejection of the embedded secondary plasmoids results in ball lightning, offering an answer as to how it comes into being. The first is an accelerated expanding mode. This mode appears to fit plasmoids ejected from thundercloud tops with acceleration to the ionosphere, as seen in high-altitude atmospheric observations of sprites and blue jets. It also appears to account for midair high-speed ball lightning overtaking airplanes, and for ground-level high-speed energetic ball lightning. The second is a decelerated expanding mode, which appears to be compatible with slowly moving ball lightning seen near ground level. The inverse of this second mode corresponds to an accelerated inward collapse, which could bring ball lightning to an end, sometimes with a cracking sound.

  14. A study of concept-based similarity approaches for recommending program examples

    Science.gov (United States)

    Hosseini, Roya; Brusilovsky, Peter

    2017-07-01

    This paper investigates a range of concept-based example recommendation approaches that we developed to provide example-based problem-solving support in the domain of programming. The goal of these approaches is to offer students a set of most relevant remedial examples when they have trouble solving a code comprehension problem where students examine a program code to determine its output or the final value of a variable. In this paper, we use the ideas of semantic-level similarity-based linking developed in the area of intelligent hypertext to generate examples for the given problem. To determine the best-performing approach, we explored two groups of similarity approaches for selecting examples: non-structural approaches focusing on examples that are similar to the problem in terms of concept coverage and structural approaches focusing on examples that are similar to the problem by the structure of the content. We also explored the value of personalized example recommendation based on student's knowledge levels and learning goal of the exercise. The paper presents concept-based similarity approaches that we developed, explains the data collection studies and reports the result of comparative analysis. The results of our analysis showed better ranking performance of the personalized structural variant of cosine similarity approach.
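
    As a minimal illustration of the non-structural family of approaches mentioned above (concept-coverage similarity, without the structural or personalized weighting the paper actually evaluates), the sketch below ranks candidate examples by the cosine of their concept-count vectors against the problem's vector. The concept names and counts are hypothetical.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two concept-count vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

concepts = ["ForLoop", "ArrayIndex", "IfStatement", "Increment", "NestedLoop"]

# Concept-coverage vectors (counts of each concept); values are hypothetical
problem = np.array([2, 1, 1, 1, 0])
examples = {
    "example_A": np.array([2, 1, 0, 1, 0]),
    "example_B": np.array([0, 0, 2, 0, 1]),
    "example_C": np.array([1, 1, 1, 1, 1]),
}

# Recommend the examples most similar in concept coverage to the problem
ranked = sorted(examples.items(), key=lambda kv: cosine(problem, kv[1]), reverse=True)
for name, vec in ranked:
    print(f"{name}: similarity = {cosine(problem, vec):.3f}")
```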

  15. Linear pharmacokinetic parameters for monoclonal antibodies are similar within a species and across different pharmacological targets: A comparison between human, cynomolgus monkey and hFcRn Tg32 transgenic mouse using a population-modeling approach.

    Science.gov (United States)

    Betts, Alison; Keunecke, Anne; van Steeg, Tamara J; van der Graaf, Piet H; Avery, Lindsay B; Jones, Hannah; Berkhout, Jan

    2018-04-10

    The linear pharmacokinetics (PK) of therapeutic monoclonal antibodies (mAbs) can be considered a class property with values that are similar to endogenous IgG. Knowledge of these parameters across species could be used to avoid unnecessary in vivo PK studies and to enable early PK predictions and pharmacokinetic/pharmacodynamic (PK/PD) simulations. In this work, population-pharmacokinetic (popPK) modeling was used to determine a single set of 'typical' popPK parameters describing the linear PK of mAbs in human, cynomolgus monkey and transgenic mice expressing the human neonatal Fc receptor (hFcRn Tg32), using a rich dataset of 27 mAbs. Non-linear PK was excluded from the datasets and a 2-compartment model was applied to describe mAb disposition. Typical human popPK estimates compared well with data from comparator mAbs with linear PK in the clinic. Outliers with higher than typical clearance were found to have non-specific interactions in an affinity-capture self-interaction nanoparticle spectroscopy assay, offering a potential tool to screen out these mAbs at an early stage. Translational strategies were investigated for prediction of human linear PK of mAbs, including use of typical human popPK parameters and allometric exponents from cynomolgus monkey and Tg32 mouse. Each method gave good prediction of human PK with parameters predicted within 2-fold. These strategies offer alternative options to the use of cynomolgus monkeys for human PK predictions of linear mAbs, based on in silico methods (typical human popPK parameters) or using a rodent species (Tg32 mouse), and call into question the value of completing extensive in vivo preclinical PK to inform linear mAb PK.
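
    For context, a linear two-compartment disposition model of the kind described here can be simulated directly; the sketch below uses illustrative placeholder parameters in the range commonly reported for IgG-like mAbs, not the popPK estimates from this study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative linear mAb-like parameters (NOT the study's popPK estimates)
CL, V1, Q, V2 = 0.2, 3.0, 0.5, 3.0      # clearance [L/day], V central [L], Q [L/day], V peripheral [L]
dose_mg = 100.0                          # IV bolus into the central compartment

def two_cmt(t, a):
    a1, a2 = a                           # drug amounts in central / peripheral compartments [mg]
    da1 = -(CL / V1) * a1 - (Q / V1) * a1 + (Q / V2) * a2
    da2 = (Q / V1) * a1 - (Q / V2) * a2
    return [da1, da2]

t_eval = np.linspace(0, 42, 200)         # days
sol = solve_ivp(two_cmt, (0, 42), [dose_mg, 0.0], t_eval=t_eval)
conc_central = sol.y[0] / V1             # serum concentration [mg/L]

print(f"C0 ~ {conc_central[0]:.1f} mg/L, C(day 42) ~ {conc_central[-1]:.2f} mg/L")
```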

  16. Using a Similarity Matrix Approach to Evaluate the Accuracy of Rescaled Maps

    Directory of Open Access Journals (Sweden)

    Peijun Sun

    2018-03-01

    Full Text Available Rescaled maps have been extensively utilized to provide data at the appropriate spatial resolution for use in various Earth science models. However, a simple and easy way to evaluate these rescaled maps has not been developed. We propose a similarity matrix approach using a contingency table to compute three measures: overall similarity (OS), omission error (OE), and commission error (CE) to evaluate the rescaled maps. The Majority Rule Based aggregation (MRB) method was employed to produce the upscaled maps to demonstrate this approach. In addition, previously created, coarser resolution land cover maps from other research projects were also available for comparison. The question of which is better, a map initially produced at coarse resolution or a fine resolution map rescaled to a coarse resolution, has not been quantitatively investigated. To address these issues, we selected study sites at three different extent levels. First, we selected twelve regions covering the continental USA, then we selected nine states (from the whole continental USA), and finally we selected nine Agriculture Statistical Districts (ASDs) (from within the nine selected states) as study sites. Crop/non-crop maps derived from the USDA Crop Data Layer (CDL) at 30 m as base maps were used for the upscaling, and existing maps at 250 m and 1 km were utilized for the comparison. The results showed that a similarity matrix can effectively provide the map user with the information needed to assess the rescaling. Additionally, the upscaled maps can provide higher accuracy and better represent landscape pattern compared to the existing coarser maps. Therefore, we strongly recommend that an evaluation of the upscaled map and the existing coarser resolution map using a similarity matrix should be conducted before deciding which dataset to use for the modelling. Overall, extending our understanding on how to perform an evaluation of the rescaled map and investigation of the applicability
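
    A minimal sketch of the two building blocks described here, under the assumption that OS, OE, and CE are derived from a binary (crop/non-crop) contingency table between the rescaled map and a reference map: majority-rule-based (MRB) aggregation of a fine-resolution map into coarse blocks, followed by the contingency-table measures. The array sizes and the exact definitions of OE and CE as class-wise disagreement rates are assumptions.

```python
import numpy as np

def mrb_upscale(fine, factor):
    """Majority Rule Based aggregation of a binary map by an integer factor."""
    h, w = fine.shape
    blocks = fine[:h - h % factor, :w - w % factor]
    blocks = blocks.reshape(h // factor, factor, w // factor, factor)
    return (blocks.mean(axis=(1, 3)) >= 0.5).astype(int)

def similarity_matrix(rescaled, reference):
    """Contingency table and OS/OE/CE for two binary (crop=1 / non-crop=0) maps."""
    r, f = rescaled.ravel(), reference.ravel()
    table = np.array([[np.sum((r == 1) & (f == 1)), np.sum((r == 1) & (f == 0))],
                      [np.sum((r == 0) & (f == 1)), np.sum((r == 0) & (f == 0))]])
    os_ = (table[0, 0] + table[1, 1]) / table.sum()   # overall similarity
    oe = table[1, 0] / max(table[:, 0].sum(), 1)      # reference crop missed by the rescaled map
    ce = table[0, 1] / max(table[0, :].sum(), 1)      # rescaled crop not confirmed by the reference
    return table, os_, oe, ce

rng = np.random.default_rng(0)
fine = (rng.random((120, 120)) > 0.4).astype(int)     # toy fine-resolution crop/non-crop map
coarse = mrb_upscale(fine, 8)                         # e.g. 30 m -> 240 m blocks
reference = coarse.copy()
reference[rng.random(coarse.shape) < 0.1] ^= 1        # perturb to mimic an existing coarse map
table, os_, oe, ce = similarity_matrix(coarse, reference)
print(table, f"OS={os_:.3f} OE={oe:.3f} CE={ce:.3f}")
```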

  17. A Novel Approach to Semantic Similarity Measurement Based on a Weighted Concept Lattice: Exemplifying Geo-Information

    Directory of Open Access Journals (Sweden)

    Jia Xiao

    2017-11-01

    Full Text Available The measurement of semantic similarity has been widely recognized as having a fundamental and key role in information science and information systems. Although various models have been proposed to measure semantic similarity, these models are not able effectively to quantify the weights of relevant factors that impact on the judgement of semantic similarity, such as the attributes of concepts, application context, and concept hierarchy. In this paper, we propose a novel approach that comprehensively considers the effects of various factors on semantic similarity judgment, which we name semantic similarity measurement based on a weighted concept lattice (SSMWCL). A feature model and network model are integrated together in SSMWCL. Based on the feature model, the combined weight of each attribute of the concepts is calculated by merging its information entropy and inclusion-degree importance in a specific application context. By establishing the weighted concept lattice, the relative hierarchical depths of concepts for comparison are computed according to the principle of the network model. The integration of feature model and network model enables SSMWCL to take account of differences in concepts more comprehensively in semantic similarity measurement. Additionally, a workflow of SSMWCL is designed to demonstrate these procedures and a case study of geo-information is conducted to assess the approach.

  18. Meeting your match: How attractiveness similarity affects approach behavior in mixed-sex dyads

    NARCIS (Netherlands)

    Straaten, I. van; Engels, R.C.M.E.; Finkenauer, C.; Holland, R.W.

    2009-01-01

    This experimental study investigated approach behavior toward opposite-sex others of similar versus dissimilar physical attractiveness. Furthermore, it tested the moderating effects of sex. Single participants interacted with confederates of high and low attractiveness. Observers rated their behavior in terms of relational investment.

  19. Extending the similarity-based XML multicast approach with digital signatures

    DEFF Research Database (Denmark)

    Azzini, Antonia; Marrara, Stefania; Jensen, Meiko

    2009-01-01

    This paper investigates the interplay between similarity-based SOAP message aggregation and digital signature application. An overview of the approaches resulting from the different orderings of the tasks of signature application, verification, similarity aggregation, and splitting is provided. Depending on the intersection between similarity-aggregated and signed SOAP message parts, the paper discusses three different cases of signature application, and sketches their applicability and performance implications.

  20. Similarities between obesity in pets and children: the addiction model.

    Science.gov (United States)

    Pretlow, Robert A; Corbee, Ronald J

    2016-09-01

    Obesity in pets is a frustrating, major health problem. Obesity in human children is similar. Prevailing theories accounting for the rising obesity rates - for example, poor nutrition and sedentary activity - are being challenged. Obesity interventions in both pets and children have produced modest short-term but poor long-term results. New strategies are needed. A novel theory posits that obesity in pets and children is due to 'treats' and excessive meal amounts given by the 'pet-parent' and child-parent to obtain affection from the pet/child, which enables 'eating addiction' in the pet/child and results in parental 'co-dependence'. Pet-parents and child-parents may even become hostage to the treats/food to avoid the ire of the pet/child. Eating addiction in the pet/child also may be brought about by emotional factors such as stress, independent of parental co-dependence. An applicable treatment for child obesity has been trialled using classic addiction withdrawal/abstinence techniques, as well as behavioural addiction methods, with significant results. Both the child and the parent progress through withdrawal from specific 'problem foods', next from snacking (non-specific foods) and finally from excessive portions at meals (gradual reductions). This approach should adapt well for pets and pet-parents. Pet obesity is more 'pure' than child obesity, in that contributing factors and treatment points are essentially under the control of the pet-parent. Pet obesity might thus serve as an ideal test bed for the treatment and prevention of child obesity, with focus primarily on parental behaviours. Sharing information between the fields of pet and child obesity would be mutually beneficial.

  1. QSAR models based on quantum topological molecular similarity.

    Science.gov (United States)

    Popelier, P L A; Smith, P J

    2006-07-01

    A new method called quantum topological molecular similarity (QTMS) was fairly recently proposed [J. Chem. Inf. Comp. Sc., 41, 2001, 764] to construct a variety of medicinal, ecological and physical organic QSAR/QSPRs. The QTMS method uses quantum chemical topology (QCT) to define electronic descriptors drawn from modern ab initio wave functions of geometry-optimised molecules. It was shown that the current abundance of computing power can be utilised to inject realistic descriptors into QSAR/QSPRs. In this article we study seven datasets of medicinal interest: the dissociation constants (pKa) for a set of substituted imidazolines, the pKa of imidazoles, the ability of a set of indole derivatives to displace [3H]flunitrazepam from binding to bovine cortical membranes, the influenza inhibition constants for a set of benzimidazoles, the interaction constants for a set of amides and the enzyme liver alcohol dehydrogenase, the natriuretic activity of sulphonamide carbonic anhydrase inhibitors, and the toxicity of a series of benzyl alcohols. A partial least squares analysis in conjunction with a genetic algorithm delivered excellent models. They are also able to highlight the active site of the ligand or the molecule whose structure determines the activity. The advantages and limitations of QTMS are discussed.
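
    The modeling step can be sketched, very schematically, as a partial least squares regression; here random numbers stand in for the ab initio quantum-topological descriptors and activities used in QTMS, and the genetic-algorithm descriptor selection is omitted.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n_mol, n_desc = 40, 12                       # molecules x descriptors
X = rng.normal(size=(n_mol, n_desc))         # stand-ins for bond-critical-point properties
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=n_mol)   # synthetic "activity"

pls = PLSRegression(n_components=3)
pls.fit(X[:30], y[:30])                      # train on 30 molecules
pred = pls.predict(X[30:]).ravel()           # predict the 10 held-out molecules
print("held-out r2:", round(r2_score(y[30:], pred), 2))
```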

  2. Similarity-based search of model organism, disease and drug effect phenotypes

    KAUST Repository

    Hoehndorf, Robert

    2015-02-19

    Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions, druggable therapeutic targets, and determination of pathogenicity. Results: We have developed PhenomeNET 2, a system that enables similarity-based searches over a large repository of phenotypes in real-time. It can be used to identify strains of model organisms that are phenotypically similar to human patients, diseases that are phenotypically similar to model organism phenotypes, or drug effect profiles that are similar to the phenotypes observed in a patient or model organism. PhenomeNET 2 is available at http://aber-owl.net/phenomenet. Conclusions: Phenotype-similarity searches can provide a powerful tool for the discovery and investigation of molecular mechanisms underlying an observed phenotypic manifestation. PhenomeNET 2 facilitates user-defined similarity searches and allows researchers to analyze their data within a large repository of human, mouse and rat phenotypes.

  3. Visual reconciliation of alternative similarity spaces in climate modeling

    Science.gov (United States)

    J Poco; A Dasgupta; Y Wei; William Hargrove; C.R. Schwalm; D.N. Huntzinger; R Cook; E Bertini; C.T. Silva

    2015-01-01

    Visual data analysis often requires grouping of data objects based on their similarity. In many application domains researchers use algorithms and techniques like clustering and multidimensional scaling to extract groupings from data. While extracting these groups using a single similarity criteria is relatively straightforward, comparing alternative criteria poses...

  4. Self-similar solution for coupled thermal electromagnetic model ...

    African Journals Online (AJOL)

    An investigation into the existence and uniqueness of self-similar solutions for the coupled Maxwell and Pennes bio-heat equations has been carried out. Criteria for the existence and uniqueness of the self-similar solution are given in the consequent theorems. Journal of the Nigerian Association of Mathematical Physics ...

  5. An approach to large scale identification of non-obvious structural similarities between proteins

    Science.gov (United States)

    Cherkasov, Artem; Jones, Steven JM

    2004-01-01

    Background A new sequence independent bioinformatics approach allowing genome-wide search for proteins with similar three dimensional structures has been developed. By utilizing the numerical output of the sequence threading it establishes putative non-obvious structural similarities between proteins. When applied to the testing set of proteins with known three dimensional structures the developed approach was able to recognize structurally similar proteins with high accuracy. Results The method has been developed to identify pathogenic proteins with low sequence identity and high structural similarity to host analogues. Such protein structure relationships would be hypothesized to arise through convergent evolution or through ancient horizontal gene transfer events, now undetectable using current sequence alignment techniques. The pathogen proteins, which could mimic or interfere with host activities, would represent candidate virulence factors. The developed approach utilizes the numerical outputs from the sequence-structure threading. It identifies the potential structural similarity between a pair of proteins by correlating the threading scores of the corresponding two primary sequences against the library of the standard folds. This approach allowed up to 64% sensitivity and 99.9% specificity in distinguishing protein pairs with high structural similarity. Conclusion Preliminary results obtained by comparison of the genomes of Homo sapiens and several strains of Chlamydia trachomatis have demonstrated the potential usefulness of the method in the identification of bacterial proteins with known or potential roles in virulence. PMID:15147578

  6. An approach to large scale identification of non-obvious structural similarities between proteins

    Directory of Open Access Journals (Sweden)

    Cherkasov Artem

    2004-05-01

    Full Text Available Abstract Background A new sequence independent bioinformatics approach allowing genome-wide search for proteins with similar three dimensional structures has been developed. By utilizing the numerical output of the sequence threading it establishes putative non-obvious structural similarities between proteins. When applied to the testing set of proteins with known three dimensional structures the developed approach was able to recognize structurally similar proteins with high accuracy. Results The method has been developed to identify pathogenic proteins with low sequence identity and high structural similarity to host analogues. Such protein structure relationships would be hypothesized to arise through convergent evolution or through ancient horizontal gene transfer events, now undetectable using current sequence alignment techniques. The pathogen proteins, which could mimic or interfere with host activities, would represent candidate virulence factors. The developed approach utilizes the numerical outputs from the sequence-structure threading. It identifies the potential structural similarity between a pair of proteins by correlating the threading scores of the corresponding two primary sequences against the library of the standard folds. This approach allowed up to 64% sensitivity and 99.9% specificity in distinguishing protein pairs with high structural similarity. Conclusion Preliminary results obtained by comparison of the genomes of Homo sapiens and several strains of Chlamydia trachomatis have demonstrated the potential usefulness of the method in the identification of bacterial proteins with known or potential roles in virulence.
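
    The core idea in the two records above — scoring a protein pair by correlating the two sequences' threading scores against a common library of standard folds — reduces to a simple vector correlation once those scores are available. The sketch below assumes the per-fold threading scores have already been computed (the numbers are placeholders) and uses the Pearson coefficient as one plausible choice of correlation.

```python
import numpy as np
from scipy.stats import pearsonr

# Threading scores of two sequences against the same library of standard folds
# (placeholder values; in practice these come from a sequence-structure threader)
fold_library = [f"fold_{i:03d}" for i in range(8)]
scores_protein_A = np.array([12.1, 3.4, 8.7, 15.2, 2.2, 9.9, 4.1, 11.3])
scores_protein_B = np.array([11.0, 2.9, 7.5, 14.8, 3.0, 10.4, 3.7, 10.9])

r, p_value = pearsonr(scores_protein_A, scores_protein_B)
print(f"threading-profile correlation r = {r:.3f} (p = {p_value:.1e})")
# A high correlation flags a putative (possibly non-obvious) structural similarity;
# the decision threshold on r would be calibrated on proteins of known structure.
```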

  7. Novel Agent Based-approach for Industrial Diagnosis: A Combined use Between Case-based Reasoning and Similarity Measure

    Directory of Open Access Journals (Sweden)

    Fatima Zohra Benkaddour

    2016-12-01

    Full Text Available In the spunlace nonwovens industry, the maintenance task is very complex; it requires collaboration between experts and operators. In this paper, we propose a new approach integrating agent-based modelling with case-based reasoning that utilizes similarity measures and a preferences module. The main purpose of our study is to compare and evaluate the most suitable similarity measure for our case. Furthermore, operators, who are usually geographically dispersed, have to collaborate and negotiate to achieve mutual agreements, especially when their proposals (diagnoses) lead to a conflicting situation. The experimentation shows that the suggested agent-based approach is very interesting and efficient for operators and experts who collaborate in the INOTIS enterprise.

  8. Modeling Recognition Memory Using the Similarity Structure of Natural Input

    Science.gov (United States)

    Lacroix, Joyca P. W.; Murre, Jaap M. J.; Postma, Eric O.; van den Herik, H. Jaap

    2006-01-01

    The natural input memory (NIM) model is a new model for recognition memory that operates on natural visual input. A biologically informed perceptual preprocessing method takes local samples (eye fixations) from a natural image and translates these into a feature-vector representation. During recognition, the model compares incoming preprocessed…

  9. Mathematical approach for the assessment of similarity factor using a new scheme for calculating weight.

    Science.gov (United States)

    Gohel, M C; Sarvaiya, K G; Shah, A R; Brahmbhatt, B K

    2009-03-01

    The objective of the present work was to propose a method for calculating weight in the Moore and Flanner equation. The percentage coefficient of variation in the reference and test formulations at each time point was considered for calculating the weight. Literature-reported data are used to demonstrate the applicability of the method. The advantages and applications of the new approach are described. The results show a drop in the value of the similarity factor as compared to the approach proposed in earlier work. Scientists who need high accuracy in calculation may use this approach.
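
    For context, the Moore–Flanner similarity factor compares reference (R) and test (T) dissolution profiles; its weighted form reads f2 = 50·log10{100·[1 + (1/n)·Σ w_t(R_t − T_t)²]^(−1/2)}. The sketch below implements this with weights derived from the percent coefficient of variation at each time point, as the abstract proposes; the exact normalization of the weights (here scaled to an average of 1) is an assumption, and the profile data are made up.

```python
import numpy as np

def weighted_f2(ref, test, weights=None):
    """Weighted Moore-Flanner similarity factor f2 (unweighted when weights is None)."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    w = np.ones_like(ref) if weights is None else np.asarray(weights, float)
    msd = np.mean(w * (ref - test) ** 2)          # (weighted) mean squared difference
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

# Mean % dissolved at each time point (hypothetical profiles)
ref  = np.array([28, 51, 71, 88, 95])
test = np.array([22, 46, 67, 84, 93])

# Percent CV pooled over reference and test at each time point (hypothetical)
cv = np.array([14.0, 10.0, 7.0, 4.0, 2.0])
weights = cv / cv.mean()                          # assumed normalization: mean weight = 1

print(f"unweighted f2  = {weighted_f2(ref, test):.1f}")
print(f"CV-weighted f2 = {weighted_f2(ref, test, weights):.1f}")
```

    With higher variability at the early time points, where the profiles also differ most, the weighted value comes out lower than the unweighted one, consistent with the drop in the similarity factor reported above.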

  10. Modeling recognition memory using the similarity structure of natural input

    NARCIS (Netherlands)

    Lacroix, J.P.W.; Murre, J.M.J.; Postma, E.O.; van den Herik, H.J.

    2006-01-01

    The natural input memory (NIM) model is a new model for recognition memory that operates on natural visual input. A biologically informed perceptual preprocessing method takes local samples (eye fixations) from a natural image and translates these into a feature-vector representation. During

  11. Approximate self-similarity in models of geological folding

    NARCIS (Netherlands)

    Budd, C.J.; Peletier, M.A.

    2000-01-01

    We propose a model for the folding of rock under the compression of tectonic plates. This models an elastic rock layer imbedded in a viscous foundation by a fourth-order parabolic equation with a nonlinear constraint. The large-time behavior of solutions of this problem is examined and found to be

  12. The empirical versus DSM-oriented approach of the child behavior checklist: Similarities and dissimilarities

    NARCIS (Netherlands)

    Wolff, M.S. de; Vogels, A.G.C.; Reijneveld, S.A.

    2014-01-01

    The DSM-oriented approach of the Child Behavior Checklist (CBCL) is a relatively new classification of problem behavior in children and adolescents. Given the clinical and scientific relevance of the CBCL, this study examines similarities and dissimilarities between the empirical and the

  13. The Empirical Versus DSM-Oriented Approach of the Child Behavior Checklist Similarities and Dissimilarities

    NARCIS (Netherlands)

    de Wolff, Marianne S.; Vogels, Anton G. C.; Reijneveld, Sijmen A.

    2014-01-01

    The DSM-oriented approach of the Child Behavior Checklist (CBCL) is a relatively new classification of problem behavior in children and adolescents. Given the clinical and scientific relevance of the CBCL, this study examines similarities and dissimilarities between the empirical and the

  14. Detecting atypical examples of known domain types by sequence similarity searching: the SBASE domain library approach.

    Science.gov (United States)

    Dhir, Somdutta; Pacurar, Mircea; Franklin, Dino; Gáspári, Zoltán; Kertész-Farkas, Attila; Kocsor, András; Eisenhaber, Frank; Pongor, Sándor

    2010-11-01

    SBASE is a project initiated to detect known domain types and predict domain architectures using sequence similarity searching (Simon et al., Protein Seq Data Anal 5:39-42, 1992; Pongor et al., Nucl. Acids Res. 21:3111-3115, 1992). The current approach uses a curated collection of domain sequences - the SBASE domain library - and standard similarity search algorithms, followed by postprocessing based on simple statistics of the domain similarity network (http://hydra.icgeb.trieste.it/sbase/). It is especially useful in detecting rare, atypical examples of known domain types which are sometimes missed even by more sophisticated methodologies. This approach does not require multiple alignment or machine learning techniques, and can be a useful complement to other domain detection methodologies. This article gives an overview of the project history as well as of the concepts and principles developed within the project.

  15. A low-cost approach to electronic excitation energies based on the driven similarity renormalization group

    Science.gov (United States)

    Li, Chenyang; Verma, Prakash; Hannon, Kevin P.; Evangelista, Francesco A.

    2017-08-01

    We propose an economical state-specific approach to evaluate electronic excitation energies based on the driven similarity renormalization group truncated to second order (DSRG-PT2). Starting from a closed-shell Hartree-Fock wave function, a model space is constructed that includes all single or single and double excitations within a given set of active orbitals. The resulting VCIS-DSRG-PT2 and VCISD-DSRG-PT2 methods are introduced and benchmarked on a set of 28 organic molecules [M. Schreiber et al., J. Chem. Phys. 128, 134110 (2008)]. Taking CC3 results as reference values, mean absolute deviations of 0.32 and 0.22 eV are observed for VCIS-DSRG-PT2 and VCISD-DSRG-PT2 excitation energies, respectively. Overall, VCIS-DSRG-PT2 yields results with accuracy comparable to those from time-dependent density functional theory using the B3LYP functional, while VCISD-DSRG-PT2 gives excitation energies comparable to those from equation-of-motion coupled cluster with singles and doubles.

  16. Patient Similarity in Prediction Models Based on Health Data: A Scoping Review

    Science.gov (United States)

    Sharafoddini, Anis; Dubin, Joel A

    2017-01-01

    Background Physicians and health policy makers are required to make predictions during their decision making in various medical problems. Many advances have been made in predictive modeling toward outcome prediction, but these innovations target an average patient and are insufficiently adjustable for individual patients. One developing idea in this field is individualized predictive analytics based on patient similarity. The goal of this approach is to identify patients who are similar to an index patient and derive insights from the records of similar patients to provide personalized predictions.. Objective The aim is to summarize and review published studies describing computer-based approaches for predicting patients’ future health status based on health data and patient similarity, identify gaps, and provide a starting point for related future research. Methods The method involved (1) conducting the review by performing automated searches in Scopus, PubMed, and ISI Web of Science, selecting relevant studies by first screening titles and abstracts then analyzing full-texts, and (2) documenting by extracting publication details and information on context, predictors, missing data, modeling algorithm, outcome, and evaluation methods into a matrix table, synthesizing data, and reporting results. Results After duplicate removal, 1339 articles were screened in abstracts and titles and 67 were selected for full-text review. In total, 22 articles met the inclusion criteria. Within included articles, hospitals were the main source of data (n=10). Cardiovascular disease (n=7) and diabetes (n=4) were the dominant patient diseases. Most studies (n=18) used neighborhood-based approaches in devising prediction models. Two studies showed that patient similarity-based modeling outperformed population-based predictive methods. Conclusions Interest in patient similarity-based predictive modeling for diagnosis and prognosis has been growing. In addition to raw/coded health
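
    Most of the reviewed studies use neighborhood-based schemes; a generic sketch of that idea (not any specific study's model) is to standardize the predictor matrix, find the index patient's nearest neighbors, and predict the outcome from those neighbors' records. The feature names and data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic cohort: [age, systolic BP, HbA1c, BMI] and a binary outcome
X = np.column_stack([rng.normal(60, 10, 500), rng.normal(130, 15, 500),
                     rng.normal(6.5, 1.0, 500), rng.normal(28, 4, 500)])
risk = 0.04 * (X[:, 0] - 60) + 0.3 * (X[:, 2] - 6.5)
y = (rng.random(500) < 1 / (1 + np.exp(-risk))).astype(int)

# Standardize so each predictor contributes comparably to the distance
mean, std = X.mean(axis=0), X.std(axis=0)
Xz = (X - mean) / std

def knn_predict(index_patient, Xz, y, k=25):
    """Predict an index patient's outcome probability from the k most similar patients."""
    z = (index_patient - mean) / std
    dist = np.linalg.norm(Xz - z, axis=1)          # Euclidean patient (dis)similarity
    neighbors = np.argsort(dist)[:k]
    return y[neighbors].mean()

print("predicted risk:", round(knn_predict(np.array([72, 145, 8.2, 31]), Xz, y), 3))
```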

  17. Similarity solutions for systems arising from an Aedes aegypti model

    Science.gov (United States)

    Freire, Igor Leite; Torrisi, Mariano

    2014-04-01

    In a recent paper a new model for the Aedes aegypti mosquito dispersal dynamics was proposed and its Lie point symmetries were investigated. According to the group classification carried out, the maximal symmetry Lie algebra of the nonlinear cases is reached whenever the advection term vanishes. In this work we analyze the family of systems obtained when the wind effects on the proposed model are neglected. Wide new classes of solutions to the systems under consideration are obtained.

  18. Genome-Wide Expression Profiling of Five Mouse Models Identifies Similarities and Differences with Human Psoriasis

    Science.gov (United States)

    Swindell, William R.; Johnston, Andrew; Carbajal, Steve; Han, Gangwen; Wohn, Christian; Lu, Jun; Xing, Xianying; Nair, Rajan P.; Voorhees, John J.; Elder, James T.; Wang, Xiao-Jing; Sano, Shigetoshi; Prens, Errol P.; DiGiovanni, John; Pittelkow, Mark R.; Ward, Nicole L.; Gudjonsson, Johann E.

    2011-01-01

    Development of a suitable mouse model would facilitate the investigation of pathomechanisms underlying human psoriasis and would also assist in development of therapeutic treatments. However, while many psoriasis mouse models have been proposed, no single model recapitulates all features of the human disease, and standardized validation criteria for psoriasis mouse models have not been widely applied. In this study, whole-genome transcriptional profiling is used to compare gene expression patterns manifested by human psoriatic skin lesions with those that occur in five psoriasis mouse models (K5-Tie2, imiquimod, K14-AREG, K5-Stat3C and K5-TGFbeta1). While the cutaneous gene expression profiles associated with each mouse phenotype exhibited statistically significant similarity to the expression profile of psoriasis in humans, each model displayed distinctive sets of similarities and differences in comparison to human psoriasis. For all five models, correspondence to the human disease was strong with respect to genes involved in epidermal development and keratinization. Immune and inflammation-associated gene expression, in contrast, was more variable between models as compared to the human disease. These findings support the value of all five models as research tools, each with identifiable areas of convergence to and divergence from the human disease. Additionally, the approach used in this paper provides an objective and quantitative method for evaluation of proposed mouse models of psoriasis, which can be strategically applied in future studies to score strengths of mouse phenotypes relative to specific aspects of human psoriasis. PMID:21483750

  19. Regulatory challenges and approaches to characterize nanomedicines and their follow-on similars.

    Science.gov (United States)

    Mühlebach, Stefan; Borchard, Gerrit; Yildiz, Selcan

    2015-03-01

    Nanomedicines are highly complex products and are the result of difficult to control manufacturing processes. Nonbiological complex drugs and their biological counterparts can comprise nanoparticles and therefore show nanomedicine characteristics. They consist of not fully known nonhomomolecular structures, and can therefore not be characterized by physicochemical means only. Also, intended copies of nanomedicines (follow-on similars) may have clinically meaningful differences, creating the regulatory challenge of how to grant a high degree of assurance for patients' benefit and safety. As an example, the current regulatory approach for marketing authorization of intended copies of nonbiological complex drugs appears inappropriate; also, a valid strategy incorporating the complexity of such systems is undefined. To demonstrate sufficient similarity and comparability, a stepwise quality, nonclinical and clinical approach is necessary to obtain market authorization for follow-on products as therapeutic alternatives, substitution and/or interchangeable products. To fill the regulatory gap, harmonized and science-based standards are needed.

  20. Quality assessment of protein model-structures based on structural and functional similarities.

    Science.gov (United States)

    Konopka, Bogumil M; Nebel, Jean-Christophe; Kotulska, Malgorzata

    2012-09-21

    Experimental determination of protein 3D structures is expensive, time consuming and sometimes impossible. A gap between number of protein structures deposited in the World Wide Protein Data Bank and the number of sequenced proteins constantly broadens. Computational modeling is deemed to be one of the ways to deal with the problem. Although protein 3D structure prediction is a difficult task, many tools are available. These tools can model it from a sequence or partial structural information, e.g. contact maps. Consequently, biologists have the ability to generate automatically a putative 3D structure model of any protein. However, the main issue becomes evaluation of the model quality, which is one of the most important challenges of structural biology. GOBA--Gene Ontology-Based Assessment is a novel Protein Model Quality Assessment Program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high quality model is expected to be structurally similar to proteins functionally similar to the prediction target. Whereas DALI is used to measure structure similarity, protein functional similarity is quantified using standardized and hierarchical description of proteins provided by Gene Ontology combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures. One is a single model quality assessment method, the other is its modification, which provides a relative measure of model quality. Exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. The validation shows that the method is able to discriminate between good and bad model-structures. The best of tested GOBA scores achieved 0.74 and 0.8 as a mean Pearson correlation to the observed quality of models in our CASP8 and CASP9-based validation sets. GOBA also obtained the best result for two targets of CASP8, and

  1. HEDR modeling approach

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1992-07-01

    This report details the conceptual approaches to be used in calculating radiation doses to individuals throughout the various periods of operations at the Hanford Site. The report considers the major environmental transport pathways--atmospheric, surface water, and ground water--and projects an appropriate modeling technique for each. The modeling sequence chosen for each pathway depends on the available data on doses, the degree of confidence justified by such existing data, and the level of sophistication deemed appropriate for the particular pathway and time period being considered.

  2. Meeting your match: How attractiveness similarity affects approach behavior in mixed-sex dyads

    OpenAIRE

    van Straaten, I.; Engels, R.C.M.E.; Finkenauer, C.; Holland, R.W.

    2009-01-01

    This experimental study investigated approach behavior toward opposite-sex others of similar versus dissimilar physical attractiveness. Furthermore, it tested the moderating effects of sex. Single participants interacted with confederates of high and low attractiveness. Observers rated their behavior in terms of relational investment (i.e., behavioral efforts related to the improvement of interaction fluency, communication of positive interpersonal affect, and positive self-presentation). As ...

  3. Meeting your match: how attractiveness similarity affects approach behavior in mixed-sex dyads.

    Science.gov (United States)

    van Straaten, Ischa; Engels, Rutger C M E; Finkenauer, Catrin; Holland, Rob W

    2009-06-01

    This experimental study investigated approach behavior toward opposite-sex others of similar versus dissimilar physical attractiveness. Furthermore, it tested the moderating effects of sex. Single participants interacted with confederates of high and low attractiveness. Observers rated their behavior in terms of relational investment (i.e., behavioral efforts related to the improvement of interaction fluency, communication of positive interpersonal affect, and positive self-presentation). As expected, men displayed more relational investment behavior if their own physical attractiveness was similar to that of the confederate. For women, no effects of attractiveness similarity on relational investment behavior were found. Results are discussed in the light of positive assortative mating, preferences for physically attractive mates, and sex differences in attraction-related interpersonal behaviors.

  4. MAPPING THE SIMILARITIES OF SPECTRA: GLOBAL AND LOCALLY-BIASED APPROACHES TO SDSS GALAXIES

    Energy Technology Data Exchange (ETDEWEB)

    Lawlor, David [Statistical and Applied Mathematical Sciences Institute (United States); Budavári, Tamás [Dept. of Applied Mathematics and Statistics, The Johns Hopkins University (United States); Mahoney, Michael W. [International Computer Science Institute (United States)

    2016-12-10

    We present a novel approach to studying the diversity of galaxies. It is based on a novel spectral graph technique, that of locally-biased semi-supervised eigenvectors . Our method introduces new coordinates that summarize an entire spectrum, similar to but going well beyond the widely used Principal Component Analysis (PCA). Unlike PCA, however, this technique does not assume that the Euclidean distance between galaxy spectra is a good global measure of similarity. Instead, we relax that condition to only the most similar spectra, and we show that doing so yields more reliable results for many astronomical questions of interest. The global variant of our approach can identify very finely numerous astronomical phenomena of interest. The locally-biased variants of our basic approach enable us to explore subtle trends around a set of chosen objects. The power of the method is demonstrated in the Sloan Digital Sky Survey Main Galaxy Sample, by illustrating that the derived spectral coordinates carry an unprecedented amount of information.

  5. Mapping the Similarities of Spectra: Global and Locally-biased Approaches to SDSS Galaxies

    Science.gov (United States)

    Lawlor, David; Budavári, Tamás; Mahoney, Michael W.

    2016-12-01

    We present a novel approach to studying the diversity of galaxies. It is based on a novel spectral graph technique, that of locally-biased semi-supervised eigenvectors. Our method introduces new coordinates that summarize an entire spectrum, similar to but going well beyond the widely used Principal Component Analysis (PCA). Unlike PCA, however, this technique does not assume that the Euclidean distance between galaxy spectra is a good global measure of similarity. Instead, we relax that condition to only the most similar spectra, and we show that doing so yields more reliable results for many astronomical questions of interest. The global variant of our approach can identify very finely numerous astronomical phenomena of interest. The locally-biased variants of our basic approach enable us to explore subtle trends around a set of chosen objects. The power of the method is demonstrated in the Sloan Digital Sky Survey Main Galaxy Sample, by illustrating that the derived spectral coordinates carry an unprecedented amount of information.

  6. MAPPING THE SIMILARITIES OF SPECTRA: GLOBAL AND LOCALLY-BIASED APPROACHES TO SDSS GALAXIES

    International Nuclear Information System (INIS)

    Lawlor, David; Budavári, Tamás; Mahoney, Michael W.

    2016-01-01

    We present a novel approach to studying the diversity of galaxies. It is based on a novel spectral graph technique, that of locally-biased semi-supervised eigenvectors . Our method introduces new coordinates that summarize an entire spectrum, similar to but going well beyond the widely used Principal Component Analysis (PCA). Unlike PCA, however, this technique does not assume that the Euclidean distance between galaxy spectra is a good global measure of similarity. Instead, we relax that condition to only the most similar spectra, and we show that doing so yields more reliable results for many astronomical questions of interest. The global variant of our approach can identify very finely numerous astronomical phenomena of interest. The locally-biased variants of our basic approach enable us to explore subtle trends around a set of chosen objects. The power of the method is demonstrated in the Sloan Digital Sky Survey Main Galaxy Sample, by illustrating that the derived spectral coordinates carry an unprecedented amount of information.

  7. An Efficient Similarity Digests Database Lookup - A Logarithmic Divide & Conquer Approach

    Directory of Open Access Journals (Sweden)

    Frank Breitinger

    2014-09-01

    Full Text Available Investigating seized devices within digital forensics represents a challenging task due to the increasing amount of data. Common procedures utilize automated file identification, which reduces the amount of data an investigator has to examine manually. In the past years the research field of approximate matching has arisen to detect similar data. However, if n denotes the number of similarity digests in a database, then the lookup for a single similarity digest is of complexity O(n). This paper presents a concept to extend existing approximate matching algorithms, which reduces the lookup complexity from O(n) to O(log(n)). Our proposed approach is based on the well-known divide and conquer paradigm and builds a Bloom filter-based tree data structure in order to enable an efficient lookup of similarity digests. Further, it is demonstrated that the presented technique is highly scalable, operating a trade-off between storage requirements and computational efficiency. We perform a theoretical assessment based on recently published results and reasonable magnitudes of input data, and show that the complexity reduction achieved by the proposed technique yields a 220-fold acceleration of look-up costs.
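
    A minimal sketch of the divide-and-conquer idea (not the paper's exact construction, filter sizing, or digest format): digests are stored at the leaves of a binary tree, each internal node holds a Bloom filter over all digests in its subtree, and a lookup descends only into children whose filter reports a possible match, so a present digest is typically found in O(log n) node visits. The toy Bloom filter below derives two hash positions from MD5.

```python
import hashlib

class BloomFilter:
    def __init__(self, m=4096, k=2):
        self.m, self.k, self.bits = m, k, 0     # m-bit filter stored as one integer
    def _positions(self, item):
        d = hashlib.md5(item.encode()).digest()
        for i in range(self.k):
            yield int.from_bytes(d[8 * i:8 * i + 8], "big") % self.m
    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p
    def might_contain(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

class Node:
    """Binary tree node whose Bloom filter summarizes every digest in its subtree."""
    def __init__(self, digests):
        self.filter = BloomFilter()
        for d in digests:
            self.filter.add(d)
        if len(digests) == 1:
            self.digest, self.left, self.right = digests[0], None, None
        else:
            mid = len(digests) // 2
            self.digest = None
            self.left, self.right = Node(digests[:mid]), Node(digests[mid:])

def lookup(node, digest):
    """Descend only into subtrees whose Bloom filter reports a possible match."""
    if not node.filter.might_contain(digest):
        return False
    if node.digest is not None:
        return node.digest == digest            # leaf: exact check removes false positives
    return lookup(node.left, digest) or lookup(node.right, digest)

digests = [hashlib.md5(f"file-{i}".encode()).hexdigest() for i in range(1024)]
root = Node(digests)
print(lookup(root, digests[123]), lookup(root, "not-a-digest"))
```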

  8. A Similarity-Based Approach for Audiovisual Document Classification Using Temporal Relation Analysis

    Directory of Open Access Journals (Sweden)

    Ferrane Isabelle

    2011-01-01

    Full Text Available Abstract We propose a novel approach for video classification that is based on the analysis of the temporal relationships between the basic events in audiovisual documents. Starting from basic segmentation results, we define a new representation method called the Temporal Relation Matrix (TRM). Each document is then described by a set of TRMs, the analysis of which makes higher-level events stand out. This representation was first designed to analyze any audiovisual document in order to find events that may well characterize its content and its structure. The aim of this work is to use this representation to compute a similarity measure between two documents. Approaches for audiovisual document classification are presented and discussed. Experiments are conducted on a set of 242 video documents, and the results show the efficiency of our proposals.

  9. Approaches to long-term conditions management and care for older people: similarities or differences?

    Science.gov (United States)

    Tullett, Michael; Neno, Rebecca

    2008-03-01

    In the past few years, there has been an increased emphasis both on the care for older people and the management of long-term conditions within the United Kingdom. Currently, the Department of Health and the Scottish Executive identify and manage these two areas as separate entities. The aim of this article is to examine the current approaches to both of these areas of care and identify commonalities and articulate differences. The population across the world and particularly within the United Kingdom is ageing at an unprecedented rate. The numbers suffering long-term illness conditions has also risen sharply in recent years. As such, nurses need to be engaged at a strategic level in the design of robust and appropriate services for this increasing population group. A comprehensive literature review on long-term conditions and the care of older people was undertaken in an attempt to identify commonalities and differences in strategic and organizational approaches. A policy analysis was conducted to support the paper and establish links that may inform local service development. Proposing service development based on identified needs rather than organizational boundaries after the establishment of clear links between health and social care for those with long-term conditions and the ageing population. Nurse Managers need to be aware of the similarities and differences in political and theoretical approaches to the care for older people and the management of long-term conditions. By adopting this view, creativity in the service redesign and service provision can be fostered and nurtured as well as achieving a renewed focus on partnership working across organizational boundaries. With the current renewed political focus on health and social care, there is an opportunity in the UK to redefine the structure of care. This paper proposes similarities between caring for older people and for those with long-term conditions, and it is proposed these encapsulate the wider

  10. Training of Tonal Similarity Ratings in Non-Musicians: A “Rapid Learning” Approach

    Science.gov (United States)

    Oechslin, Mathias S.; Läge, Damian; Vitouch, Oliver

    2012-01-01

    Although cognitive music psychology has a long tradition of expert–novice comparisons, experimental training studies are rare. Studies on the learning progress of trained novices in hearing harmonic relationships are still largely lacking. This paper presents a simple training concept using the example of tone/triad similarity ratings, demonstrating the gradual progress of non-musicians compared to musical experts: In a feedback-based “rapid learning” paradigm, participants had to decide for single tones and chords whether paired sounds matched each other well. Before and after the training sessions, they provided similarity judgments for a complete set of sound pairs. From these similarity matrices, individual relational sound maps, intended to display mental representations, were calculated by means of non-metric multidimensional scaling (NMDS), and were compared to an expert model through procrustean transformation. Approximately half of the novices showed substantial learning success, with some participants even reaching the level of professional musicians. Results speak for a fundamental ability to quickly train an understanding of harmony, show inter-individual differences in learning success, and demonstrate the suitability of the scaling method used for learning research in music and other domains. Results are discussed in the context of the “giftedness” debate. PMID:22629252

  11. Training of tonal similarity ratings in non-musicians: a "rapid learning" approach.

    Science.gov (United States)

    Oechslin, Mathias S; Läge, Damian; Vitouch, Oliver

    2012-01-01

    Although cognitive music psychology has a long tradition of expert-novice comparisons, experimental training studies are rare. Studies on the learning progress of trained novices in hearing harmonic relationships are still largely lacking. This paper presents a simple training concept using the example of tone/triad similarity ratings, demonstrating the gradual progress of non-musicians compared to musical experts: In a feedback-based "rapid learning" paradigm, participants had to decide for single tones and chords whether paired sounds matched each other well. Before and after the training sessions, they provided similarity judgments for a complete set of sound pairs. From these similarity matrices, individual relational sound maps, intended to display mental representations, were calculated by means of non-metric multidimensional scaling (NMDS), and were compared to an expert model through procrustean transformation. Approximately half of the novices showed substantial learning success, with some participants even reaching the level of professional musicians. Results speak for a fundamental ability to quickly train an understanding of harmony, show inter-individual differences in learning success, and demonstrate the suitability of the scaling method used for learning research in music and other domains. Results are discussed in the context of the "giftedness" debate.

  12. Training of tonal similarity ratings in non-musicians: a rapid learning approach

    Directory of Open Access Journals (Sweden)

    Mathias S Oechslin

    2012-05-01

    Full Text Available Although music psychology has a long tradition of expert-novice comparisons, experimental training studies are rare. Studies on the learning progress of trained novices in hearing harmonic relationships are still largely lacking. This paper presents a simple training concept using the example of tone/triad similarity ratings, demonstrating the gradual progress of non-musicians compared to musical experts: In a feedback-based rapid learning paradigm, participants had to decide for single tones and chords whether paired sounds matched each other well. Before and after the training sessions, they provided similarity judgments for a complete set of sound pairs. From these similarity matrices, individual relational sound maps, aiming to map the mental representations, were calculated by means of non-metric multidimensional scaling (NMDS), which were compared to an expert model through procrustean transformation. Approximately half of the novices showed substantial learning success, with some participants even reaching the level of professional musicians. Results speak for a fundamental ability to quickly train an understanding of harmony, show inter-individual differences in learning success, and demonstrate the suitability of the scaling method used for music psychological research. Results are discussed in the context of the giftedness debate.
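
    The analysis pipeline described in these three records (similarity matrix → non-metric MDS map → Procrustes comparison with an expert map) can be sketched as follows, assuming scikit-learn and SciPy are available; the dissimilarity values below are random placeholders rather than actual ratings.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial import procrustes

rng = np.random.default_rng(3)
n_sounds = 12                                    # e.g. tones and triads rated pairwise

def random_dissimilarity(n):
    d = rng.random((n, n))
    d = (d + d.T) / 2.0                          # symmetric
    np.fill_diagonal(d, 0.0)                     # zero self-dissimilarity
    return d

novice_d = random_dissimilarity(n_sounds)        # placeholder for rating-derived dissimilarities
expert_d = novice_d + 0.1 * random_dissimilarity(n_sounds)   # slightly different "expert" matrix

nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
novice_map = nmds.fit_transform(novice_d)        # relational sound map of the novice
expert_map = nmds.fit_transform(expert_d)        # relational sound map of the expert model

# Procrustes removes translation/rotation/scaling before comparing the two maps
_, _, disparity = procrustes(expert_map, novice_map)
print(f"Procrustes disparity (lower = closer to the expert map): {disparity:.3f}")
```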

  13. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation

    Science.gov (United States)

    Quan, Lulin; Yang, Zhixin

    2010-05-01

    To address issues in the area of design customization, this paper describes the specification and application of constrained surface deformation and reports an experimental performance comparison of three prevalent, effective similarity assessment algorithms in the constrained surface deformation domain. Constrained surface deformation has become a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design via measuring the difference level between the deformed new design and the initial sample model, and indicating whether the difference level is within the limitation. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape-histogram-based method, the skeleton-based method, and the U-system-moment-based method. We analyze their basic functions and implementation methodologies in detail, and conduct a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as an industrial example for the experiments. The results show that the shape-histogram-based method achieved the best performance in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, which emphasizes the role of constraints during the assessment. Limited initial experimental results demonstrate that our algorithm outperforms the other three algorithms. A clear direction for future development is also drawn at the end of the paper.

  14. Hierarchical modeling of systems with similar components: A framework for adaptive monitoring and control

    International Nuclear Information System (INIS)

    Memarzadeh, Milad; Pozzi, Matteo; Kolter, J. Zico

    2016-01-01

    System management includes the selection of maintenance actions depending on the available observations: when a system is made up by components known to be similar, data collected on one is also relevant for the management of others. This is typically the case of wind farms, which are made up by similar turbines. Optimal management of wind farms is an important task due to high cost of turbines' operation and maintenance: in this context, we recently proposed a method for planning and learning at system-level, called PLUS, built upon the Partially Observable Markov Decision Process (POMDP) framework, which treats transition and emission probabilities as random variables, and is therefore suitable for including model uncertainty. PLUS models the components as independent or identical. In this paper, we extend that formulation, allowing for a weaker similarity among components. The proposed approach, called Multiple Uncertain POMDP (MU-POMDP), models the components as POMDPs, and assumes the corresponding parameters as dependent random variables. Through this framework, we can calibrate specific degradation and emission models for each component while, at the same time, process observations at system-level. We compare the performance of the proposed MU-POMDP with PLUS, and discuss its potential and computational complexity. - Highlights: • A computational framework is proposed for adaptive monitoring and control. • It adopts a scheme based on Markov Chain Monte Carlo for inference and learning. • Hierarchical Bayesian modeling is used to allow a system-level flow of information. • Results show potential of significant savings in management of wind farms.

  15. How similar are nut-cracking and stone-flaking? A functional approach to percussive technology.

    Science.gov (United States)

    Bril, Blandine; Parry, Ross; Dietrich, Gilles

    2015-11-19

    Various authors have suggested similarities between tool use in early hominins and chimpanzees. This has been particularly evident in studies of nut-cracking which is considered to be the most complex skill exhibited by wild apes, and has also been interpreted as a precursor of more complex stone-flaking abilities. It has been argued that there is no major qualitative difference between what the chimpanzee does when he cracks a nut and what early hominins did when they detached a flake from a core. In this paper, similarities and differences between skills involved in stone-flaking and nut-cracking are explored through an experimental protocol with human subjects performing both tasks. We suggest that a 'functional' approach to percussive action, based on the distinction between functional parameters that characterize each task and parameters that characterize the agent's actions and movements, is a fruitful method for understanding those constraints which need to be mastered to perform each task successfully, and subsequently, the nature of skill involved in both tasks. © 2015 The Author(s).

  16. A Multi-Model Stereo Similarity Function Based on Monogenic Signal Analysis in Poisson Scale Space

    Directory of Open Access Journals (Sweden)

    Jinjun Li

    2011-01-01

    Full Text Available A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD) is proposed to match interest points and estimate the disparity map for stereo images. Local multi-model monogenic image features include the local orientation and instantaneous phase of the gray monogenic signal, the local color phase of the color monogenic signal, and local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which is the extension of the analytic signal to gray level images using the Dirac operator and Laplace equation, consists of the local amplitude, local orientation, and instantaneous phase of a 2D image signal. The color monogenic signal is the extension of the monogenic signal to color images based on Clifford algebras. The local color phase can be estimated by computing the geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experiment results on synthetic and natural stereo images show the performance of the proposed approach.

  17. Model-observer similarity, error modeling and social learning in rhesus macaques.

    Directory of Open Access Journals (Sweden)

    Elisabetta Monfardini

    Full Text Available Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended on the contrary to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, as preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.

  18. Uncovering highly obfuscated plagiarism cases using fuzzy semantic-based similarity model

    Directory of Open Access Journals (Sweden)

    Salha M. Alzahrani

    2015-07-01

    Full Text Available Highly obfuscated plagiarism cases contain unseen and obfuscated texts, which pose difficulties when using existing plagiarism detection methods. A fuzzy semantic-based similarity model for uncovering obfuscated plagiarism is presented and compared with five state-of-the-art baselines. Semantic relatedness between words is studied based on the part-of-speech (POS tags and WordNet-based similarity measures. Fuzzy-based rules are introduced to assess the semantic distance between source and suspicious texts of short lengths, which implement the semantic relatedness between words as a membership function to a fuzzy set. In order to minimize the number of false positives and false negatives, a learning method that combines a permission threshold and a variation threshold is used to decide true plagiarism cases. The proposed model and the baselines are evaluated on 99,033 ground-truth annotated cases extracted from different datasets, including 11,621 (11.7% handmade paraphrases, 54,815 (55.4% artificial plagiarism cases, and 32,578 (32.9% plagiarism-free cases. We conduct extensive experimental verifications, including the study of the effects of different segmentations schemes and parameter settings. Results are assessed using precision, recall, F-measure and granularity on stratified 10-fold cross-validation data. The statistical analysis using paired t-tests shows that the proposed approach is statistically significant in comparison with the baselines, which demonstrates the competence of fuzzy semantic-based model to detect plagiarism cases beyond the literal plagiarism. Additionally, the analysis of variance (ANOVA statistical test shows the effectiveness of different segmentation schemes used with the proposed approach.
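
The two ingredients named above, WordNet-based relatedness between words and a fuzzy membership derived from it, can be sketched with nltk as follows. The membership shape and its breakpoints are illustrative assumptions, not the paper's calibrated rules.

```python
# Sketch: WordNet-based word relatedness mapped to a fuzzy membership value.
# Requires the WordNet corpus: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def word_relatedness(w1, w2):
    """Maximum Wu-Palmer similarity over the noun senses of the two words."""
    scores = [s1.wup_similarity(s2) or 0.0
              for s1 in wn.synsets(w1, pos=wn.NOUN)
              for s2 in wn.synsets(w2, pos=wn.NOUN)]
    return max(scores, default=0.0)

def fuzzy_membership(score, low=0.4, high=0.8):
    """Piecewise-linear membership in the fuzzy set 'semantically related'
    (assumed shape; the paper learns its decision thresholds from data)."""
    if score <= low:
        return 0.0
    if score >= high:
        return 1.0
    return (score - low) / (high - low)

print(fuzzy_membership(word_relatedness("car", "automobile")))
```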

  19. Matrix approach to the Shapley value and dual similar associated consistency

    NARCIS (Netherlands)

    Xu, G.; Driessen, Theo

    Replacing associated consistency in Hamiache's axiom system by dual similar associated consistency, we axiomatize the Shapley value as the unique value verifying the inessential game property, continuity and dual similar associated consistency. Continuing the matrix analysis for Hamiache's
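
For reference, the value being axiomatized here is the classical Shapley value, which in standard notation reads:

```latex
\varphi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
\frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,
\bigl(v(S \cup \{i\}) - v(S)\bigr)
```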

  20. Similarity transformed coupled cluster response (ST-CCR) theory--a time-dependent similarity transformed equation-of-motion coupled cluster (STEOM-CC) approach.

    Science.gov (United States)

    Landau, Arie

    2013-07-07

    This paper presents a new method for calculating spectroscopic properties in the framework of response theory utilizing a sequence of similarity transformations (STs). The STs are performed using the coupled cluster (CC) and Fock-space coupled cluster operators. The linear and quadratic response functions of the new similarity transformed CC response (ST-CCR) method are derived. The poles of the linear response yield excitation-energy (EE) expressions identical to the ones in the similarity transformed equation-of-motion coupled cluster (STEOM-CC) approach. ST-CCR and STEOM-CC complement each other, in analogy to the complementarity of CC response (CCR) and equation-of-motion coupled cluster (EOM-CC). ST-CCR/STEOM-CC and CCR/EOM-CC yield size-extensive and size-intensive EEs, respectively. Other electronic properties, e.g., transition dipole strengths, are also size-extensive within ST-CCR, in contrast to STEOM-CC. Moreover, analysis suggests that in comparison with CCR, the ST-CCR expressions may be confined to a smaller subspace; however, the precise scope of the truncation can only be determined numerically. In addition, reformulation of the time-independent STEOM-CC using the same parameterization as in ST-CCR, as well as an efficient truncation scheme, is presented. The shown convergence of the time-dependent and time-independent expressions displays the completeness of the presented formalism.
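
For orientation, the "sequence of similarity transformations" follows the usual coupled-cluster and Fock-space pattern, shown here schematically in standard notation (the paper's specific parameterization and truncation scheme are not reproduced):

```latex
\bar{H} \;=\; e^{-T} H \, e^{T},
\qquad
\hat{H} \;=\; \{e^{S}\}^{-1}\, \bar{H}\, \{e^{S}\}
```

with the linear and quadratic response functions then constructed from the doubly transformed Hamiltonian.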

  1. Similarity Assessment of Land Surface Model Outputs in the North American Land Data Assimilation System

    Science.gov (United States)

    Kumar, Sujay V.; Wang, Shugong; Mocko, David M.; Peters-Lidard, Christa D.; Xia, Youlong

    2017-11-01

    Multimodel ensembles are often used to produce ensemble mean estimates that tend to have increased simulation skill over any individual model output. If multimodel outputs are too similar, an individual LSM would add little additional information to the multimodel ensemble, whereas if the models are too dissimilar, it may be indicative of systematic errors in their formulations or configurations. The article presents a formal similarity assessment of the North American Land Data Assimilation System (NLDAS) multimodel ensemble outputs to assess their utility to the ensemble, using a confirmatory factor analysis. Outputs from four NLDAS Phase 2 models currently running in operations at NOAA/NCEP and four new/upgraded models that are under consideration for the next phase of NLDAS are employed in this study. The results show that the runoff estimates from the LSMs were most dissimilar whereas the models showed greater similarity for root zone soil moisture, snow water equivalent, and terrestrial water storage. Generally, the NLDAS operational models showed weaker association with the common factor of the ensemble and the newer versions of the LSMs showed stronger association with the common factor, with the model similarity increasing at longer time scales. Trade-offs between the similarity metrics and accuracy measures indicated that the NLDAS operational models demonstrate a larger span in the similarity-accuracy space compared to the new LSMs. The results of the article indicate that simultaneous consideration of model similarity and accuracy at the relevant time scales is necessary in the development of multimodel ensemble.

  2. A new k-epsilon model consistent with Monin-Obukhov similarity theory

    DEFF Research Database (Denmark)

    van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.

    2017-01-01

    A new k-ε model is introduced that is consistent with Monin–Obukhov similarity theory (MOST). The proposed k-ε model is compared with another k-ε model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-ε model is not consistent with ...
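
For context, the MOST surface-layer profiles that such inlet conditions are required to sustain take the standard textbook form, given here for orientation and not taken from the paper (u_* is the friction velocity, κ the von Karman constant, L the Obukhov length, and ψ_m, φ_ε the usual stability functions):

```latex
U(z) \;=\; \frac{u_*}{\kappa}\left[\ln\frac{z}{z_0} \;-\; \psi_m\!\left(\frac{z}{L}\right)\right],
\qquad
\varepsilon(z) \;=\; \frac{u_*^3}{\kappa z}\,\phi_\varepsilon\!\left(\frac{z}{L}\right)
```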

  3. Material Modelling - Composite Approach

    DEFF Research Database (Denmark)

    Nielsen, Lauge Fuglsang

    1997-01-01

    is successfully justified comparing predicted results with experimental data obtained in the HETEK-project on creep, relaxation, and shrinkage of very young concretes cured at a temperature of T = 20^o C and a relative humidity of RH = 100%. The model is also justified comparing predicted creep, shrinkage......, and internal stresses caused by drying shrinkage with experimental results reported in the literature on the mechanical behavior of mature concretes. It is then concluded that the model presented applied in general with respect to age at loading.From a stress analysis point of view the most important finding...... in this report is that cement paste and concrete behave practically as linear-viscoelastic materials from an age of approximately 10 hours. This is a significant age extension relative to earlier studies in the literature where linear-viscoelastic behavior is only demonstrated from ages of a few days. Thus...

  4. Clarkson-Kruskal Direct Similarity Approach for Differential-Difference Equations

    Institute of Scientific and Technical Information of China (English)

    SHEN Shou-Feng

    2005-01-01

    In this letter, the Clarkson-Kruskal direct method is extended to similarity reduce some differential-difference equations. As examples, the differential-difference KZ equation and KP equation are considered.

  5. Towards predictive resistance models for agrochemicals by combining chemical and protein similarity via proteochemometric modelling.

    Science.gov (United States)

    van Westen, Gerard J P; Bender, Andreas; Overington, John P

    2014-10-01

    Resistance to pesticides is an increasing problem in agriculture. Despite practices such as phased use and cycling of 'orthogonally resistant' agents, resistance remains a major risk to national and global food security. To combat this problem, there is a need for both new approaches for pesticide design, as well as for novel chemical entities themselves. As summarized in this opinion article, a technique termed 'proteochemometric modelling' (PCM), from the field of chemoinformatics, could aid in the quantification and prediction of resistance that acts via point mutations in the target proteins of an agent. The technique combines information from both the chemical and biological domain to generate bioactivity models across large numbers of ligands as well as protein targets. PCM has previously been validated in prospective, experimental work in the medicinal chemistry area, and it draws on the growing amount of bioactivity information available in the public domain. Here, two potential applications of proteochemometric modelling to agrochemical data are described, based on previously published examples from the medicinal chemistry literature.

  6. A new modelling approach for zooplankton behaviour

    Science.gov (United States)

    Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.

    We have developed a new simulation technique to model zooplankton behaviour. The approach utilizes neither the conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, which is similar to BEER [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non- "intelligent" models—random walk and correlated walk models—as well as observed behaviour in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to live copepods.

  7. A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2016-06-01

    Full Text Available This paper proposes a fast method for measuring the partial Similarity between 3D Model and 3D point Cloud (SimMC. It is crucial to measure SimMC for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of model and the Distance from Model to point Cloud (DistMC are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted distance of the distances between points sampled from model and point cloud. Similarly, Distance from point Cloud to Model (DistCM is defined as the average distance of the distances between points in point cloud and model. In order to reduce huge computational burdens brought by calculation of DistCM in some traditional methods, we define SimMC as the ratio of weighted surface area of model to DistMC. Compared to those traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing distance-weighted strategy. Moreover, our method is able to be faster than other partial similarity assessment methods. We demonstrate the superiority of our method both on synthetic data and laser scanning data.
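
A minimal sketch of the distance measurements described above is given below, using a k-d tree for the nearest-neighbour queries. Plain averages stand in for the paper's weighted versions, and the surface sampling of the model is assumed to be given.

```python
# Sketch of DistCM, DistMC and the SimMC ratio described above (unweighted
# stand-ins for the paper's weighted quantities).
import numpy as np
from scipy.spatial import cKDTree

def dist_cloud_to_model(cloud_pts, model_pts):
    """DistCM: average distance from each cloud point to the sampled model surface."""
    d, _ = cKDTree(model_pts).query(cloud_pts)
    return d.mean()

def dist_model_to_cloud(model_pts, cloud_pts):
    """DistMC: average distance from each model-sampled point to the point cloud."""
    d, _ = cKDTree(cloud_pts).query(model_pts)
    return d.mean()

def sim_mc(model_area, model_pts, cloud_pts):
    """SimMC as the ratio of model surface area to DistMC."""
    return model_area / max(dist_model_to_cloud(model_pts, cloud_pts), 1e-12)

rng = np.random.default_rng(1)
cloud = rng.normal(size=(1000, 3))      # scanned points (toy data)
model = rng.normal(size=(500, 3))       # points sampled from the model surface
print(sim_mc(10.0, model, cloud))
```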

  8. Advanced Models and Algorithms for Self-Similar IP Network Traffic Simulation and Performance Analysis

    Science.gov (United States)

    Radev, Dimitar; Lokshina, Izabella

    2010-11-01

    The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequential generators and fixed-length sequence generators, and efficient algorithms that are used to simulate self-similar behavior of IP network traffic data, are developed and applied. Numerical examples are provided, and simulation results are obtained and analyzed.

  9. Idiosyncratic versus social consensus approaches to personality : Self-view, perceived, and peer-view similarity

    NARCIS (Netherlands)

    van Zalk, M.H.W.; Denissen, J.J.A.

    2015-01-01

    In the current studies, the authors examined how peers influence friendship choices through individuals' perceptions of similarity between their own and others' Big Five traits. Self-reported and peer-reported data were gathered from 3 independent samples using longitudinal round-robin designs.

  10. Similarities between several major extinctions and preservation of life - biomolecules to geomolecules: an interdisciplinary approach

    Science.gov (United States)

    Grice, Kliti; Melendez, Ines; Tulipani, Svenja

    2015-04-01

    Photic zone euxinia in ancient seas has proven significant for elucidating biogeochemical changes that occurred during three of the five Phanerozoic mass extinctions, viz. the Permian/Triassic [1], Triassic/Jurassic [2] and Late Givetian (Devonian) [3] events, including the conditions associated with exceptional fossil preservation [4,5]. The series of events preceding, during and post the Triassic/Jurassic event is remarkably similar to that reported for the Permian/Triassic extinction, the largest of the Phanerozoic Era. For the Late Givetian event, the first forests evolved and reef-building communities and associated fauna in tropical, marine settings were largely affected [6]. Sedimentary rocks on the margins of the Devonian reef slope in the Canning Basin, WA, contain novel biomarker, isotopic and palynological evidence for the existence of a persistently stratified water-column (comprising a freshwater lens overlying a more saline hypolimnion), with prevailing anoxia and PZE [7]. Also from the Canning Basin, the exceptional preservation of a suite of biomarkers in a Devonian invertebrate fossil within a carbonate concretion supports rapid encasement of the crustacean (identified by % of C27 steroids) enhanced by sulfate reducing bacteria under PZE conditions. PZE plays a critical role in fossil (including soft tissue) and biomarker preservation. In the same sample, the oldest occurrence of intact sterols shows that they have been preserved for ca. 380 Ma [5]. The exceptional preservation of this biomass is attributed to microbially induced carbonate encapsulation, preventing full decomposition and transformation, thus extending the record of sterol occurrences in the geosphere by 250 Ma. A suite of ca. 50 diagenetic transformation products of sterols is also reported, showing the unique

  11. Virtual screening by a new Clustering-based Weighted Similarity Extreme Learning Machine approach.

    Science.gov (United States)

    Pasupa, Kitsuchart; Kudisthalert, Wasu

    2018-01-01

    Machine learning techniques are becoming popular in virtual screening tasks. One of the powerful machine learning algorithms is the Extreme Learning Machine (ELM), which has been applied to many applications and has recently been applied to virtual screening. We propose the Weighted Similarity ELM (WS-ELM), which is based on a single-layer feed-forward neural network used in conjunction with 16 different similarity coefficients as activation functions in the hidden layer. It is known that the performance of conventional ELM is not robust due to random weight selection in the hidden layer. Thus, we propose a Clustering-based WS-ELM (CWS-ELM) that deterministically assigns weights by utilising clustering algorithms, i.e. k-means clustering and support vector clustering. The experiments were conducted on one of the most challenging datasets, the Maximum Unbiased Validation Dataset, which contains 17 activity classes carefully selected from PubChem. The proposed algorithms were then compared with other machine learning techniques such as support vector machine, random forest, and similarity searching. The results show that CWS-ELM in conjunction with support vector clustering yields the best performance when utilised together with the Sokal/Sneath(1) coefficient. Furthermore, the ECFP_6 fingerprint presents the best results in our framework compared to the other types of fingerprints, namely ECFP_4, FCFP_4, and FCFP_6.
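
As background, the conventional ELM that WS-ELM and CWS-ELM build on can be sketched in a few lines of numpy. The random hidden layer and least-squares output weights shown here are the standard baseline; the similarity-coefficient activations and cluster-derived weights described above are the paper's modifications and are not reproduced.

```python
# Minimal sketch of a conventional Extreme Learning Machine (baseline only).
import numpy as np

def elm_train(X, Y, n_hidden=200, rng=np.random.default_rng(0)):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                  # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage on random binary fingerprints (invented data).
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(100, 64)).astype(float)
y = rng.integers(0, 2, size=(100, 1)).astype(float)
W, b, beta = elm_train(X, y)
print(elm_predict(X[:5], W, b, beta).ravel())
```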

  12. Integration of Phenotypic Metadata and Protein Similarity in Archaea Using a Spectral Bipartitioning Approach

    Energy Technology Data Exchange (ETDEWEB)

    Hooper, Sean D.; Anderson, Iain J; Pati, Amrita; Dalevi, Daniel; Mavromatis, Konstantinos; Kyrpides, Nikos C

    2009-01-01

    In order to simplify and meaningfully categorize large sets of protein sequence data, it is commonplace to cluster proteins based on the similarity of those sequences. However, it quickly becomes clear that the sequence flexibility allowed a given protein varies significantly among different protein families. The degree to which sequences are conserved not only differs for each protein family, but also is affected by the phylogenetic divergence of the source organisms. Clustering techniques that use similarity thresholds for protein families do not always allow for these variations and thus cannot be confidently used for applications such as automated annotation and phylogenetic profiling. In this work, we applied a spectral bipartitioning technique to all proteins from 53 archaeal genomes. Comparisons between different taxonomic levels allowed us to study the effects of phylogenetic distances on cluster structure. Likewise, by associating functional annotations and phenotypic metadata with each protein, we could compare our protein similarity clusters with both protein function and associated phenotype. Our clusters can be analyzed graphically and interactively online.

  13. Dynamics based alignment of proteins: an alternative approach to quantify dynamic similarity

    Directory of Open Access Journals (Sweden)

    Lyngsø Rune

    2010-04-01

    Full Text Available Abstract Background The dynamic motions of many proteins are central to their function. It therefore follows that the dynamic requirements of a protein are evolutionary constrained. In order to assess and quantify this, one needs to compare the dynamic motions of different proteins. Comparing the dynamics of distinct proteins may also provide insight into how protein motions are modified by variations in sequence and, consequently, by structure. The optimal way of comparing complex molecular motions is, however, far from trivial. The majority of comparative molecular dynamics studies performed to date relied upon prior sequence or structural alignment to define which residues were equivalent in 3-dimensional space. Results Here we discuss an alternative methodology for comparative molecular dynamics that does not require any prior alignment information. We show it is possible to align proteins based solely on their dynamics and that we can use these dynamics-based alignments to quantify the dynamic similarity of proteins. Our method was tested on 10 representative members of the PDZ domain family. Conclusions As a result of creating pair-wise dynamics-based alignments of PDZ domains, we have found evolutionarily conserved patterns in their backbone dynamics. The dynamic similarity of PDZ domains is highly correlated with their structural similarity as calculated with Dali. However, significant differences in their dynamics can be detected indicating that sequence has a more refined role to play in protein dynamics than just dictating the overall fold. We suggest that the method should be generally applicable.

  14. The predictive model on the user reaction time using the information similarity

    International Nuclear Information System (INIS)

    Lee, Sung Jin; Heo, Gyun Young; Chang, Soon Heung

    2005-01-01

    Human performance is frequently degraded because people forget. Memory is one of brain processes that are important when trying to understand how people process information. Although a large number of studies have been made on the human performance, little is known about the similarity effect in human performance. The purpose of this paper is to propose and validate the quantitative and predictive model on the human response time in the user interface with the concept of similarity. However, it is not easy to explain the human performance with only similarity or information amount. We are confronted by two difficulties: making the quantitative model on the human response time with the similarity and validating the proposed model by experimental work. We made the quantitative model based on the Hick's law and the law of practice. In addition, we validated the model with various experimental conditions by measuring participants' response time in the environment of computer-based display. Experimental results reveal that the human performance is improved by the user interface's similarity. We think that the proposed model is useful for the user interface design and evaluation phases
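
The two building blocks named above have standard forms, reproduced here for orientation (the paper's contribution is the similarity-dependent combination of them, which is not shown):

```latex
\text{Hick's law:}\quad T \;=\; a + b\,\log_2(n+1),
\qquad
\text{Law of practice:}\quad T_N \;=\; T_1\,N^{-\alpha}
```

where n is the number of equally probable alternatives and N is the number of practice trials.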

  15. Are Approaches to Learning in Kindergarten Associated with Academic and Social Competence Similarly?

    Science.gov (United States)

    Razza, Rachel A.; Martin, Anne; Brooks-Gunn, Jeanne

    2015-01-01

    Background: Approaches to learning (ATL) is a key domain of school readiness with important implications for children's academic trajectories. Interestingly, however, the impact of early ATL on children's social competence has not been examined. Objective: This study examines associations between children's ATL at age 5 and academic achievement…

  16. Spatio-Temporal Evaluation and Comparison of MM5 Model using Similarity Algorithm

    Directory of Open Access Journals (Sweden)

    N. Siabi

    2016-02-01

    Full Text Available Introduction: Temporal and spatial change of meteorological and environmental variables is very important. These changes can be predicted by numerical prediction models over time and in different locations and can be provided as spatial zoning maps with interpolation methods such as geostatistics (16, 6). But these maps are comparable to each other only visually, qualitatively and univariately, for a limited number of maps (15). To resolve this problem the similarity algorithm is used. This algorithm is a simultaneous comparison method for a large number of data (18). Numerical prediction models such as MM5 were used in different studies (10, 22, 23). But little research has been done to compare the spatio-temporal similarity of the models with real data quantitatively. The purpose of this paper is to integrate geostatistical techniques with the similarity algorithm to compare the spatial and temporal MM5 model predictions with real data. Materials and Methods: The study area is the northeast of Iran, spanning 55 to 61 degrees of longitude and 30 to 38 degrees of latitude. Monthly and annual actual temperature and precipitation data for the period of 1990-2010 were received from the Meteorological Agency and Department of Energy. MM5 model data, with a spatial resolution of 0.5 × 0.5 degree, were downloaded from the NASA website (5). GS+ and ArcGis software were used to produce each variable map. We used the multivariate methods co-kriging and kriging with an external drift, applying topography and height as a secondary variable via a Digital Elevation Model (6, 12, 14). Then the standardization and similarity algorithms (9, 11) were applied by programming in MATLAB software to each map grid point. The spatial and temporal similarities between data collections and model results were obtained by F values. These values are between 0 and 0.5, where a value below 0.2 indicates good similarity and above 0.5 shows very poor similarity. The results were plotted on maps by MATLAB.

  17. Molecular basis sets - a general similarity-based approach for representing chemical spaces.

    Science.gov (United States)

    Raghavendra, Akshay S; Maggiora, Gerald M

    2007-01-01

    A new method, based on generalized Fourier analysis, is described that utilizes the concept of "molecular basis sets" to represent chemical space within an abstract vector space. The basis vectors in this space are abstract molecular vectors. Inner products among the basis vectors are determined using an ansatz that associates molecular similarities between pairs of molecules with their corresponding inner products. Moreover, the fact that similarities between pairs of molecules are, in essentially all cases, nonzero implies that the abstract molecular basis vectors are nonorthogonal, but since the similarity of a molecule with itself is unity, the molecular vectors are normalized to unity. A symmetric orthogonalization procedure, which optimally preserves the character of the original set of molecular basis vectors, is used to construct appropriate orthonormal basis sets. Molecules can then be represented, in general, by sets of orthonormal "molecule-like" basis vectors within a proper Euclidean vector space. However, the dimension of the space can become quite large. Thus, the work presented here assesses the effect of basis set size on a number of properties, including the average squared error and average norm of molecular vectors represented in the space; the results clearly show the expected reduction in average squared error and increase in average norm as the basis set size is increased. Several distance-based statistics are also considered. These include the distribution of distances and their differences with respect to basis sets of differing size and several comparative distance measures such as Spearman rank correlation and Kruskal stress. All of the measures show that, even though the dimension can be high, the chemical spaces they represent, nonetheless, behave in a well-controlled and reasonable manner. Other abstract vector spaces analogous to that described here can also be constructed providing that the appropriate inner products can be directly
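
The symmetric orthogonalization step described above is presumably the Löwdin procedure applied to the similarity matrix treated as a Gram matrix of inner products; a minimal numpy sketch of that step, on an assumed toy similarity matrix, is:

```python
# Sketch: symmetric (Loewdin) orthogonalization of a molecular similarity matrix.
import numpy as np

def symmetric_orthogonalization(S):
    """Return X = S^(-1/2); its columns express orthonormal basis vectors in
    terms of the original, non-orthogonal molecular basis vectors."""
    w, V = np.linalg.eigh(S)              # S symmetric, positive definite
    return V @ np.diag(w ** -0.5) @ V.T

# Toy similarity matrix: unit self-similarity, nonzero pairwise similarities.
S = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])
X = symmetric_orthogonalization(S)
print(np.allclose(X @ S @ X, np.eye(3)))  # True: orthonormal in the S metric
```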

  18. A Multi-Objective Approach to Visualize Proportions and Similarities Between Individuals by Rectangular Maps

    DEFF Research Database (Denmark)

    Carrizosa, Emilio; Guerrero, Vanesa; Morales, Dolores Romero

    In this paper we address the problem of visualizing the proportions and the similarities attached to a set of individuals. We represent this information using a rectangular map, i.e., a subdivision of a rectangle into rectangular portions so that each portion is associated with one individual...... area and adjacency requirements, this visualization problem is formulated as a three-objective Mixed Integer Nonlinear Problem. The first objective seeks to maximize the number of true adjacencies that the rectangular map is able to reproduce, the second one is to minimize the number of false...

  19. Hydrological similarity approach and rainfall satellite utilization for mini hydro power dam basic design (case study on the ungauged catchment at West Borneo, Indonesia)

    Science.gov (United States)

    Prakoso, W. G.; Murtilaksono, K.; Tarigan, S. D.; Purwanto, Y. J.

    2018-05-01

    An approach to flow duration and flood design estimation for an ungauged catchment with no rainfall and discharge data availability has been developed with hydrological modelling, including a rainfall-runoff model driven by watershed characteristic datasets. Near-real-time rainfall data from multi-satellite platforms, e.g. TRMM, can be utilized for a regionalization approach on the ungauged catchment. A watershed hydrological similarity analysis was conducted, including all of the major watersheds in Borneo predicted to be similar to the Nanga Raun watershed. It was found that a satisfactory hydrological model calibration could be achieved using catchment-weighted time series of TRMM daily rainfall data, performed on a nearby catchment deemed to be sufficiently similar to the Nanga Raun catchment in hydrological terms. Based on this calibration, rainfall-runoff parameters were then transferred to a model. A relatively reliable flow duration curve and extreme discharge estimates were produced, subject to several limitations. Further work may be performed to deal with the primary limitations inherent in the hydrological and statistical analysis, especially to extend the availability of rainfall and climate data with novel approaches such as downscaling of global climate models.

  20. Models for discrete-time self-similar vector processes with application to network traffic

    Science.gov (United States)

    Lee, Seungsin; Rao, Raghuveer M.; Narasimha, Rajesh

    2003-07-01

    The paper defines self-similarity for vector processes by employing the discrete-time continuous-dilation operation which has successfully been used previously by the authors to define 1-D discrete-time stochastic self-similar processes. To define self-similarity of vector processes, it is required to consider the cross-correlation functions between different 1-D processes as well as the autocorrelation function of each constituent 1-D process in it. System models to synthesize self-similar vector processes are constructed based on the definition. With these systems, it is possible to generate self-similar vector processes from white noise inputs. An important aspect of the proposed models is that they can be used to synthesize various types of self-similar vector processes by choosing proper parameters. Additionally, the paper presents evidence of vector self-similarity in two-channel wireless LAN data and applies the aforementioned systems to simulate the corresponding network traffic traces.
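
The underlying continuous-dilation notion of self-similarity that is being extended to the vector case is, in distribution and with Hurst exponent H (standard definition, stated here for orientation):

```latex
\{X(at)\}_{t}\;\overset{d}{=}\;\{a^{H}X(t)\}_{t},\qquad a>0
```

and the vector extension additionally constrains the cross-correlations between the constituent 1-D processes, as described above.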

  1. Analysis of metal forming processes by using physical modeling and new plastic similarity condition

    International Nuclear Information System (INIS)

    Gronostajski, Z.; Hawryluk, M.

    2007-01-01

    In recent years many advances have been made in numerical methods for linear and non-linear problems. However, their success depends very much on the correctness of the problem formulation and the availability of the input data. The validity of theoretical results can be verified by experiments using real or soft materials. An essential reduction of the time and cost of the experiment can be obtained by using soft materials, which behave in a way analogous to that of real metal during deformation. The advantage of using soft materials is closely connected with their flow stress, which is 500 to 1000 times lower than that of real materials. The accuracy of physical modeling depends on the similarity conditions between the physical model and the real process. The most important similarity conditions are material similarity in the range of plastic and elastic deformation, and geometrical, frictional and thermal similarities. A new plastic similarity condition for physical modeling of metal forming processes is proposed in the paper. It is based on the mathematical description of the similarity of the flow stress curves of soft materials and real ones

  2. A Self-Organizing Map-Based Approach to Generating Reduced-Size, Statistically Similar Climate Datasets

    Science.gov (United States)

    Cabell, R.; Delle Monache, L.; Alessandrini, S.; Rodriguez, L.

    2015-12-01

    Climate-based studies require large amounts of data in order to produce accurate and reliable results. Many of these studies have used 30-plus year data sets in order to produce stable and high-quality results, and as a result, many such data sets are available, generally in the form of global reanalyses. While the analysis of these data lead to high-fidelity results, its processing can be very computationally expensive. This computational burden prevents the utilization of these data sets for certain applications, e.g., when rapid response is needed in crisis management and disaster planning scenarios resulting from release of toxic material in the atmosphere. We have developed a methodology to reduce large climate datasets to more manageable sizes while retaining statistically similar results when used to produce ensembles of possible outcomes. We do this by employing a Self-Organizing Map (SOM) algorithm to analyze general patterns of meteorological fields over a regional domain of interest to produce a small set of "typical days" with which to generate the model ensemble. The SOM algorithm takes as input a set of vectors and generates a 2D map of representative vectors deemed most similar to the input set and to each other. Input predictors are selected that are correlated with the model output, which in our case is an Atmospheric Transport and Dispersion (T&D) model that is highly dependent on surface winds and boundary layer depth. To choose a subset of "typical days," each input day is assigned to its closest SOM map node vector and then ranked by distance. Each node vector is treated as a distribution and days are sampled from them by percentile. Using a 30-node SOM, with sampling every 20th percentile, we have been able to reduce 30 years of the Climate Forecast System Reanalysis (CFSR) data for the month of October to 150 "typical days." To estimate the skill of this approach, the "Measure of Effectiveness" (MOE) metric is used to compare area and overlap
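
A self-organizing map of the kind described, mapping flattened daily predictor fields onto a small grid of representative nodes, can be sketched from scratch in numpy. The grid size, learning-rate schedule, and iteration count below are illustrative assumptions, not the settings used in the study (which used a 30-node SOM and percentile sampling).

```python
# Minimal SOM sketch (not the authors' implementation): reduce many "days",
# each flattened to a feature vector, to a small grid of representative nodes.
import numpy as np

def train_som(data, rows=5, cols=6, n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    nodes = data[rng.choice(len(data), rows * cols)].copy()   # init from samples
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((nodes - x) ** 2).sum(axis=1))        # best-matching unit
        lr = lr0 * (1 - t / n_iter)
        sigma = sigma0 * (1 - t / n_iter) + 0.5
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        nodes += lr * h[:, None] * (x - nodes)                 # pull neighbourhood
    return nodes

def assign(data, nodes):
    """Closest node and distance for each day (used to rank and sample days)."""
    d = ((data[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1), d.min(axis=1)

days = np.random.default_rng(1).normal(size=(900, 40))   # 900 days x 40 features (toy)
nodes = train_som(days)
labels, dist = assign(days, nodes)
```

Each day is then assigned to its best-matching node, and days within a node can be ranked by distance and sampled by percentile to form the reduced set of "typical days".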

  3. Fuzzy Similarity Measures Approach in Benchmarking Taxonomies of Threats against SMEs in Developing Economies

    DEFF Research Database (Denmark)

    Yeboah-Boateng, Ezer Osei

    2013-01-01

    There are various threats that militate against SMEs in developing economies. However, most SMEs fall on the conservative “TV News Effect” of most-publicized cyber-threats or incidences, with disproportionate mitigation measures. This paper endeavors to establish a taxonomy of threat agents to fill...... in the void. Various fuzzy similarity measures based on multi-attribute decision-making techniques have been employed in the evaluation. The taxonomy offers a panoramic view of cyber-threats in assessing mission-critical assets, and serves as a benchmark for initiating appropriate mitigation strategies. SMEs...... in developing economies were strategically interviewed for their expert opinions on various business and security metrics. The study established that natural disasters, which are perennial in most developing economies, are the most critical cyber-threat agent, whilst social engineering is the least critical...

  4. LDA-Based Unified Topic Modeling for Similar TV User Grouping and TV Program Recommendation.

    Science.gov (United States)

    Pyo, Shinjee; Kim, Eunhui; Kim, Munchurl

    2015-08-01

    Social TV is a social media service via TV and social networks through which TV users exchange their experiences about TV programs that they are viewing. For social TV service, two technical aspects are envisioned: grouping of similar TV users to create social TV communities and recommending TV programs based on group and personal interests for personalizing TV. In this paper, we propose a unified topic model based on grouping of similar TV users and recommending TV programs as a social TV service. The proposed unified topic model employs two latent Dirichlet allocation (LDA) models. One is a topic model of TV users, and the other is a topic model of the description words for viewed TV programs. The two LDA models are then integrated via a topic proportion parameter for TV programs, which enforces the grouping of similar TV users and associated description words for watched TV programs at the same time in a unified topic modeling framework. The unified model identifies the semantic relation between TV user groups and TV program description word groups so that more meaningful TV program recommendations can be made. The unified topic model also overcomes an item ramp-up problem such that new TV programs can be reliably recommended to TV users. Furthermore, from the topic model of TV users, TV users with similar tastes can be grouped as topics, which can then be recommended as social TV communities. To verify our proposed method of unified topic-modeling-based TV user grouping and TV program recommendation for social TV services, in our experiments, we used real TV viewing history data and electronic program guide data from a seven-month period collected by a TV poll agency. The experimental results show that the proposed unified topic model yields an average 81.4% precision for 50 topics in TV program recommendation and its performance is an average of 6.5% higher than that of the topic model of TV users only. For TV user prediction with new TV programs, the average
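
One half of the unified model, a plain LDA topic model over TV program description words, can be sketched with scikit-learn as below; the coupling to a second, user-level LDA through a shared topic-proportion parameter is the paper's contribution and is not reproduced here. The toy descriptions are invented.

```python
# Sketch: LDA over program description words (one of the two LDA models that
# the paper couples through a shared topic-proportion parameter).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

descriptions = [
    "news politics economy debate",
    "football match league goal highlights",
    "cooking recipe kitchen chef dessert",
    "politics election parliament vote",
    "tennis tournament final match",
]
X = CountVectorizer().fit_transform(descriptions)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
print(lda.transform(X).round(2))   # per-program topic proportions
```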

  5. From epidemics to information propagation: Striking differences in structurally similar adaptive network models

    Science.gov (United States)

    Trajanovski, Stojan; Guo, Dongchao; Van Mieghem, Piet

    2015-09-01

    The continuous-time adaptive susceptible-infected-susceptible (ASIS) epidemic model and the adaptive information diffusion (AID) model are two adaptive spreading processes on networks, in which a link in the network changes depending on the infectious state of its end nodes, but in opposite ways: (i) In the ASIS model a link is removed between two nodes if exactly one of the nodes is infected to suppress the epidemic, while a link is created in the AID model to speed up the information diffusion; (ii) a link is created between two susceptible nodes in the ASIS model to strengthen the healthy part of the network, while a link is broken in the AID model due to the lack of interest in informationless nodes. The ASIS and AID models may be considered as first-order models for cascades in real-world networks. While the ASIS model has been exploited in the literature, we show that the AID model is realistic by obtaining a good fit with Facebook data. Contrary to the common belief and intuition for such similar models, we show that the ASIS and AID models exhibit different but not opposite properties. Most remarkably, a unique metastable state always exists in the ASIS model, while there is an hourglass-shaped region of instability in the AID model. Moreover, the epidemic threshold is a linear function of the effective link-breaking rate in the ASIS model, while it is almost constant but noisy in the AID model.

  6. Experimental Study of Dowel Bar Alternatives Based on Similarity Model Test

    Directory of Open Access Journals (Sweden)

    Chichun Hu

    2017-01-01

    Full Text Available In this study, a small-scale accelerated loading test based on similarity theory and the Accelerated Pavement Analyzer was developed to evaluate dowel bars with different materials and cross-sections. A jointed concrete specimen consisting of one dowel was designed as a scaled model for the test, and each specimen was subjected to 864 thousand loading cycles. Deflections between jointed slabs were measured with dial indicators, and strains of the dowel bars were monitored with strain gauges. The load transfer efficiency, differential deflection, and dowel-concrete bearing stress for each case were calculated from these measurements. The test results indicated that the effect of the dowel modulus on load transfer efficiency can be characterized based on the similarity model test developed in the study. Moreover, the round steel dowel was found to have similar performance to the larger FRP dowel, and elliptical dowels can be preferentially considered in practice.

  7. Environmental niche models for riverine desert fishes and their similarity according to phylogeny and functionality

    Science.gov (United States)

    Whitney, James E.; Whittier, Joanna B.; Paukert, Craig P.

    2017-01-01

    Environmental filtering and competitive exclusion are hypotheses frequently invoked in explaining species' environmental niches (i.e., geographic distributions). A key assumption in both hypotheses is that the functional niche (i.e., species traits) governs the environmental niche, but few studies have rigorously evaluated this assumption. Furthermore, phylogeny could be associated with these hypotheses if it is predictive of functional niche similarity via phylogenetic signal or convergent evolution, or of environmental niche similarity through phylogenetic attraction or repulsion. The objectives of this study were to investigate relationships between environmental niches, functional niches, and phylogenies of fishes of the Upper (UCRB) and Lower (LCRB) Colorado River Basins of southwestern North America. We predicted that functionally similar species would have similar environmental niches (i.e., environmental filtering) and that closely related species would be functionally similar (i.e., phylogenetic signal) and possess similar environmental niches (i.e., phylogenetic attraction). Environmental niches were quantified using environmental niche modeling, and functional similarity was determined using functional trait data. Nonnatives in the UCRB provided the only support for environmental filtering, which resulted from several warmwater nonnatives having dam number as a common predictor of their distributions, whereas several cool- and coldwater nonnatives shared mean annual air temperature as an important distributional predictor. Phylogenetic signal was supported for both natives and nonnatives in both basins. Lastly, phylogenetic attraction was only supported for native fishes in the LCRB and for nonnative fishes in the UCRB. Our results indicated that functional similarity was heavily influenced by evolutionary history, but that phylogenetic relationships and functional traits may not always predict the environmental distribution of species. However, the

  8. Predictive modeling of human perception subjectivity: feasibility study of mammographic lesion similarity

    Science.gov (United States)

    Xu, Songhua; Hudson, Kathleen; Bradley, Yong; Daley, Brian J.; Frederick-Dyer, Katherine; Tourassi, Georgia

    2012-02-01

    The majority of clinical content-based image retrieval (CBIR) studies disregard human perception subjectivity, aiming to duplicate the consensus expert assessment of the visual similarity on example cases. The purpose of our study is twofold: i) discern better the extent of human perception subjectivity when assessing the visual similarity of two images with similar semantic content, and (ii) explore the feasibility of personalized predictive modeling of visual similarity. We conducted a human observer study in which five observers of various expertise were shown ninety-nine triplets of mammographic masses with similar BI-RADS descriptors and were asked to select the two masses with the highest visual relevance. Pairwise agreement ranged between poor and fair among the five observers, as assessed by the kappa statistic. The observers' self-consistency rate was remarkably low, based on repeated questions where either the orientation or the presentation order of a mass was changed. Various machine learning algorithms were explored to determine whether they can predict each observer's personalized selection using textural features. Many algorithms performed with accuracy that exceeded each observer's self-consistency rate, as determined using a cross-validation scheme. This accuracy was statistically significantly higher than would be expected by chance alone (two-tailed p-value ranged between 0.001 and 0.01 for all five personalized models). The study confirmed that human perception subjectivity should be taken into account when developing CBIR-based medical applications.

  9. PubMed-supported clinical term weighting approach for improving inter-patient similarity measure in diagnosis prediction.

    Science.gov (United States)

    Chan, Lawrence Wc; Liu, Ying; Chan, Tao; Law, Helen Kw; Wong, S C Cesar; Yeung, Andy Ph; Lo, K F; Yeung, S W; Kwok, K Y; Chan, William Yl; Lau, Thomas Yh; Shyu, Chi-Ren

    2015-06-02

    Similarity-based retrieval of Electronic Health Records (EHRs) from large clinical information systems provides physicians the evidence support in making diagnoses or referring examinations for the suspected cases. Clinical Terms in EHRs represent high-level conceptual information and the similarity measure established based on these terms reflects the chance of inter-patient disease co-occurrence. The assumption that clinical terms are equally relevant to a disease is unrealistic, reducing the prediction accuracy. Here we propose a term weighting approach supported by PubMed search engine to address this issue. We collected and studied 112 abdominal computed tomography imaging examination reports from four hospitals in Hong Kong. Clinical terms, which are the image findings related to hepatocellular carcinoma (HCC), were extracted from the reports. Through two systematic PubMed search methods, the generic and specific term weightings were established by estimating the conditional probabilities of clinical terms given HCC. Each report was characterized by an ontological feature vector and there were totally 6216 vector pairs. We optimized the modified direction cosine (mDC) with respect to a regularization constant embedded into the feature vector. Equal, generic and specific term weighting approaches were applied to measure the similarity of each pair and their performances for predicting inter-patient co-occurrence of HCC diagnoses were compared by using Receiver Operating Characteristics (ROC) analysis. The Areas under the curves (AUROCs) of similarity scores based on equal, generic and specific term weighting approaches were 0.735, 0.728 and 0.743 respectively (p PubMed. Our findings suggest that the optimized similarity measure with specific term weighting to EHRs can improve significantly the accuracy for predicting the inter-patient co-occurrence of diagnosis when compared with equal and generic term weighting approaches.

  10. Cosmological model with anisotropic dark energy and self-similarity of the second kind

    International Nuclear Information System (INIS)

    Brandt, Carlos F. Charret; Silva, Maria de Fatima A. da; Rocha, Jaime F. Villas da; Chan, Roberto

    2006-01-01

    We study the evolution of an anisotropic fluid with self-similarity of the second kind. We found a class of solutions to the Einstein field equations by assuming an equation of state where the radial pressure of the fluid is proportional to its energy density (p_r = ωρ) and that the fluid moves along time-like geodesics. The equation of state and the anisotropy with self-similarity of the second kind imply ω = -1. The energy conditions, geometrical and physical properties of the solutions are studied. We have found that for the parameter α = -1/2, it may represent a Big Rip cosmological model. (author)

  11. Evaporator modeling - A hybrid approach

    International Nuclear Information System (INIS)

    Ding Xudong; Cai Wenjian; Jia Lei; Wen Changyun

    2009-01-01

    In this paper, a hybrid modeling approach is proposed to model two-phase flow evaporators. The main procedures for hybrid modeling include: (1) Based on energy and material balances and thermodynamic principles, formulate the process fundamental governing equations; (2) Select input/output (I/O) variables responsible for the system performance which can be measured and controlled; (3) Represent those variables existing in the original equations but not measurable as simple functions of selected I/Os or constants; (4) Obtain a single equation which can correlate system inputs and outputs; and (5) Identify unknown parameters by linear or nonlinear least-squares methods. The method takes advantage of both physical and empirical modeling approaches and can accurately predict performance over a wide operating range and in real time, which can significantly reduce the computational burden and increase the prediction accuracy. The model is verified with the experimental data taken from a testing system. The testing results show that the proposed model can accurately predict the performance of the evaporator in real-time operation with a maximum error of ±8%. The developed models will have wide applications in operational optimization, performance assessment, fault detection and diagnosis
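
Step (5) above, identifying the unknown lumped parameters of the single input-output correlation by nonlinear least squares, can be sketched with scipy as follows. The correlation g() and the variable names are generic placeholders, not the paper's evaporator equation.

```python
# Sketch of parameter identification by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def g(u, a, b, c):
    """Hypothetical correlation between inputs u = (mass flow, inlet temperature)
    and the predicted output (e.g. cooling capacity); placeholder form."""
    m_dot, T_in = u
    return a * m_dot ** b * T_in + c

# Measured I/O data from a test rig (synthetic here).
rng = np.random.default_rng(0)
m_dot = rng.uniform(0.05, 0.2, 50)
T_in = rng.uniform(10.0, 25.0, 50)
y = 3.2 * m_dot ** 0.8 * T_in + 1.5 + rng.normal(0, 0.05, 50)

params, _ = curve_fit(g, (m_dot, T_in), y, p0=[1.0, 1.0, 0.0])
print(params)   # recovered (a, b, c)
```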

  12. On the scale similarity in large eddy simulation. A proposal of a new model

    International Nuclear Information System (INIS)

    Pasero, E.; Cannata, G.; Gallerano, F.

    2004-01-01

    Among the most common LES models in the literature are the eddy viscosity-type models. In these models the subgrid scale (SGS) stress tensor is related to the resolved strain rate tensor through a scalar eddy viscosity coefficient. These models are affected by three fundamental drawbacks: they are purely dissipative, i.e. they cannot account for backscatter; they assume that the principal axes of the resolved strain rate tensor and SGS stress tensor are aligned; and they assume that a local balance exists between the SGS turbulent kinetic energy production and its dissipation. Scale similarity models (SSM) were created to overcome the drawbacks of eddy viscosity-type models. The SSM models, such as that of Bardina et al. and that of Liu et al., assume that scales adjacent in wave number space present similar hydrodynamic features. This similarity makes it possible to effectively relate the unresolved scales, represented by the modified Cross tensor and the modified Reynolds tensor, to the smallest resolved scales, represented by the modified Leonard tensor or by a term obtained through multiple filtering operations at different scales. The models of Bardina et al. and Liu et al. are affected, however, by a fundamental drawback: they are not dissipative enough, i.e. they are not able to ensure a sufficient energy drain from the resolved scales of motion to the unresolved ones. In this paper it is shown that such a drawback is due to the fact that these models do not take into account the smallest unresolved scales, where most of the dissipation of turbulent SGS energy takes place. A new scale similarity LES model that is able to ensure an adequate drain of energy from the resolved scales to the unresolved ones is presented. The SGS stress tensor is aligned with the modified Leonard tensor. The coefficient of proportionality is expressed in terms of the trace of the modified Leonard tensor and in terms of the SGS kinetic energy (computed by solving its balance equation). The
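
For orientation, the classical scale-similarity (Bardina-type) closure relates the SGS stress to the modified Leonard tensor computed from the resolved field; in standard notation (the paper instead ties the proportionality coefficient to the trace of the modified Leonard tensor and the SGS kinetic energy):

```latex
\tau_{ij} \;\approx\; c_L\, L_{ij}^{m},
\qquad
L_{ij}^{m} \;=\; \overline{\bar{u}_i\,\bar{u}_j} \;-\; \bar{\bar{u}}_i\,\bar{\bar{u}}_j
```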

  13. Conservation of connectivity of model-space effective interactions under a class of similarity transformation

    International Nuclear Information System (INIS)

    Duan Changkui; Gong Yungui; Dong Huining; Reid, Michael F.

    2004-01-01

    Effective interaction operators usually act on a restricted model space and give the same energies (for the Hamiltonian) and matrix elements (for transition operators, etc.) as those of the original operators between the corresponding true eigenstates. Various types of effective operators are possible. Those well-defined effective operators have been shown to be related to each other by similarity transformation. Some of the effective operators have been shown to have connected-diagram expansions. It is shown in this paper that under a class of very general similarity transformations, the connectivity is conserved. The similarity transformation between Hermitian and non-Hermitian Rayleigh-Schroedinger perturbative effective operators is one such transformation, and hence the connectivity of one can be deduced from that of the other

  14. Conservation of connectivity of model-space effective interactions under a class of similarity transformation.

    Science.gov (United States)

    Duan, Chang-Kui; Gong, Yungui; Dong, Hui-Ning; Reid, Michael F

    2004-09-15

    Effective interaction operators usually act on a restricted model space and give the same energies (for the Hamiltonian) and matrix elements (for transition operators, etc.) as those of the original operators between the corresponding true eigenstates. Various types of effective operators are possible. Those well-defined effective operators have been shown to be related to each other by similarity transformation. Some of the effective operators have been shown to have connected-diagram expansions. It is shown in this paper that under a class of very general similarity transformations, the connectivity is conserved. The similarity transformation between Hermitian and non-Hermitian Rayleigh-Schrödinger perturbative effective operators is one such transformation, and hence the connectivity of one can be deduced from that of the other.

  15. Model-free aftershock forecasts constructed from similar sequences in the past

    Science.gov (United States)

    van der Elst, N.; Page, M. T.

    2017-12-01

    The basic premise behind aftershock forecasting is that sequences in the future will be similar to those in the past. Forecast models typically use empirically tuned parametric distributions to approximate past sequences, and project those distributions into the future to make a forecast. While parametric models do a good job of describing average outcomes, they are not explicitly designed to capture the full range of variability between sequences, and can suffer from over-tuning of the parameters. In particular, parametric forecasts may produce a high rate of "surprises" - sequences that land outside the forecast range. Here we present a non-parametric forecast method that cuts out the parametric "middleman" between training data and forecast. The method is based on finding past sequences that are similar to the target sequence, and evaluating their outcomes. We quantify similarity as the Poisson probability that the observed event count in a past sequence reflects the same underlying intensity as the observed event count in the target sequence. Event counts are defined in terms of differential magnitude relative to the mainshock. The forecast is then constructed from the distribution of past sequences outcomes, weighted by their similarity. We compare the similarity forecast with the Reasenberg and Jones (RJ95) method, for a set of 2807 global aftershock sequences of M≥6 mainshocks. We implement a sequence-specific RJ95 forecast using a global average prior and Bayesian updating, but do not propagate epistemic uncertainty. The RJ95 forecast is somewhat more precise than the similarity forecast: 90% of observed sequences fall within a factor of two of the median RJ95 forecast value, whereas the fraction is 85% for the similarity forecast. However, the surprise rate is much higher for the RJ95 forecast; 10% of observed sequences fall in the upper 2.5% of the (Poissonian) forecast range. The surprise rate is less than 3% for the similarity forecast. The similarity
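
The similarity weighting described above can be sketched as follows: each past sequence is weighted by the Poisson probability of the target's early event count under an intensity equal to the past sequence's count, and the forecast is read off as weighted quantiles of the past outcomes. Count normalization and magnitude binning are simplified here, and the toy numbers are invented.

```python
# Sketch of a similarity-weighted, model-free aftershock forecast.
import numpy as np
from scipy.stats import poisson

def similarity_weights(target_count, past_counts):
    """Weight of each past sequence: Poisson pmf of the target count, with the
    past sequence's count taken as the underlying intensity (simplified)."""
    lam = np.maximum(np.asarray(past_counts, dtype=float), 1e-3)
    w = poisson.pmf(target_count, lam)
    return w / w.sum()

def forecast_quantiles(past_outcomes, weights, qs=(0.05, 0.5, 0.95)):
    """Weighted quantiles of past-sequence outcomes as the forecast range."""
    outcomes = np.asarray(past_outcomes, dtype=float)
    order = np.argsort(outcomes)
    cdf = np.cumsum(np.asarray(weights)[order])
    return [outcomes[order][np.searchsorted(cdf, q)] for q in qs]

past_counts = [3, 10, 7, 1, 25, 6]       # early counts of past sequences (toy)
past_outcomes = [8, 40, 20, 2, 90, 15]   # their eventual totals (toy)
w = similarity_weights(target_count=5, past_counts=past_counts)
print(forecast_quantiles(past_outcomes, w))
```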

  16. Hierarchical Model for the Similarity Measurement of a Complex Holed-Region Entity Scene

    Directory of Open Access Journals (Sweden)

    Zhanlong Chen

    2017-11-01

    Full Text Available Complex multi-holed-region entity scenes (i.e., sets of random regions with holes) are common in spatial database systems, spatial query languages, and Geographic Information Systems (GIS). A multi-holed-region (a region with an arbitrary number of holes) is an abstraction of the real world that primarily represents geographic objects that have more than one interior boundary, such as areas that contain several lakes or lakes that contain islands. When the similarity of two complex holed-region entity scenes is measured, the number of regions in the scenes and the number of holes in the regions usually differ between the two scenes, which complicates the matching relationships of holed-regions and holes. The aim of this research is to develop several holed-region similarity metrics and propose a hierarchical model to comprehensively measure the similarity between two complex holed-region entity scenes. The procedure first divides a complex entity scene into three layers: a complex scene, a micro-spatial-scene, and a simple entity (hole). The relationships between adjacent layers are treated as sets of relationships, and each level of similarity measurement is nested with the adjacent one. Next, entity matching is performed from top to bottom, while the similarity results are calculated from local to global. In addition, we utilize position graphs to describe the distribution of the holed-regions and subsequently describe the directions between the holes using a feature matrix. A case study that uses the Great Lakes in North America in 1986 and 2015 as experimental data illustrates the entire similarity measurement process between two complex holed-region entity scenes. The experimental results show that the hierarchical model accounts for the relationships of the different layers in the entire complex holed-region entity scene. The model can effectively calculate the similarity of complex holed-region entity scenes, even if the

  17. A self-similar model for conduction in the plasma erosion opening switch

    International Nuclear Information System (INIS)

    Mosher, D.; Grossmann, J.M.; Ottinger, P.F.; Colombant, D.G.

    1987-01-01

    The conduction phase of the plasma erosion opening switch (PEOS) is characterized by combining a 1-D fluid model for plasma hydrodynamics, Maxwell's equations, and a 2-D electron-orbit analysis. A self-similar approximation for the plasma and field variables permits analytic expressions for their space and time variations to be derived. It is shown that a combination of axial MHD compression and magnetic insulation of high-energy electrons emitted from the switch cathode can control the character of switch conduction. The analysis highlights the need to include additional phenomena for accurate fluid modeling of PEOS conduction

  18. A similarity score-based two-phase heuristic approach to solve the dynamic cellular facility layout for manufacturing systems

    Science.gov (United States)

    Kumar, Ravi; Singh, Surya Prakash

    2017-11-01

    The dynamic cellular facility layout problem (DCFLP) is a well-known NP-hard problem. It has been estimated that the efficient design of the DCFLP reduces the manufacturing cost of products by maintaining the minimum material flow among all machines in all cells, as material flow contributes around 10-30% of the total product cost. However, because the problem is NP-hard, solving the DCFLP optimally in reasonable time is very difficult. Therefore, this article proposes a novel similarity score-based two-phase heuristic approach to solve the DCFLP, considering multiple products to be manufactured in the layout over multiple time periods. In the first phase of the proposed heuristic, machine-cell clusters are created based on similarity scores between machines. These are provided as input to the second phase, which minimizes inter/intracell material handling costs and rearrangement costs over the entire planning period. The solution methodology of the proposed approach is demonstrated. To show the efficiency of the two-phase heuristic approach, 21 instances are generated and solved using the optimization software package LINGO. The results show that the proposed approach can optimally solve the DCFLP in reasonable time.
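
    The first phase of such a heuristic can be illustrated with a minimal sketch that is not the authors' algorithm: a hypothetical machine-product incidence matrix yields Jaccard-style similarity scores between machines, which are then grouped into machine cells by hierarchical clustering.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical machine-product incidence matrix: rows = machines,
# columns = products; 1 means the product visits that machine.
incidence = np.array([[1, 1, 0, 0, 1],
                      [1, 1, 0, 0, 0],
                      [0, 0, 1, 1, 0],
                      [0, 1, 1, 1, 0],
                      [1, 0, 0, 0, 1]])

m = incidence.shape[0]
similarity = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        inter = np.logical_and(incidence[i], incidence[j]).sum()
        union = np.logical_or(incidence[i], incidence[j]).sum()
        similarity[i, j] = inter / union if union else 0.0   # Jaccard-style score

# Convert similarity to a distance matrix and cluster machines into cells.
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)
Z = linkage(squareform(distance, checks=False), method='average')
cells = fcluster(Z, t=2, criterion='maxclust')   # ask for two machine cells
print(dict(enumerate(cells, start=1)))           # machine index -> cell label
```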

  19. Traditional and robust vector selection methods for use with similarity based models

    International Nuclear Information System (INIS)

    Hines, J. W.; Garvey, D. R.

    2006-01-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
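
    A minimal sketch of the two ingredients named here, with all data and parameter choices assumed for illustration: a simple form of min-max vector selection that keeps the training vectors holding each variable's extreme values, and an AAKR prediction computed as a Gaussian-kernel weighted average of the selected prototypes.

```python
import numpy as np

def min_max_select(X):
    """Keep the training vectors holding the min or max of any variable
    (one simple form of min-max vector selection)."""
    idx = set()
    for j in range(X.shape[1]):
        idx.add(int(np.argmin(X[:, j])))
        idx.add(int(np.argmax(X[:, j])))
    return X[sorted(idx)]

def aakr_predict(prototypes, query, bandwidth=1.0):
    """Auto-associative kernel regression: return a 'corrected' version of
    the query as a Gaussian-kernel weighted average of prototype vectors."""
    d2 = np.sum((prototypes - query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    w /= w.sum()
    return w @ prototypes

rng = np.random.default_rng(1)
X_train = rng.normal(size=(1000, 4))                  # hypothetical sensor history
prototypes = min_max_select(X_train)                  # reduced memory vector set
query = X_train[0] + rng.normal(scale=0.1, size=4)    # noisy query vector
print(aakr_predict(prototypes, query))
```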

  20. State impulsive control strategies for a two-languages competitive model with bilingualism and interlinguistic similarity

    Science.gov (United States)

    Nie, Lin-Fei; Teng, Zhi-Dong; Nieto, Juan J.; Jung, Il Hyo

    2015-07-01

    To help preserve endangered languages, we propose in this paper a novel two-languages competitive model with bilingualism and interlinguistic similarity, in which state-dependent impulsive control strategies are introduced. The novel control model includes two control threshold values, which distinguishes it from previous state-dependent impulsive differential equations. Using qualitative analysis methods, we show that the control model exhibits two stable positive order-1 periodic solutions under some general conditions. Moreover, numerical simulations clearly illustrate the main theoretical results and the feasibility of the state-dependent impulsive control strategies. The numerical simulations also show that the state-dependent impulsive control strategy can be applied to other general two-languages competitive models and obtain the desired result. The results indicate that the fractions of the two competing languages can be kept within a reasonable level under almost any circumstances. A theoretical basis for finding new control measures to protect endangered languages is offered.

  1. Collaborative Filtering Recommendation Based on Trust Model with Fused Similar Factor

    Directory of Open Access Journals (Sweden)

    Ye Li

    2017-01-01

    Full Text Available Recommender systems are beneficial to e-commerce sites, providing customers with product information and recommendations; such systems are currently widely used in many fields. In an era of information explosion, the key challenge for a recommender system is to obtain valid information from the tremendous amount of available information and produce high-quality recommendations. However, when facing large amounts of information, the traditional collaborative filtering algorithm usually suffers from a high degree of sparseness, which ultimately leads to low-accuracy recommendations. To tackle this issue, we propose a novel algorithm named Collaborative Filtering Recommendation Based on Trust Model with Fused Similar Factor, which is based on the trust model combined with user similarity. The novel algorithm takes into account the degree of interest overlap between two users and achieves performance superior to recommendation based on the trust model alone in terms of Precision, Recall, Diversity and Coverage. Additionally, the proposed model can effectively improve the efficiency of the collaborative filtering algorithm and achieve high performance.
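
    The fusion idea can be sketched as follows; this is not the paper's exact algorithm, and the rating matrix, trust matrix, and fusion weight alpha are illustrative assumptions. Each neighbour's contribution is weighted by a blend of rating similarity on co-rated items and a trust score.

```python
import numpy as np

def predict_rating(ratings, trust, user, item, alpha=0.5):
    """Predict ratings[user, item] from other users, weighting each
    neighbour by a fusion of rating similarity and trust.

    ratings -- user x item matrix, 0 marks a missing rating
    trust   -- user x user matrix of trust scores in [0, 1]
    alpha   -- fusion weight between similarity and trust (assumed form)
    """
    target = ratings[user]
    num, den = 0.0, 0.0
    for v in range(ratings.shape[0]):
        if v == user or ratings[v, item] == 0:
            continue
        co = (target > 0) & (ratings[v] > 0)            # co-rated items
        if co.sum() == 0:
            sim = 0.0
        else:
            a, b = target[co], ratings[v][co]
            sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        w = alpha * sim + (1 - alpha) * trust[user, v]  # fused weight
        num += w * ratings[v, item]
        den += abs(w)
    return num / den if den else 0.0

# Toy data: 4 users x 4 items, plus a symmetric trust matrix.
ratings = np.array([[5, 3, 0, 1],
                    [4, 0, 0, 1],
                    [1, 1, 0, 5],
                    [0, 1, 5, 4]], dtype=float)
trust = np.array([[1.0, 0.8, 0.2, 0.1],
                  [0.8, 1.0, 0.3, 0.2],
                  [0.2, 0.3, 1.0, 0.9],
                  [0.1, 0.2, 0.9, 1.0]])
print(predict_rating(ratings, trust, user=0, item=2))
```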

  2. Interactive exploration of the vulnerability of the human infrastructure: an approach using simultaneous display of similar locations

    Science.gov (United States)

    Ceré, Raphaël; Kaiser, Christian

    2015-04-01

    models (DEM) or individual building vector layers. Morphological properties can be calculated at different scales using different moving-window sizes. Multi-scale measures such as fractal dimension or lacunarity can be integrated into the analysis. Other properties such as various densities and ratios are also easy to calculate and include. Based on a rather extensive set of properties or features, a feature selection or extraction method such as Principal Component Analysis can be used to obtain a subset of relevant properties. In a second step, an unsupervised classification algorithm such as Self-Organizing Maps can be used to group similar locations together, and criteria such as the intra-group distance and geographic distribution can be used to select relevant locations to be displayed in an interactive data exploration interface along with a given main location. A case study for a part of Switzerland illustrates the presented approach within a working interactive tool, showing its feasibility and allowing for an investigation of the usefulness of our method.
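
    A rough sketch of the feature-extraction and grouping steps follows; the features are synthetic, and KMeans is used here only as a simple stand-in for the Self-Organizing Map mentioned in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical per-location morphological features (slope, building density,
# fractal dimension, lacunarity, ...), one row per location.
features = rng.normal(size=(500, 12))

# Feature extraction: keep the leading principal components.
X = StandardScaler().fit_transform(features)
X_reduced = PCA(n_components=4).fit_transform(X)

# Unsupervised grouping of similar locations (KMeans as a stand-in for a SOM).
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X_reduced)

def similar_locations(main_idx, k=5):
    """Return the k locations most similar to a chosen main location,
    restricted to the same cluster."""
    same = np.flatnonzero(labels == labels[main_idx])
    d = np.linalg.norm(X_reduced[same] - X_reduced[main_idx], axis=1)
    return same[np.argsort(d)[1:k + 1]]     # skip the location itself

print(similar_locations(main_idx=0))
```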

  3. Scaling and interaction of self-similar modes in models of high Reynolds number wall turbulence.

    Science.gov (United States)

    Sharma, A S; Moarref, R; McKeon, B J

    2017-03-13

    Previous work has established the usefulness of the resolvent operator that maps the terms nonlinear in the turbulent fluctuations to the fluctuations themselves. Further work has described the self-similarity of the resolvent arising from that of the mean velocity profile. The orthogonal modes provided by the resolvent analysis describe the wall-normal coherence of the motions and inherit that self-similarity. In this contribution, we present the implications of this similarity for the nonlinear interaction between modes with different scales and wall-normal locations. By considering the nonlinear interactions between modes, it is shown that much of the turbulence scaling behaviour in the logarithmic region can be determined from a single arbitrarily chosen reference plane. Thus, the geometric scaling of the modes is impressed upon the nonlinear interaction between modes. Implications of these observations on the self-sustaining mechanisms of wall turbulence, modelling and simulation are outlined. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  4. Self-similar measures in multi-sector endogenous growth models

    International Nuclear Information System (INIS)

    La Torre, Davide; Marsiglio, Simone; Mendivil, Franklin; Privileggi, Fabio

    2015-01-01

    We analyze two types of stochastic discrete-time multi-sector endogenous growth models, namely a basic Uzawa–Lucas (1965, 1988) model and an extended three-sector version as in La Torre and Marsiglio (2010). Since, in the case of sustained growth, the optimal dynamics of the state variables are not stationary, we focus on the dynamics of the capital ratio variables, and we show that, through appropriate log-transformations, they can be converted into affine iterated function systems converging to an invariant distribution supported on some (possibly fractal) compact set. This proves that the steady state of endogenous growth models, i.e., the stochastic balanced growth path equilibrium, might also have a fractal nature. We also provide some sufficient conditions under which the associated self-similar measures turn out to be either singular or absolutely continuous (for the three-sector model we only consider the singular case).
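
    The convergence of an affine iterated function system to a (possibly fractal) invariant distribution can be illustrated with a generic chaos-game simulation; the maps and probabilities below are made up and are not those of the growth model.

```python
import numpy as np

# A generic affine IFS on the line: x -> a_i * x + b_i, with map i chosen
# with probability p_i.  The coefficients are illustrative only.
a = np.array([0.45, 0.45])
b = np.array([0.0, 0.55])
p = np.array([0.5, 0.5])

rng = np.random.default_rng(7)
n, burn = 200_000, 1_000
x = 0.0
samples = np.empty(n)
for t in range(n + burn):
    i = rng.choice(len(p), p=p)
    x = a[i] * x + b[i]            # apply one randomly chosen affine map
    if t >= burn:
        samples[t - burn] = x

# The histogram approximates the invariant (self-similar) measure; with
# these coefficients the support is a Cantor-like subset of [0, 1].
hist, edges = np.histogram(samples, bins=50, range=(0.0, 1.0), density=True)
print(np.round(hist[:10], 2))
```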

  5. Self-similar formation of the Kolmogorov spectrum in the Leith model of turbulence

    International Nuclear Information System (INIS)

    Nazarenko, S V; Grebenev, V N

    2017-01-01

    The last stage of evolution toward the stationary Kolmogorov spectrum of hydrodynamic turbulence is studied using the Leith model [1]. This evolution is shown to manifest itself as a reflection wave in wavenumber space propagating from the largest toward the smallest wavenumbers, and is described by a self-similar solution of a new (third) kind. This stage follows the previously studied stage of initial explosive propagation of the spectral front from the smallest to the largest wavenumbers, reaching arbitrarily large wavenumbers in a finite time, which was described by a self-similar solution of the second kind [2–4]. Nonstationary solutions corresponding to 'warm cascades', characterised by a thermalised spectrum at large wavenumbers, are also obtained. (paper)

  6. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating..., and allows direct incorporation of high-level and qualitative plant knowledge into the model. These advantages have proven to be very appealing for industrial applications, and the practical, intuitively appealing nature of the framework is demonstrated in chapters describing applications of local methods... to problems in the process industries, biomedical applications and autonomous systems. The successful application of the ideas to demanding problems is already encouraging, but creative development of the basic framework is needed to better allow the integration of human knowledge with automated learning...

  7. A solvable self-similar model of the sausage instability in a resistive Z pinch

    International Nuclear Information System (INIS)

    Lampe, M.

    1991-01-01

    A solvable model is developed for the linearized sausage mode within the context of resistive magnetohydrodynamics. The model is based on the assumption that the fluid motion of the plasma is self-similar, as well as several assumptions pertinent to the limit of wavelength long compared to the pinch radius. The perturbations to the magnetic field are not assumed to be self-similar, but rather are calculated. Effects arising from time dependences of the z-independent perturbed state, e.g., current rising as t^α, Ohmic heating, and time variation of the pinch radius, are included in the analysis. The formalism appears to provide a good representation of ''global'' modes that involve coherent sausage distortion of the entire cross section of the pinch, but excludes modes that are localized radially, and higher radial eigenmodes. For this and other reasons, it is expected that the model underestimates the maximum instability growth rates, but is reasonable for global sausage modes. The net effect of resistivity and time variation of the unperturbed state is to decrease the growth rate if α ≲ 1, but never by more than a factor of about 2. The effect is to increase the growth rate if α ≳ 1.

  8. A Deep Similarity Metric Learning Model for Matching Text Chunks to Spatial Entities

    Science.gov (United States)

    Ma, K.; Wu, L.; Tao, L.; Li, W.; Xie, Z.

    2017-12-01

    The matching of spatial entities with related text is a long-standing research topic that has received considerable attention over the years. This task aims at enriching the content of spatial entities and attaching spatial location information to text chunks. In the data fusion field, matching spatial entities with the corresponding describing text chunks is of broad significance. However, most traditional matching methods rely fully on manually designed, task-specific linguistic features. This work proposes a Deep Similarity Metric Learning Model (DSMLM) based on a Siamese neural network to learn a similarity metric directly from the textual attributes of the spatial entity and the text chunk. The low-dimensional feature representations of the spatial entity and the text chunk can be learned separately. By employing the cosine distance to measure the matching degree between the vectors, the model makes matching pair vectors as close as possible and, through supervised learning, pushes mismatching pairs as far apart as possible. In addition, extensive experiments and analysis on geological survey data sets show that our DSMLM model can effectively capture the matching characteristics between the text chunk and the spatial entity, and achieve state-of-the-art performance.
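
    A minimal sketch of the general Siamese/cosine-metric idea, not the DSMLM architecture itself; the encoder, feature dimensions, loss margin, and training data below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Tiny shared encoder mapping a bag-of-features vector to an embedding."""
    def __init__(self, in_dim=300, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, emb_dim))

    def forward(self, x):
        return self.net(x)

encoder = Encoder()
# CosineEmbeddingLoss pulls matching pairs (label +1) together and pushes
# mismatching pairs (label -1) apart in cosine distance.
criterion = nn.CosineEmbeddingLoss(margin=0.2)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Hypothetical mini-batch: text-chunk features, entity features, pair labels.
text_feats = torch.randn(32, 300)
entity_feats = torch.randn(32, 300)
labels = (torch.randint(0, 2, (32,)) * 2 - 1).float()   # values in {-1, +1}

for step in range(100):
    z_text = encoder(text_feats)        # both branches share the same weights
    z_entity = encoder(entity_feats)
    loss = criterion(z_text, z_entity, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At inference, rank candidate entities by cosine similarity to a text chunk.
scores = F.cosine_similarity(encoder(text_feats[:1]), encoder(entity_feats))
print(scores.argmax().item())
```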

  9. Initial virtual flight test for a dynamically similar aircraft model with control augmentation system

    Directory of Open Access Journals (Sweden)

    Linliang Guo

    2017-04-01

    Full Text Available To satisfy the validation requirements of flight control laws for advanced aircraft, wind-tunnel-based virtual flight testing has been implemented in a low-speed wind tunnel. A 3-degree-of-freedom gimbal, ventrally installed in the model, was used in conjunction with an actively controlled, dynamically similar model of the aircraft, which was equipped with an inertial measurement unit, an attitude and heading reference system, an embedded computer and servo-actuators. The model, which could be rotated freely around its center of gravity by the aerodynamic moments, together with the flow field, the operator and the real-time control system made up the closed-loop testing circuit. The model is statically unstable in the longitudinal direction, yet it can fly stably in the wind tunnel with the control augmentation function of the flight control laws. The experimental results indicate that the model responds well to the operator's instructions. The response of the model in the tests shows reasonable agreement with the simulation results. The difference in the angle-of-attack response is less than 0.5°. The effects of the stability augmentation and attitude control laws were validated in the tests, and the feasibility of the virtual flight test technique as a preliminary evaluation tool for advanced flight vehicle configuration research was also verified.

  10. HEDR modeling approach: Revision 1

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1994-05-01

    This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have been refined as understanding of existing data, the scope of pathways, and the magnitudes of dose estimates were evaluated through scoping studies

  11. Vertex labeling and routing in self-similar outerplanar unclustered graphs modeling complex networks

    International Nuclear Information System (INIS)

    Comellas, Francesc; Miralles, Alicia

    2009-01-01

    This paper introduces a labeling and optimal routing algorithm for a family of modular, self-similar, small-world graphs with clustering zero. Many properties of this family are comparable to those of networks associated with technological and biological systems with low clustering, such as the power grid, some electronic circuits and protein networks. For these systems, the existence of models with an efficient routing protocol is of interest to design practical communication algorithms in relation to dynamical processes (including synchronization) and also to understand the underlying mechanisms that have shaped their particular structure.

  12. Self-similarities of periodic structures for a discrete model of a two-gene system

    International Nuclear Information System (INIS)

    Souza, S.L.T. de; Lima, A.A.; Caldas, I.L.; Medrano-T, R.O.; Guimarães-Filho, Z.O.

    2012-01-01

    We report self-similar properties of periodic structures remarkably organized in the two-parameter space of a two-gene system described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system by using Lyapunov exponents and isoperiodic diagrams, identifying periodic windows known as Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both types of periodic windows. We also identify Fibonacci-type series and the golden ratio for Arnold tongues, and period multiple-of-three windows for shrimps. -- Highlights: ► The existence of noticeable periodic windows has been reported recently for several nonlinear systems. ► The periodic window distributions appear highly organized in two-parameter space. ► We characterize self-similar properties of Arnold tongues and shrimps for a two-gene model. ► We determine the period of the Arnold tongues recognizing a Fibonacci-type sequence. ► We explore self-similar features of the shrimps identifying multiple period-three structures.
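
    Since the abstract does not reproduce the two-gene map equations, the sketch below computes the largest Lyapunov exponent for a generic two-dimensional map (the Hénon map) by tangent-vector renormalization; scanning such an estimate over two parameters is what produces Lyapunov/isoperiodic diagrams of the kind described.

```python
import numpy as np

def largest_lyapunov(f, jac, x0, n_iter=20_000, n_transient=1_000):
    """Largest Lyapunov exponent of a 2-D map via tangent-vector growth."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_transient):        # discard the transient
        x = f(x)
    v = np.array([1.0, 0.0])
    s = 0.0
    for _ in range(n_iter):
        v = jac(x) @ v                  # propagate the tangent vector
        norm = np.linalg.norm(v)
        s += np.log(norm)
        v /= norm                       # renormalize to avoid overflow
        x = f(x)
    return s / n_iter

# Generic example map (Henon); the two-gene map of the paper would be
# substituted here together with its Jacobian.
a_par, b_par = 1.4, 0.3
henon = lambda x: np.array([1.0 - a_par * x[0] ** 2 + x[1], b_par * x[0]])
henon_jac = lambda x: np.array([[-2.0 * a_par * x[0], 1.0], [b_par, 0.0]])

print(largest_lyapunov(henon, henon_jac, x0=[0.1, 0.1]))   # ~0.42 for Henon
```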

  13. Self-similarities of periodic structures for a discrete model of a two-gene system

    Energy Technology Data Exchange (ETDEWEB)

    Souza, S.L.T. de, E-mail: thomaz@ufsj.edu.br [Departamento de Física e Matemática, Universidade Federal de São João del-Rei, Ouro Branco, MG (Brazil); Lima, A.A. [Escola de Farmácia, Universidade Federal de Ouro Preto, Ouro Preto, MG (Brazil); Caldas, I.L. [Instituto de Física, Universidade de São Paulo, São Paulo, SP (Brazil); Medrano-T, R.O. [Departamento de Ciências Exatas e da Terra, Universidade Federal de São Paulo, Diadema, SP (Brazil); Guimarães-Filho, Z.O. [Aix-Marseille Univ., CNRS PIIM UMR6633, International Institute for Fusion Science, Marseille (France)

    2012-03-12

    We report self-similar properties of periodic structures remarkably organized in the two-parameter space of a two-gene system described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system by using Lyapunov exponents and isoperiodic diagrams, identifying periodic windows known as Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both types of periodic windows. We also identify Fibonacci-type series and the golden ratio for Arnold tongues, and period multiple-of-three windows for shrimps. -- Highlights: ► The existence of noticeable periodic windows has been reported recently for several nonlinear systems. ► The periodic window distributions appear highly organized in two-parameter space. ► We characterize self-similar properties of Arnold tongues and shrimps for a two-gene model. ► We determine the period of the Arnold tongues recognizing a Fibonacci-type sequence. ► We explore self-similar features of the shrimps identifying multiple period-three structures.

  14. Biomarker- and similarity coefficient-based approaches to bacterial mixture characterization using matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS).

    Science.gov (United States)

    Zhang, Lin; Smart, Sonja; Sandrin, Todd R

    2015-11-05

    MALDI-TOF MS profiling has been shown to be a rapid and reliable method to characterize pure cultures of bacteria. Currently, there is keen interest in using this technique to identify bacteria in mixtures. Promising results have been reported with two- or three-isolate model systems using biomarker-based approaches. In this work, we applied MALDI-TOF MS-based methods to a more complex model mixture containing six bacteria. We employed: 1) a biomarker-based approach that has previously been shown to be useful in identification of individual bacteria in pure cultures and simple mixtures and 2) a similarity coefficient-based approach that is routinely and nearly exclusively applied to identification of individual bacteria in pure cultures. Both strategies were developed and evaluated using blind-coded mixtures. With regard to the biomarker-based approach, results showed that most peaks in mixture spectra could be assigned to those found in spectra of each component bacterium; however, peaks shared by two isolates as well as peaks that could not be assigned to any individual component isolate were observed. For two-isolate blind-coded samples, bacteria were correctly identified using both similarity coefficient- and biomarker-based strategies, while for blind-coded samples containing more than two isolates, bacteria were more effectively identified using a biomarker-based strategy.
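
    A similarity coefficient-based comparison can be sketched as binning peak lists into fixed m/z bins and scoring them with a cosine-type coefficient; the bin width, mass range, and peak lists below are hypothetical, and production systems use more elaborate coefficients.

```python
import numpy as np

def bin_spectrum(peaks, mz_min=2000, mz_max=20000, bin_width=10):
    """Convert a list of (m/z, intensity) peaks into a fixed-length vector."""
    n_bins = int((mz_max - mz_min) / bin_width)
    vec = np.zeros(n_bins)
    for mz, intensity in peaks:
        b = int((mz - mz_min) / bin_width)
        if 0 <= b < n_bins:
            vec[b] += intensity
    return vec

def similarity_coefficient(peaks_a, peaks_b):
    """Cosine-type similarity coefficient between two binned spectra."""
    a, b = bin_spectrum(peaks_a), bin_spectrum(peaks_b)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Hypothetical peak lists: a mixture spectrum and a pure-culture reference.
mixture = [(4365, 80), (5380, 45), (6255, 100), (9740, 30)]
reference = [(4365, 90), (6255, 95), (7270, 20)]
print(round(similarity_coefficient(mixture, reference), 3))
```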

  15. Individual differences in emotion processing: how similar are diffusion model parameters across tasks?

    Science.gov (United States)

    Mueller, Christina J; White, Corey N; Kuchinke, Lars

    2017-11-27

    The goal of this study was to replicate findings of diffusion model parameters capturing emotion effects in a lexical decision task and to investigate whether these findings extend to other tasks of implicit emotion processing. Additionally, we were interested in the stability of diffusion model parameters across emotional stimuli and tasks for individual subjects. Responses to words in a lexical decision task were compared with responses to faces in a gender categorization task for stimuli of the emotion categories happy, neutral and fear. Main effects of emotion as well as the stability of emerging response-style patterns, as evident in diffusion model parameters across these tasks, were analyzed. Based on earlier findings, drift rates were assumed to be more similar in response to stimuli of the same emotion category than to stimuli of a different emotion category. Results showed that the emotion effects of the tasks differed, with a processing advantage for happy followed by neutral and fear-related words in the lexical decision task and a processing advantage for neutral followed by happy and fearful faces in the gender categorization task. Both emotion effects were captured in the estimated drift rate parameters and, in the case of the lexical decision task, also in the non-decision time parameters. A principal component analysis showed that, contrary to our hypothesis, drift rates were more similar within a specific task context than within a specific emotion category. Individual response patterns of subjects across tasks were evident in significant correlations of diffusion model parameters including response styles, non-decision times and information accumulation.

  16. Non-frontal Model Based Approach to Forensic Face Recognition

    NARCIS (Netherlands)

    Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2012-01-01

    In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having pose similar to the surveillance

  17. Analysis and Modeling of Time-Correlated Characteristics of Rainfall-Runoff Similarity in the Upstream Red River Basin

    Directory of Open Access Journals (Sweden)

    Xiuli Sang

    2012-01-01

    Full Text Available We constructed a similarity model (based on the Euclidean distance between rainfall and runoff) to study the time-correlated characteristics of rainfall-runoff similar patterns in the upstream Red River Basin and presented a detailed evaluation of the time correlation of rainfall-runoff similarity. The rainfall-runoff similarity was used to determine the optimum similarity. The results showed that the time-correlated model is capable of predicting rainfall-runoff similarity in the upstream Red River Basin in a satisfactory way. Both the noisy time series and the series denoised by thresholding the wavelet coefficients were applied to verify the accuracy of the model. The corresponding optimum similar sets, obtained as the solution conditions of the equations, showed an interesting and stable trend. On the whole, the annual mean similarity presented a gradually rising trend, providing a quantitative estimate of the combined influence of climate change and human activities on rainfall-runoff similarity.
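
    A minimal sketch of a Euclidean-distance similarity between rainfall and runoff series follows; the data are synthetic, and the 1/(1 + distance) conversion is an assumption rather than the authors' exact formulation.

```python
import numpy as np

def rr_similarity(rainfall, runoff):
    """Similarity between a rainfall series and a runoff series, defined here
    as 1 / (1 + Euclidean distance) after z-score normalisation."""
    r = (rainfall - rainfall.mean()) / rainfall.std()
    q = (runoff - runoff.mean()) / runoff.std()
    return 1.0 / (1.0 + np.linalg.norm(r - q))

# Synthetic 30-year annual series standing in for observed basin data.
rng = np.random.default_rng(3)
years = 30
rainfall = rng.gamma(shape=4.0, scale=300.0, size=years)       # mm per year
runoff = 0.6 * rainfall + rng.normal(scale=100.0, size=years)  # correlated runoff

print(round(rr_similarity(rainfall, runoff), 3))
```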

  18. An analytic solution of a model of language competition with bilingualism and interlinguistic similarity

    Science.gov (United States)

    Otero-Espinar, M. V.; Seoane, L. F.; Nieto, J. J.; Mira, J.

    2013-12-01

    An in-depth analytic study of a model of language dynamics is presented: a model which tackles the problem of the coexistence of two languages within a closed community of speakers, taking into account bilingualism and incorporating a parameter to measure the distance between languages. In previous numerical simulations, the model showed that coexistence might lead to the survival of both languages, with monolingual speakers alongside a bilingual community, or to the extinction of the weaker tongue, depending on the parameters. In this paper, that study is completed with thorough analytical calculations that settle the results in a robust way, and the previous results are refined with some modifications. From the present analysis it is possible to determine almost completely the number and nature of the equilibrium points of the model, which depend on its parameters, as well as to build a phase space based on them. We also obtain conclusions on the way the languages evolve with time. Our rigorous considerations also suggest ways to further improve the model and facilitate the comparison of its consequences with those from other approaches or with real data.
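
    The abstract does not give the equations, so the sketch below integrates an illustrative bilingual competition model of the Abrams-Strogatz family with an interlinguistic-similarity parameter k; the functional form is an assumption chosen only to show how such a system is simulated, not the exact model of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: volatility a, status s of language X, similarity k.
a, s, k = 1.31, 0.55, 0.4

def rhs(t, state):
    x, y = state                 # monolingual fractions; bilinguals b = 1 - x - y
    b = 1.0 - x - y
    # Attractiveness of each language grows with the fraction able to speak it;
    # a fraction k of converts becomes bilingual instead of monolingual.
    to_x = s * (x + b) ** a
    to_y = (1.0 - s) * (y + b) ** a
    dx = (1.0 - k) * to_x * y + to_x * b - to_y * x
    dy = (1.0 - k) * to_y * x + to_y * b - to_x * y
    return [dx, dy]

sol = solve_ivp(rhs, t_span=(0.0, 200.0), y0=[0.5, 0.45], dense_output=True)
x_end, y_end = sol.y[:, -1]
print(f"x={x_end:.3f}, y={y_end:.3f}, bilinguals={1 - x_end - y_end:.3f}")
```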

  19. Geomfinder: a multi-feature identifier of similar three-dimensional protein patterns: a ligand-independent approach.

    Science.gov (United States)

    Núñez-Vivanco, Gabriel; Valdés-Jiménez, Alejandro; Besoaín, Felipe; Reyes-Parada, Miguel

    2016-01-01

    Since the structure of proteins is more conserved than their sequence, the identification of conserved three-dimensional (3D) patterns among a set of proteins can be important for protein function prediction, protein clustering, drug discovery and the establishment of evolutionary relationships. Thus, several computational applications to identify, describe and compare 3D patterns (or motifs) have been developed. Often, these tools consider a 3D pattern as that described by the residues surrounding co-crystallized/docked ligands available from X-ray crystal structures or homology models. Nevertheless, many of the protein structures stored in public databases do not provide information about the location and characteristics of ligand binding sites and/or other important 3D patterns such as allosteric sites, enzyme-cofactor interaction motifs, etc. This makes it necessary to develop new ligand-independent methods to search and compare 3D patterns in all available protein structures. Here we introduce Geomfinder, an intuitive, flexible, alignment-free and ligand-independent web server for detailed estimation of similarities between all pairs of 3D patterns detected in any two given protein structures. We used around 1100 protein structures to form pairs of proteins which were assessed with Geomfinder. In these analyses each protein was considered in only one pair (e.g. in a subset of 100 different proteins, 50 pairs of proteins can be defined). Thus: (a) Geomfinder detected identical pairs of 3D patterns in a series of monoamine oxidase-B structures, which corresponded to the effectively similar ligand binding sites of these proteins; (b) we identified structural similarities among pairs of protein structures which are targets of compounds such as acarbose, benzamidine, adenosine triphosphate and pyridoxal phosphate; these similar 3D patterns are not detected using sequence-based methods; (c) the detailed evaluation of three specific cases showed the versatility

  20. The synaptonemal complex of basal metazoan hydra: more similarities to vertebrate than invertebrate meiosis model organisms.

    Science.gov (United States)

    Fraune, Johanna; Wiesner, Miriam; Benavente, Ricardo

    2014-03-20

    The synaptonemal complex (SC) is an evolutionarily well-conserved structure that mediates chromosome synapsis during prophase of the first meiotic division. Although its structure is conserved, the characterized protein components in the current metazoan meiosis model systems (Drosophila melanogaster, Caenorhabditis elegans, and Mus musculus) show no sequence homology, challenging the idea of a single evolutionary origin of the SC. However, our recent studies revealed the monophyletic origin of the mammalian SC protein components, many of which are ancient within Metazoa and already present in the cnidarian Hydra. Remarkably, a comparison between different model systems disclosed a great similarity between the SC components of Hydra and mammals, while the proteins of the ecdysozoan systems (D. melanogaster and C. elegans) differ significantly. In this review, we introduce the basal-branching metazoan species Hydra as a potential novel invertebrate model system for meiosis research and particularly for the investigation of SC evolution, function and assembly. Available methods for SC research in Hydra are also summarized. Copyright © 2014. Published by Elsevier Ltd.

  1. Stereotype content model across cultures: Towards universal similarities and some differences

    Science.gov (United States)

    Cuddy, Amy J. C.; Fiske, Susan T.; Kwan, Virginia S. Y.; Glick, Peter; Demoulin, Stéphanie; Leyens, Jacques-Philippe; Bond, Michael Harris; Croizet, Jean-Claude; Ellemers, Naomi; Sleebos, Ed; Htun, Tin Tin; Kim, Hyun-Jeong; Maio, Greg; Perry, Judi; Petkova, Kristina; Todorov, Valery; Rodríguez-Bailón, Rosa; Morales, Elena; Moya, Miguel; Palacios, Marisol; Smith, Vanessa; Perez, Rolando; Vala, Jorge; Ziegler, Rene

    2014-01-01

    The stereotype content model (SCM) proposes potentially universal principles of societal stereotypes and their relation to social structure. Here, the SCM reveals theoretically grounded, cross-cultural, cross-group similarities and one difference across 10 non-US nations. Seven European (individualist) and three East Asian (collectivist) nations (N = 1,028) support three hypothesized cross-cultural similarities: (a) perceived warmth and competence reliably differentiate societal group stereotypes; (b) many out-groups receive ambivalent stereotypes (high on one dimension; low on the other); and (c) high-status groups stereotypically are competent, whereas competitive groups stereotypically lack warmth. The data uncover one consequential cross-cultural difference: (d) the more collectivist cultures do not locate reference groups (in-groups and societal prototype groups) in the most positive cluster (high-competence/high-warmth), unlike individualist cultures. This demonstrates out-group derogation without obvious reference-group favouritism. The SCM can serve as a pancultural tool for predicting group stereotypes from structural relations with other groups in society, and for comparing across societies. PMID:19178758

  2. Similar Biophysical Abnormalities in Glomeruli and Podocytes from Two Distinct Models.

    Science.gov (United States)

    Embry, Addie E; Liu, Zhenan; Henderson, Joel M; Byfield, F Jefferson; Liu, Liping; Yoon, Joonho; Wu, Zhenzhen; Cruz, Katrina; Moradi, Sara; Gillombardo, C Barton; Hussain, Rihanna Z; Doelger, Richard; Stuve, Olaf; Chang, Audrey N; Janmey, Paul A; Bruggeman, Leslie A; Miller, R Tyler

    2018-03-23

    Background FSGS is a pattern of podocyte injury that leads to loss of glomerular function. Podocytes support other podocytes and glomerular capillary structure, oppose hemodynamic forces, form the slit diaphragm, and have mechanical properties that permit these functions. However, the biophysical characteristics of glomeruli and podocytes in disease remain unclear. Methods Using microindentation, atomic force microscopy, immunofluorescence microscopy, quantitative RT-PCR, and a three-dimensional collagen gel contraction assay, we studied the biophysical and structural properties of glomeruli and podocytes in chronic (Tg26 mice [HIV protein expression]) and acute (protamine administration [cytoskeletal rearrangement]) models of podocyte injury. Results Compared with wild-type glomeruli, Tg26 glomeruli became progressively more deformable with disease progression, despite increased collagen content. Tg26 podocytes had disordered cytoskeletons, markedly abnormal focal adhesions, and weaker adhesion; they failed to respond to mechanical signals and exerted minimal traction force in three-dimensional collagen gels. Protamine treatment had similar but milder effects on glomeruli and podocytes. Conclusions Reduced structural integrity of Tg26 podocytes causes increased deformability of glomerular capillaries and limits the ability of capillaries to counter hemodynamic force, possibly leading to further podocyte injury. Loss of normal podocyte mechanical integrity could injure neighboring podocytes due to the absence of normal biophysical signals required for podocyte maintenance. The severe defects in podocyte mechanical behavior in the Tg26 model may explain why Tg26 glomeruli soften progressively, despite increased collagen deposition, and may be the basis for the rapid course of glomerular diseases associated with severe podocyte injury. In milder injury (protamine), similar processes occur but over a longer time. Copyright © 2018 by the American Society of Nephrology.

  3. Breastfeeding support for adolescent mothers: similarities and differences in the approach of midwives and qualified breastfeeding supporters

    Directory of Open Access Journals (Sweden)

    Burt Susan

    2006-11-01

    Full Text Available Abstract Background The protection, promotion and support of breastfeeding are now major public health priorities. It is well established that skilled support, voluntary or professional, proactively offered to women who want to breastfeed, can increase the initiation and/or duration of breastfeeding. Low levels of breastfeeding uptake and continuation amongst adolescent mothers in industrialised countries suggest that this is a group in particular need of breastfeeding support. Using qualitative methods, the present study aimed to investigate the similarities and differences in the approaches of midwives and qualified breastfeeding supporters (the Breastfeeding Network (BfN)) in supporting breastfeeding adolescent mothers. Methods The study was conducted in the North West of England between September 2001 and October 2002. The supportive approaches of 12 midwives and 12 BfN supporters were evaluated using vignettes: short descriptions of an event designed to obtain specific information from participants about their knowledge, perceptions and attitudes to a particular situation. Responses to the vignettes were analysed using thematic networks analysis, involving the extraction of basic themes by analysing each script line by line. The basic themes were then grouped to form organising themes and finally central global themes. Discussion took place and consensus was reached regarding the systematic development of the three levels of themes. Results Five components of support were identified: emotional, esteem, instrumental, informational and network support. Whilst the supportive approaches of both groups incorporated elements of each of the five components of support, BfN supporters placed greater emphasis upon providing emotional and esteem support and highlighted the need to elicit the mothers' existing knowledge, checking understanding through the use of open questions and utilising more tentative language. Midwives were more directive and gave more

  4. An integrated multicriteria decision-making approach for evaluating nuclear fuel cycle systems for long-term sustainability on the basis of an equilibrium model: Technique for order of preference by similarity to ideal solution, preference ranking organization method for enrichment evaluation, and multiattribute utility theory combined with analytic hierarchy process

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Sae Rom [Dept of Quantum Energy Chemical Engineering, Korea University of Science and Technology (KUST), Daejeon (Korea, Republic of); Choi, Sung Yeol [Ulsan National Institute of Science and Technology, Ulju (Korea, Republic of); Ko, Wonil [Nonproliferation System Development Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2017-02-15

    The focus on the issues surrounding spent nuclear fuel and lifetime extension of old nuclear power plants continues to grow nowadays. A transparent decision-making process to identify the best suitable nuclear fuel cycle (NFC) is considered to be the key task in the current situation. Through this study, an attempt is made to develop an equilibrium model for the NFC to calculate the material flows based on 1 TWh of electricity production, and to perform integrated multicriteria decision-making method analyses via the analytic hierarchy process technique for order of preference by similarity to ideal solution, preference ranking organization method for enrichment evaluation, and multiattribute utility theory methods. This comparative study is aimed at screening and ranking the three selected NFC options against five aspects: sustainability, environmental friendliness, economics, proliferation resistance, and technical feasibility. The selected fuel cycle options include pressurized water reactor (PWR) once-through cycle, PWR mixed oxide cycle, or pyroprocessing sodium-cooled fast reactor cycle. A sensitivity analysis was performed to prove the robustness of the results and explore the influence of criteria on the obtained ranking. As a result of the comparative analysis, the pyroprocessing sodium-cooled fast reactor cycle is determined to be the most competitive option among the NFC scenarios.
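
    The AHP step of these combined methods amounts to extracting a priority vector from a pairwise comparison matrix; the sketch below uses an invented comparison matrix for the five evaluation aspects and the commonly tabulated Saaty random index for the consistency check.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for the five aspects (sustainability,
# environmental friendliness, economics, proliferation resistance, technical
# feasibility); the judgement values are illustrative only.
A = np.array([[1,   3,   2,   4,   3],
              [1/3, 1,   1/2, 2,   1],
              [1/2, 2,   1,   3,   2],
              [1/4, 1/2, 1/3, 1,   1/2],
              [1/3, 1,   1/2, 2,   1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                     # principal eigenvalue index
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                        # AHP priority vector

# Consistency check (Saaty): CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = 1.12                                       # commonly tabulated RI for n = 5
print(np.round(weights, 3), "CR =", round(CI / RI, 3))
```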

  5. An integrated multicriteria decision-making approach for evaluating nuclear fuel cycle systems for long-term sustainability on the basis of an equilibrium model: Technique for order of preference by similarity to ideal solution, preference ranking organization method for enrichment evaluation, and multiattribute utility theory combined with analytic hierarchy process

    International Nuclear Information System (INIS)

    Yoon, Sae Rom; Choi, Sung Yeol; Ko, Wonil

    2017-01-01

    The focus on the issues surrounding spent nuclear fuel and lifetime extension of old nuclear power plants continues to grow nowadays. A transparent decision-making process to identify the best suitable nuclear fuel cycle (NFC) is considered to be the key task in the current situation. Through this study, an attempt is made to develop an equilibrium model for the NFC to calculate the material flows based on 1 TWh of electricity production, and to perform integrated multicriteria decision-making method analyses via the analytic hierarchy process technique for order of preference by similarity to ideal solution, preference ranking organization method for enrichment evaluation, and multiattribute utility theory methods. This comparative study is aimed at screening and ranking the three selected NFC options against five aspects: sustainability, environmental friendliness, economics, proliferation resistance, and technical feasibility. The selected fuel cycle options include pressurized water reactor (PWR) once-through cycle, PWR mixed oxide cycle, or pyroprocessing sodium-cooled fast reactor cycle. A sensitivity analysis was performed to prove the robustness of the results and explore the influence of criteria on the obtained ranking. As a result of the comparative analysis, the pyroprocessing sodium-cooled fast reactor cycle is determined to be the most competitive option among the NFC scenarios

  6. An Integrated Multicriteria Decision-Making Approach for Evaluating Nuclear Fuel Cycle Systems for Long-term Sustainability on the Basis of an Equilibrium Model: Technique for Order of Preference by Similarity to Ideal Solution, Preference Ranking Organization Method for Enrichment Evaluation, and Multiattribute Utility Theory Combined with Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Saerom Yoon

    2017-02-01

    Full Text Available The focus on the issues surrounding spent nuclear fuel and lifetime extension of old nuclear power plants continues to grow nowadays. A transparent decision-making process to identify the best suitable nuclear fuel cycle (NFC) is considered to be the key task in the current situation. Through this study, an attempt is made to develop an equilibrium model for the NFC to calculate the material flows based on 1 TWh of electricity production, and to perform integrated multicriteria decision-making method analyses via the analytic hierarchy process technique for order of preference by similarity to ideal solution, preference ranking organization method for enrichment evaluation, and multiattribute utility theory methods. This comparative study is aimed at screening and ranking the three selected NFC options against five aspects: sustainability, environmental friendliness, economics, proliferation resistance, and technical feasibility. The selected fuel cycle options include the pressurized water reactor (PWR) once-through cycle, PWR mixed oxide cycle, or pyroprocessing sodium-cooled fast reactor cycle. A sensitivity analysis was performed to prove the robustness of the results and explore the influence of criteria on the obtained ranking. As a result of the comparative analysis, the pyroprocessing sodium-cooled fast reactor cycle is determined to be the most competitive option among the NFC scenarios.

  7. More Similar than Different? Exploring Cultural Models of Depression among Latino Immigrants in Florida

    Directory of Open Access Journals (Sweden)

    Dinorah (Dina Martinez Tyson

    2011-01-01

    Full Text Available The Surgeon General's report, "Culture, Race, and Ethnicity: A Supplement to Mental Health," points to the need for subgroup-specific mental health research that explores the cultural variation and heterogeneity of the Latino population. Guided by cognitive anthropological theories of culture, we utilized ethnographic interviewing techniques to explore cultural models of depression among foreign-born Mexicans (n=30), Cubans (n=30), Colombians (n=30), and island-born Puerto Ricans (n=30), who represent the largest Latino groups in Florida. Results indicate that Colombian, Cuban, Mexican, and Puerto Rican immigrants showed strong intragroup consensus in their models of depression causality, symptoms, and treatment. We found more agreement than disagreement among all four groups regarding core descriptions of depression, which was largely unexpected but can potentially be explained by their common immigrant experiences. Findings expand our understanding of Latino subgroup similarities and differences in their conceptualization of depression and can be used to inform the adaptation of culturally relevant interventions in order to better serve Latino immigrant communities.

  8. Levy flights and self-similar exploratory behaviour of termite workers: beyond model fitting.

    Directory of Open Access Journals (Sweden)

    Octavio Miramontes

    Full Text Available Animal movements have been related to optimal foraging strategies in which self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power laws. Here we show, by analyzing over half a million movement displacements, that isolated termite workers actually exhibit a range of very interesting dynamical properties, including Lévy flights, in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale.
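
    One standard step beyond visual fitting is the continuous power-law maximum-likelihood estimator of Clauset, Shalizi and Newman for the exponent of displacement lengths above a threshold; the sketch below applies it to synthetic step lengths standing in for the termite data.

```python
import numpy as np

def powerlaw_mle(x, x_min):
    """Continuous power-law exponent MLE (Clauset-Shalizi-Newman form):
    alpha = 1 + n / sum(ln(x_i / x_min)) for x_i >= x_min."""
    x = np.asarray(x, dtype=float)
    tail = x[x >= x_min]
    n = tail.size
    alpha = 1.0 + n / np.sum(np.log(tail / x_min))
    stderr = (alpha - 1.0) / np.sqrt(n)
    return alpha, stderr

# Synthetic displacement lengths drawn from a power law with alpha = 2.0
# (inverse-CDF sampling), standing in for measured step lengths.
rng = np.random.default_rng(11)
x_min, alpha_true = 1.0, 2.0
u = rng.uniform(size=50_000)
steps = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

print(powerlaw_mle(steps, x_min))   # estimate should be close to 2.0
```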

  9. Propriedades termofísicas de soluções-modelo similares a sucos: parte II Thermophysical properties of model solutions similar to juice: part II

    Directory of Open Access Journals (Sweden)

    Sílvia Cristina Sobottka Rolim de Moura

    2005-09-01

    Full Text Available Thermophysical properties, density and viscosity of model solutions similar to juices were experimentally determined. The results were compared with those predicted by mathematical models (STATISTICA 6.0) and with values from the literature, as a function of chemical composition. To define the model solutions, a star experimental design was used, keeping the acid content fixed at 1.5% and varying water (82-98.5%), carbohydrate (0-15%) and fat (0-1.5%). Density was determined with a pycnometer. Viscosity was determined with a Brookfield model LVF viscometer. Thermal conductivity was calculated from the thermal diffusivity and specific heat values (presented in Part I of this work, MOURA [7]) and the density. The results for each property were analyzed through response surfaces. Significant results were found for the properties, showing that the fitted models represent the changes in the thermal and physical properties of the juices with changes in composition and temperature.

  10. Approach to analysis of inter-regional similarity of investment activity support measures in legislation of regions (on the example of Krasnoyarsk region

    Directory of Open Access Journals (Sweden)

    Valentina F. Lapo

    2017-01-01

    revealed concurrence in the dynamics of the use of some stimulation methods in the Krasnoyarsk region and in the other regions of the Russian Federation for 2005-2016. Among them are measures of subsidizing, concessionary terms for the use and granting of land plots, concession agreements on property and the creation of industrial parks, state-private partnership and others. We have found groups of regions in which there are trends towards harmonization, or towards divergence, in the use of particular kinds of stimulation. Thus, the offered approach and set of measuring instruments make it possible to carry out research on the inter-regional similarity of legal documents. They allow answering the question of the degree of similarity of the systems of investment activity stimulation adopted in the regions; estimating the joint dynamics of changes in the stimulation systems; determining the degree of concurrence on separate support measures; and analyzing the similarity of the current state of, and the processes of change in, the legislation. The approach permits investigating directions of harmonization or divergence of regional approaches to investment activity stimulation. The received results can form a basis for further economic-statistical and econometric research on the efficiency of methods for stimulating investment activity. They will allow structuring the objects of research and identifying more homogeneous groups of regions by stimulation method. The proximity coefficient matrix will be especially useful in spatial econometric models. The offered approach and parameters can be applied to the study of other provisions of the regional legislation.

  11. Merging Digital Surface Models Implementing Bayesian Approaches

    Science.gov (United States)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The data used have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is deemed preferable when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can then be alleviated by introducing a priori estimates. To infer the prior data, it is assumed that the building roofs are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied to the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs and some of their characteristics, such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established maximum likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to any other sourced DSMs.
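
    A simplified stand-in for the Bayesian merging is per-pixel precision-weighted (Gaussian) fusion of two DSMs plus an optional prior surface; the rasters, variances, and prior below are synthetic, and this is not the authors' formulation.

```python
import numpy as np

def fuse_dsms(dsm_a, var_a, dsm_b, var_b, prior_mean=None, prior_var=None):
    """Per-pixel Gaussian (precision-weighted) fusion of two DSMs.  If a prior
    surface is supplied (e.g. a smoothed estimate), it enters as a third
    observation; a simplified stand-in for the Bayesian merging."""
    precision = 1.0 / var_a + 1.0 / var_b
    weighted = dsm_a / var_a + dsm_b / var_b
    if prior_mean is not None:
        precision += 1.0 / prior_var
        weighted += prior_mean / prior_var
    return weighted / precision, 1.0 / precision   # posterior mean, variance

rng = np.random.default_rng(5)
truth = np.outer(np.linspace(100, 120, 50), np.ones(50))     # synthetic terrain
dsm_a = truth + rng.normal(scale=0.8, size=truth.shape)      # lower-noise DSM
dsm_b = truth + rng.normal(scale=1.5, size=truth.shape)      # higher-noise DSM
prior = truth.mean() * np.ones_like(truth)                   # crude smooth prior

fused, post_var = fuse_dsms(dsm_a, 0.8**2, dsm_b, 1.5**2,
                            prior_mean=prior, prior_var=25.0)
print(float(np.abs(fused - truth).mean()), float(np.abs(dsm_b - truth).mean()))
```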

  12. MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES

    Directory of Open Access Journals (Sweden)

    H. Sadeq

    2016-06-01

    Full Text Available In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The data used have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is deemed preferable when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can then be alleviated by introducing a priori estimates. To infer the prior data, it is assumed that the building roofs are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied to the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs and some of their characteristics, such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established maximum likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to any other sourced DSMs.

  13. Propriedades termofísicas de soluções modelo similares a sucos - Parte I Thermophysical properties of model solutions similar to juice - Part I

    Directory of Open Access Journals (Sweden)

    Silvia Cristina Sobottka Rolim de Moura

    2003-04-01

    Full Text Available Thermophysical properties (thermal diffusivity and specific heat) of model solutions similar to juices were determined experimentally and fitted to mathematical models (STATISTICA 6.0) as a function of their chemical composition, and the values obtained were compared with those predicted by the models and with values reported in the literature. To define the model solutions, a star experimental design was adopted, keeping the acid content fixed at 1.5% and varying water (82-98.5%), carbohydrate (0-15%) and fat (0-1.5%). Specific heat was determined by the Hwang & Hayakawa method and thermal diffusivity by the Dickerson method. The results for each property were analysed through response surfaces. Significant results were found for the properties, showing that the fitted models adequately represent the changes in the thermal properties of the juices with composition and temperature.

  14. Benchmarking whole-building energy performance with multi-criteria technique for order preference by similarity to ideal solution using a selective objective-weighting approach

    International Nuclear Information System (INIS)

    Wang, Endong

    2015-01-01

    Highlights: • A TOPSIS-based multi-criteria whole-building energy benchmarking approach is developed. • A selective objective-weighting procedure is used for a cost-accuracy trade-off. • Results from a real case validated the benefits of the presented approach. - Abstract: This paper develops a robust multi-criteria building energy efficiency benchmarking approach based on the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The approach is explicitly selective, addressing the multicollinearity trap that arises from subjectivity in selecting energy variables, while considering the cost-accuracy trade-off. It objectively weights the relative importance of the individual pertinent efficiency-measuring criteria using either multiple linear regression or principal component analysis, contingent on metadata quality. Through this approach, building energy performance is comprehensively evaluated and optimized, while the significant challenges associated with conventional single-criterion benchmarking models are avoided. Together with a clustering algorithm applied to a three-year panel dataset, a benchmarking case of 324 single-family dwellings demonstrated the improved robustness of the presented multi-criteria approach over conventional single-criterion ones.
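
    As a reminder of how the TOPSIS ranking step works, here is a minimal sketch: alternatives are scored by their closeness to an ideal solution after normalisation and weighting. The criteria values, weights and benefit/cost flags below are invented for illustration and are not the paper's data or its selective weighting scheme.

```python
import numpy as np

# Minimal TOPSIS sketch: rows = buildings, columns = efficiency criteria.
# `benefit` marks criteria where larger values are better.
def topsis(matrix, weights, benefit):
    m = matrix / np.linalg.norm(matrix, axis=0)   # vector-normalise each criterion
    v = m * weights                               # apply criterion weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)     # distance to ideal solution
    d_neg = np.linalg.norm(v - anti, axis=1)      # distance to anti-ideal solution
    return d_neg / (d_pos + d_neg)                # relative closeness (higher is better)

scores = topsis(
    np.array([[120.0, 0.8, 3.0],                  # hypothetical criteria values
              [ 95.0, 0.6, 2.0],
              [150.0, 0.9, 4.0]]),
    weights=np.array([0.5, 0.3, 0.2]),            # e.g. obtained from regression or PCA
    benefit=np.array([False, True, False]),
)
print(scores.argsort()[::-1])                     # buildings ranked best to worst
```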

  15. An application of superpositions of two-state Markovian sources to the modelling of self-similar behaviour

    DEFF Research Database (Denmark)

    Andersen, Allan T.; Nielsen, Bo Friis

    1997-01-01

    We present a modelling framework and a fitting method for modelling second-order self-similar behaviour with the Markovian arrival process (MAP). The fitting method is based on fitting to the autocorrelation function of counts of a second-order self-similar process. It is shown that with this fittin...

  16. Vortex forcing model for turbulent flow over spanwise-heterogeneous topographies: scaling arguments and similarity solution

    Science.gov (United States)

    Anderson, William; Yang, Jianzhi

    2017-11-01

    Spanwise surface heterogeneity beneath high-Reynolds number, fully-rough wall turbulence is known to induce mean secondary flows in the form of counter-rotating streamwise vortices. The secondary flows are a manifestation of Prandtl's secondary flow of the second kind - driven and sustained by spatial heterogeneity of components of the turbulent (Reynolds averaged) stress tensor. The spacing between adjacent surface heterogeneities serves as a control on the spatial extent of the counter-rotating cells, while their intensity is controlled by the spanwise gradient in imposed drag (where larger gradients, associated with more dramatic transitions in roughness, induce stronger cells). In this work, we have performed an order of magnitude analysis of the mean (Reynolds averaged) streamwise vorticity transport equation, revealing the scaling dependence of circulation upon spanwise spacing. The scaling arguments are supported by simulation data. Then, we demonstrate that mean streamwise velocity can be predicted a priori via a similarity solution to the mean streamwise vorticity transport equation. A vortex forcing term was used to represent the effects of spanwise topographic heterogeneity within the flow. Efficacy of the vortex forcing term was established with large-eddy simulation cases, wherein vortex forcing model parameters were altered to capture different values of spanwise spacing.

  17. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    Buslik, A.

    1994-01-01

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given

  18. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to the modeling of tree crown development. These approaches are experimental (i.e. regression), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The common assumption of these is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model on experimental data. In the paper the different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs

  19. System Behavior Models: A Survey of Approaches

    Science.gov (United States)

    2016-06-01

    The spiral model was chosen for researching and structuring this thesis (shown in Figure 1). This approach allowed multiple iterations of source material, applying and refining it through iteration. Scope: the research is limited to a literature review.

  20. Learning Actions Models: Qualitative Approach

    DEFF Research Database (Denmark)

    Bolander, Thomas; Gierasimczuk, Nina

    2015-01-01

    In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite ident...

  1. A Model for Comparative Analysis of the Similarity between Android and iOS Operating Systems

    Directory of Open Access Journals (Sweden)

    Lixandroiu R.

    2014-12-01

    Full Text Available Due to the recent expansion of mobile devices, in this article we analyse two of the most widely used mobile operating systems (OSs). The analysis is based on calculating Jaccard's similarity coefficient. To complete the analysis, we developed a hierarchy of factors for evaluating the OSs. The analysis has shown that the two OSs are similar in terms of functionality, but there are a number of factors that, when weighted, make a difference.
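
    For readers unfamiliar with the coefficient used here, the sketch below computes Jaccard similarity over two sets of features. The feature lists are invented placeholders, not the factor hierarchy from the article.

```python
# Jaccard's similarity coefficient: |intersection| / |union| of two feature sets.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical functionality sets for the two operating systems
android = {"multitasking", "widgets", "voice_assistant", "nfc_payments", "sideloading"}
ios     = {"multitasking", "widgets", "voice_assistant", "nfc_payments", "video_calls"}
print(f"Jaccard similarity: {jaccard(android, ios):.2f}")
```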

  2. Ethnic differences in the effects of media on body image: the effects of priming with ethnically different or similar models.

    Science.gov (United States)

    Bruns, Gina L; Carter, Michele M

    2015-04-01

    Media exposure has been positively correlated with body dissatisfaction. While body image concerns are common, being African American has been found to be a protective factor in the development of body dissatisfaction. Participants viewed ten advertisements showing one of the following: 1) ethnically-similar thin models; 2) ethnically-different thin models; 3) ethnically-similar plus-sized models; or 4) ethnically-diverse plus-sized models. Following exposure, body image was measured. African American women had less body dissatisfaction than Caucasian women. Ethnically-similar thin-model conditions did not elicit greater body dissatisfaction scores than ethnically-different thin or plus-sized models, nor did the ethnicity of the model impact ratings of body dissatisfaction for women of either race. There were no differences among the African American women exposed to plus-sized versus thin models. Among Caucasian women, exposure to plus-sized models resulted in greater body dissatisfaction than exposure to thin models. Results support existing literature that African American women experience less body dissatisfaction than Caucasian women even following exposure to an ethnically-similar thin model. Additionally, women exposed to plus-sized model conditions experienced greater body dissatisfaction than those shown thin models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. PROPRIEDADES TERMOFÍSICAS DE SOLUÇÕES MODELO SIMILARES A CREME DE LEITE THERMOPHYSICAL PROPERTIES OF MODEL SOLUTIONS SIMILAR TO CREAM

    Directory of Open Access Journals (Sweden)

    Silvia Cristina Sobottka Rolim de MOURA

    2001-08-01

    Full Text Available The demand for UHT cream has increased significantly. Several companies have diversified and increased their production, since increasingly demanding consumers want creams with a wide range of fat contents. The objective of the present work was to determine the density, apparent viscosity and thermal diffusivity of model solutions similar to cream in the temperature range of 30 to 70°C, studying the influence of fat content and temperature on the physical properties of the products. The statistical design applied was a 3X5 factorial, with fat content and temperature levels fixed at 15%, 25% and 35%, and 30°C, 40°C, 50°C, 60°C and 70°C, respectively (STATISTICA 6.0). The carbohydrate and protein contents were kept constant at 3% each. Density was determined by the fluid displacement method in a pycnometer; thermal diffusivity was based on the Dickerson method, and apparent viscosity was determined in a Rheotest 2.1 rheometer. The results for each property were analysed by the response surface method. The data obtained showed significant results, indicating that the models reliably represented the variation of these properties with fat content (%) and temperature (°C).

  4. The positive group affect spiral : a dynamic model of the emergence of positive affective similarity in work groups

    NARCIS (Netherlands)

    Walter, F.; Bruch, H.

    This conceptual paper seeks to clarify the process of the emergence of positive collective affect. Specifically, it develops a dynamic model of the emergence of positive affective similarity in work groups. It is suggested that positive group affective similarity and within-group relationship

  5. The role of visual similarity and memory in body model distortions.

    Science.gov (United States)

    Saulton, Aurelie; Longo, Matthew R; Wong, Hong Yu; Bülthoff, Heinrich H; de la Rosa, Stephan

    2016-02-01

    Several studies have shown that the perception of one's own hand size is distorted in proprioceptive localization tasks. It has been suggested that those distortions mirror somatosensory anisotropies. Recent research suggests that non-corporeal items also show some spatial distortions. In order to investigate the psychological processes underlying the localization task, we investigated the influences of visual similarity and memory on distortions observed on corporeal and non-corporeal items. In experiment 1, participants indicated the location of landmarks on: their own hand, a rubber hand (rated as most similar to the real hand), and a rake (rated as least similar to the real hand). Results show no significant differences between rake and rubber hand distortions but both items were significantly less distorted than the hand. Experiments 2 and 3 explored the role of memory in spatial distance judgments of the hand, the rake and the rubber hand. Spatial representations of items measured in experiments 2 and 3 were also distorted but showed the tendency to be smaller than in localization tasks. While memory and visual similarity seem to contribute to explain qualitative similarities in distortions between the hand and non-corporeal items, those factors cannot explain the larger magnitude observed in hand distortions. Copyright © 2015. Published by Elsevier B.V.

  6. Accessing Internal Leadership Positions at School: Testing The Similarity-Attraction Approach Regarding Gender in Three Educational Systems in Israel

    Science.gov (United States)

    Addi-Raccah, Audrey

    2006-01-01

    Background: Women school leaders may act as social agents who promote gender equality, but evidence is inconclusive regarding the effect of women's leadership on gender stratification in the workplace. Purpose: Based on the similarity-attraction perspective, this study examined male and female school leaders' relations to similar others in three…

  7. Modeling the kinetics of hydrates formation using phase field method under similar conditions of petroleum pipelines; Modelagem da cinetica de formacao de hidratos utilizando o Modelo do Campo de Fase em condicoes similares a dutos de petroleo

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Mabelle Biancardi; Castro, Jose Adilson de; Silva, Alexandre Jose da [Universidade Federal Fluminense (UFF), Volta Redonda, RJ (Brazil). Programa de Pos-Graduacao em Engenharia Metalurgica], e-mails: mabelle@metal.eeimvr.uff.br; adilson@metal.eeimvr.uff.br; ajs@metal.eeimvr.uff.br

    2008-10-15

    Natural hydrates are ice-like crystalline compounds formed during oil extraction, transportation and processing. This paper deals with the kinetics of hydrate formation using the phase field approach coupled with the transport equation of energy. The kinetic parameters of hydrate formation were obtained by fitting the proposed model to experimental results obtained under conditions similar to those of oil extraction. The effects of thermal and nucleation conditions were investigated, while the rate of formation and the morphology were obtained by numerical computation. Model results for growth kinetics and morphology presented good agreement with the experimental ones. Simulation results indicated that super-cooling and pressure were decisive parameters for hydrate growth, morphology and interface thickness. (author)

  8. Global energy modeling - A biophysical approach

    Energy Technology Data Exchange (ETDEWEB)

    Dale, Michael

    2010-09-15

    This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.

  9. Molecular Quantum Similarity Measures from Fermi hole Densities: Modeling Hammett Sigma Constants

    Czech Academy of Sciences Publication Activity Database

    Girónes, X.; Ponec, Robert

    2006-01-01

    Roč. 46, č. 3 (2006), s. 1388-1393 ISSN 1549-9596 Grant - others:SMCT(ES) SAF2000/0223/C03/01 Institutional research plan: CEZ:AV0Z40720504 Keywords : molecular quantum similarity measures * Fermi hole densities * substituent effect Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.423, year: 2006

  10. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

    SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of a separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we...

  11. Research on Kalman Filtering Algorithm for Deformation Information Series of Similar Single-Difference Model

    Institute of Scientific and Technical Information of China (English)

    L(U) Wei-cai; XU Shao-quan

    2004-01-01

    When the similar single-difference methodology (SSDM) is used to solve for the deformation values of the monitoring points, the deformation information series is sometimes unstable. In order to overcome this shortcoming, a Kalman filtering algorithm for this series is established, and its correctness and validity are verified with test data obtained on a movable platform in the plane. The results show that Kalman filtering can improve the correctness, reliability and stability of the deformation information series.
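
    A scalar Kalman filter of the kind described above can be sketched in a few lines. The process and measurement noise values and the simulated series below are illustrative assumptions, not the parameters or data of the cited study.

```python
import numpy as np

# Sketch of a scalar Kalman filter applied to a deformation information series,
# using a constant-state (random-walk) model for the deformation value.
def kalman_1d(series, q=1e-4, r=4e-2, x0=0.0, p0=1.0):
    x, p, out = x0, p0, []
    for z in series:
        p = p + q                     # predict: state uncertainty grows by process noise q
        k = p / (p + r)               # Kalman gain given measurement noise r
        x = x + k * (z - x)           # update with the new deformation observation
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

noisy = 0.5 + 0.2 * np.random.randn(50)   # simulated unstable deformation series (mm)
print(kalman_1d(noisy)[-5:])              # smoothed deformation estimates
```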

  12. Boson mapping of the shell model algebra obtained from a seniority-dictated similarity transformation

    International Nuclear Information System (INIS)

    Geyer, H.B.

    1986-01-01

    The qualitative ideas put forward by Geyer and Lee are given quantitative content by constructing a similarity transformation which reexpresses the Dyson boson images of the single-j shell fermion operators in terms of seniority bosons. It is shown that the results of Otsuka, Arima, and Iachello, or generalizations thereof which include g bosons or even bosons with J>4, can be obtained in an economic and transparent way without resorting to any comparison of matrix elements

  13. Modelling the perceptual similarity of facial expressions from image statistics and neural responses.

    Science.gov (United States)

    Sormaz, Mladen; Watson, David M; Smith, William A P; Young, Andrew W; Andrews, Timothy J

    2016-04-01

    The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. From epidemics to information propagation : Striking differences in structurally similar adaptive network models

    NARCIS (Netherlands)

    Trajanovski, S.; Guo, D.; Van Mieghem, P.F.A.

    2015-01-01

    The continuous-time adaptive susceptible-infected-susceptible (ASIS) epidemic model and the adaptive information diffusion (AID) model are two adaptive spreading processes on networks, in which a link in the network changes depending on the infectious state of its end nodes, but in opposite ways:

  15. Is having similar eye movement patterns during face learning and recognition beneficial for recognition performance? Evidence from hidden Markov modeling.

    Science.gov (United States)

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2017-12-01

    The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
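
    A minimal sketch of an HMM over fixation locations, in the spirit of the approach described above, is given below. The hmmlearn usage is a generic choice on my part (not the authors' toolbox), and the simulated fixation clusters for "face centre" and "left eye" are invented.

```python
import numpy as np
from hmmlearn import hmm

# Fit a 2-state Gaussian HMM to simulated (x, y) fixation positions; the states
# play the role of regions of interest, and the transition matrix captures the
# temporal structure of the eye movements.
rng = np.random.default_rng(2)
fixations = np.vstack([rng.normal([0.0, 0.0], 0.5, size=(60, 2)),   # face centre (hypothetical)
                       rng.normal([-1.0, 1.0], 0.3, size=(40, 2))])  # left eye (hypothetical)

model = hmm.GaussianHMM(n_components=2, covariance_type="full", n_iter=50)
model.fit(fixations)
print(model.transmat_.round(2))   # learned state-to-state transition probabilities
```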

  16. A study on the quantitative model of human response time using the amount and the similarity of information

    International Nuclear Information System (INIS)

    Lee, Sung Jin

    2006-02-01

    The mental capacity to retain or recall information, i.e. memory, is related to human performance during information processing. Although a large number of studies have been carried out on human performance, little is known about the similarity effect. The purpose of this study was to propose and validate a quantitative, predictive model of human response time in the user interface based on the basic concepts of information amount, similarity and degree of practice. Human performance was difficult to explain by similarity or information amount alone. There were two difficulties: constructing a quantitative model of human response time and validating the proposed model by experimental work. A quantitative model based on Hick's law, the law of practice and similarity theory was developed. The model was validated under various experimental conditions by measuring participants' response times in a computer-based display environment. Human performance was improved by the degree of similarity and practice in the user interface. We also found an age-related effect: performance degraded with increasing age. The proposed model may be useful for training operators who will handle such interfaces and for predicting human performance under changes to the system design
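
    To show how these ingredients can combine, here is a hedged sketch of a response-time model built from Hick's law, the power law of practice, and a similarity penalty. The functional form and all coefficients are assumptions for illustration; they are not the model fitted in the cited thesis.

```python
import numpy as np

# Illustrative response-time model: Hick's law (information amount) scaled by
# the power law of practice and inflated by a similarity penalty.
def response_time(n_alternatives, trials, similarity, a=0.2, b=0.15, c=0.3, alpha=0.4):
    hick = a + b * np.log2(n_alternatives + 1)    # Hick's law: RT grows with information amount
    practice = trials ** (-alpha)                 # power law of practice: RT falls with repetition
    penalty = 1.0 + c * similarity                # more similar items -> harder discrimination
    return hick * practice * penalty

print(response_time(n_alternatives=8, trials=10, similarity=0.6))   # seconds (toy values)
```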

  17. An Integrated Multicriteria Decision-Making Approach for Evaluating Nuclear Fuel Cycle Systems for Long-term Sustainability on the Basis of an Equilibrium Model: Technique for Order of Preference by Similarity to Ideal Solution, Preference Ranking Organization Method for Enrichment Evaluation, and Multiattribute Utility Theory Combined with Analytic Hierarchy Process

    OpenAIRE

    Saerom Yoon; Sungyeol Choi; Wonil Ko

    2017-01-01

    The focus on the issues surrounding spent nuclear fuel and lifetime extension of old nuclear power plants continues to grow nowadays. A transparent decision-making process to identify the best suitable nuclear fuel cycle (NFC) is considered to be the key task in the current situation. Through this study, an attempt is made to develop an equilibrium model for the NFC to calculate the material flows based on 1 TWh of electricity production, and to perform integrated multicriteria decision-makin...

  18. Deep Convolutional Neural Networks Outperform Feature-Based But Not Categorical Models in Explaining Object Similarity Judgments

    Science.gov (United States)

    Jozwik, Kamila M.; Kriegeskorte, Nikolaus; Storrs, Katherine R.; Mur, Marieke

    2017-01-01

    Recent advances in Deep convolutional Neural Networks (DNNs) have enabled unprecedentedly accurate computational models of brain representations, and present an exciting opportunity to model diverse cognitive functions. State-of-the-art DNNs achieve human-level performance on object categorisation, but it is unclear how well they capture human behavior on complex cognitive tasks. Recent reports suggest that DNNs can explain significant variance in one such task, judging object similarity. Here, we extend these findings by replicating them for a rich set of object images, comparing performance across layers within two DNNs of different depths, and examining how the DNNs’ performance compares to that of non-computational “conceptual” models. Human observers performed similarity judgments for a set of 92 images of real-world objects. Representations of the same images were obtained in each of the layers of two DNNs of different depths (8-layer AlexNet and 16-layer VGG-16). To create conceptual models, other human observers generated visual-feature labels (e.g., “eye”) and category labels (e.g., “animal”) for the same image set. Feature labels were divided into parts, colors, textures and contours, while category labels were divided into subordinate, basic, and superordinate categories. We fitted models derived from the features, categories, and from each layer of each DNN to the similarity judgments, using representational similarity analysis to evaluate model performance. In both DNNs, similarity within the last layer explains most of the explainable variance in human similarity judgments. The last layer outperforms almost all feature-based models. Late and mid-level layers outperform some but not all feature-based models. Importantly, categorical models predict similarity judgments significantly better than any DNN layer. Our results provide further evidence for commonalities between DNNs and brain representations. Models derived from visual features
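
    The core evaluation step described above, representational similarity analysis, can be sketched compactly: build a representational dissimilarity matrix (RDM) from a model layer and correlate it with a behavioural RDM. The random arrays below stand in for DNN-layer activations and human judgments; they are not the study's 92-image dataset.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
layer_features = rng.normal(size=(92, 4096))       # images x units (hypothetical DNN layer)
human_dissim = rng.uniform(size=92 * 91 // 2)       # condensed human-judgment RDM (placeholder)

model_rdm = pdist(layer_features, metric="correlation")   # condensed model RDM
rho, _ = spearmanr(model_rdm, human_dissim)                # rank-correlate the two RDMs
print(f"model-behaviour RDM correlation: {rho:.3f}")
```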

  19. Analyzing and leveraging self-similarity for variable resolution atmospheric models

    Science.gov (United States)

    O'Brien, Travis; Collins, William

    2015-04-01

    Variable resolution modeling techniques are rapidly becoming a popular strategy for achieving high resolution in a global atmospheric model without the computational cost of global high resolution. However, recent studies have demonstrated a variety of resolution-dependent, and seemingly artificial, features. We argue that the scaling properties of the atmosphere are key to understanding how the statistics of an atmospheric model should change with resolution. We provide two such examples. In the first example we show that the scaling properties of the cloud number distribution define how the ratio of resolved to unresolved clouds should increase with resolution. We show that the loss of resolved clouds, in the high resolution region of variable resolution simulations, with the Community Atmosphere Model version 4 (CAM4) is an artifact of the model's treatment of condensed water (this artifact is significantly reduced in CAM5). In the second example we show that the scaling properties of the horizontal velocity field, combined with the incompressibility assumption, necessarily result in an intensification of vertical mass flux as resolution increases. We show that such an increase is present in a wide variety of models, including CAM and the regional climate models of the ENSEMBLES intercomparison. We present theoretical arguments linking this increase to the intensification of precipitation with increasing resolution.

  20. Similarities between the Hubbard and Periodic Anderson Models at Finite Temperatures

    International Nuclear Information System (INIS)

    Held, K.; Huscroft, C.; Scalettar, R. T.; McMahan, A. K.

    2000-01-01

    The single band Hubbard and the two band periodic Anderson Hamiltonians have traditionally been applied to rather different physical problems--the Mott transition and itinerant magnetism, and Kondo singlet formation and scattering off localized magnetic states, respectively. In this paper, we compare the magnetic and charge correlations, and spectral functions, of the two systems. We show quantitatively that they exhibit remarkably similar behavior, including a nearly identical topology of the finite temperature phase diagrams at half filling. We address potential implications of this for theories of the rare earth ''volume collapse'' transition. (c) 2000 The American Physical Society

  1. Geometrical approach to fluid models

    International Nuclear Information System (INIS)

    Kuvshinov, B.N.; Schep, T.J.

    1997-01-01

    Differential geometry based upon the Cartan calculus of differential forms is applied to investigate invariant properties of equations that describe the motion of continuous media. The main feature of this approach is that physical quantities are treated as geometrical objects. The geometrical notion of invariance is introduced in terms of Lie derivatives and a general procedure for the construction of local and integral fluid invariants is presented. The solutions of the equations for invariant fields can be written in terms of Lagrange variables. A generalization of the Hamiltonian formalism for finite-dimensional systems to continuous media is proposed. Analogously to finite-dimensional systems, Hamiltonian fluids are introduced as systems that annihilate an exact two-form. It is shown that Euler and ideal, charged fluids satisfy this local definition of a Hamiltonian structure. A new class of scalar invariants of Hamiltonian fluids is constructed that generalizes the invariants that are related with gauge transformations and with symmetries (Noether). copyright 1997 American Institute of Physics

  2. Modeling the angular motion dynamics of spacecraft with a magnetic attitude control system based on experimental studies and dynamic similarity

    Science.gov (United States)

    Kulkov, V. M.; Medvedskii, A. L.; Terentyev, V. V.; Firsyuk, S. O.; Shemyakov, A. O.

    2017-12-01

    The problem of spacecraft attitude control using electromagnetic systems interacting with the Earth's magnetic field is considered. A set of dimensionless parameters has been formed to investigate the spacecraft orientation regimes based on dynamically similar models. The results of experimental studies of small spacecraft with a magnetic attitude control system can be extrapolated to the in-orbit spacecraft motion control regimes by using the methods of the dimensional and similarity theory.

  3. Current approaches to gene regulatory network modelling

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2007-09-01

    Full Text Available Abstract Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.

  4. A general model for metabolic scaling in self-similar asymmetric networks.

    Directory of Open Access Journals (Sweden)

    Alexander Byers Brummer

    2017-03-01

    Full Text Available How a particular attribute of an organism changes or scales with its body size is known as an allometry. Biological allometries, such as metabolic scaling, have been hypothesized to result from selection to maximize how vascular networks fill space yet minimize internal transport distances and resistances. The West, Brown, Enquist (WBE) model argues that these two principles (space-filling and energy minimization) are (i) general principles underlying the evolution of the diversity of biological networks across plants and animals and (ii) can be used to predict how the resulting geometry of biological networks then governs their allometric scaling. Perhaps the most central biological allometry is how metabolic rate scales with body size. A core assumption of the WBE model is that networks are symmetric with respect to their geometric properties. That is, any two given branches within the same generation in the network are assumed to have identical lengths and radii. However, biological networks are rarely if ever symmetric. An open question is: Does incorporating asymmetric branching change or influence the predictions of the WBE model? We derive a general network model that relaxes the symmetric assumption and define two classes of asymmetrically bifurcating networks. We show that asymmetric branching can be incorporated into the WBE model. This asymmetric version of the WBE model results in several theoretical predictions for the structure, physiology, and metabolism of organisms, specifically in the case of the cardiovascular system. We show how network asymmetry can now be incorporated into the many allometric scaling relationships via total network volume. Most importantly, we show that the 3/4 metabolic scaling exponent from Kleiber's Law can still be attained within many asymmetric networks.
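
    For orientation, the short sketch below reproduces the symmetric WBE limit that the asymmetric model generalises: with branching ratio n, area-preserving radius scaling and space-filling length scaling, the predicted metabolic exponent approaches 3/4. This is the textbook symmetric calculation, not the paper's asymmetric derivation.

```python
import numpy as np

# Symmetric WBE-style limit: beta is the radius ratio and gamma the length ratio
# between successive branching generations; the metabolic exponent follows from
# how total network volume scales with the number of terminal units.
def metabolic_exponent(n=2.0):
    beta = n ** -0.5            # area-preserving branching
    gamma = n ** (-1.0 / 3.0)   # space-filling branching
    return -np.log(n) / np.log(gamma * beta ** 2)

print(metabolic_exponent())     # ~0.75, i.e. Kleiber's 3/4 exponent
```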

  5. What Models of Verbal Working Memory Can Learn from Phonological Theory: Decomposing the Phonological Similarity Effect

    Science.gov (United States)

    Schweppe, Judith; Grice, Martine; Rummer, Ralf

    2011-01-01

    Despite developments in phonology over the last few decades, models of verbal working memory make reference to phoneme-sized phonological units, rather than to the features of which they are composed. This study investigates the influence on short-term retention of such features by comparing the serial recall of lists of syllables with varying…

  6. Merging tree algorithm of growing voids in self-similar and CDM models

    NARCIS (Netherlands)

    Russell, Esra

    2013-01-01

    Observational studies show that voids are prominent features of the large-scale structure of the present-day Universe. Even though their emergence from the primordial density perturbations and their evolutionary patterns differ from those of dark matter haloes, N-body simulations and theoretical models have shown

  7. Similar uptake profiles of microcystin-LR and -RR in an in vitro human intestinal model

    International Nuclear Information System (INIS)

    Zeller, P.; Clement, M.; Fessard, V.

    2011-01-01

    Highlights: → First description of in vitro cellular uptake of MCs into intestinal cells. → OATP 3A1 and OATP 4A1 are expressed in Caco-2 cell membranes. → MC-LR and MC-RR show similar uptake in Caco-2 cells. → MCs are probably excreted from Caco-2 cells by an active mechanism. -- Abstract: Microcystins (MCs) are cyclic hepatotoxins produced by various species of cyanobacteria. Their structure includes two variable amino acids (AA) leading to more than 80 MC variants. In this study, we focused on the most common variant, microcystin-LR (MC-LR), and microcystin-RR (MC-RR), a variant differing by only one AA. Despite their structural similarity, MC-LR elicits higher liver toxicity than MC-RR partly due to a discrepancy in their uptake by hepatic organic anion transporters (OATP 1B1 and 1B3). However, even though ingestion is the major pathway of human exposure to MCs, intestinal absorption of MCs has been poorly addressed. Consequently, we investigated the cellular uptake of the two MC variants in the human intestinal cell line Caco-2 by immunolocalization using an anti-MC antibody. Caco-2 cells were treated for 30 min to 24 h with several concentrations (1-50 μM) of both variants. We first confirmed the localization of OATP 3A1 and 4A1 at the cell membrane of Caco-2 cells. Our study also revealed a rapid uptake of both variants in less than 1 h. The uptake profiles of the two variants did not differ in our immunostaining study neither with respect to concentration nor the time of exposure. Furthermore, we have demonstrated for the first time the nuclear localization of MC-RR and confirmed that of MC-LR. Finally, our results suggest a facilitated uptake and an active excretion of MC-LR and MC-RR in Caco-2 cells. Further investigation on the role of OATP 3A1 and 4A1 in MC uptake should be useful to clarify the mechanism of intestinal absorption of MCs and contribute in risk assessment of cyanotoxin exposure.

  8. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  9. Service creation: a model-based approach

    NARCIS (Netherlands)

    Quartel, Dick; van Sinderen, Marten J.; Ferreira Pires, Luis

    1999-01-01

    This paper presents a model-based approach to support service creation. In this approach, services are assumed to be created from (available) software components. The creation process may involve multiple design steps in which the requested service is repeatedly decomposed into more detailed

  10. Models of galaxies - The modal approach

    International Nuclear Information System (INIS)

    Lin, C.C.; Lowe, S.A.

    1990-01-01

    The general viability of the modal approach to the spiral structure in normal spirals and the barlike structure in certain barred spirals is discussed. The usefulness of the modal approach in the construction of models of such galaxies is examined, emphasizing the adoption of a model appropriate to observational data for both the spiral structure of a galaxy and its basic mass distribution. 44 refs

  11. Fuzzy Similarity and Fuzzy Inclusion Measures in Polyline Matching: A Case Study of Potential Streams Identification for Archaeological Modelling in GIS

    Science.gov (United States)

    Ďuračiová, Renata; Rášová, Alexandra; Lieskovský, Tibor

    2017-12-01

    When combining spatial data from various sources, it is often important to determine the similarity or identity of spatial objects. Besides the differences in geometry, representations of spatial objects are inevitably more or less uncertain. Fuzzy set theory can be used both to model the uncertainty of spatial objects and to determine the identity, similarity, and inclusion of two sets as fuzzy identity, fuzzy similarity, and fuzzy inclusion. In this paper, we propose to use fuzzy measures to determine the similarity or identity of two uncertain spatial object representations in geographic information systems. Labelling the spatial objects by the degree of their similarity or inclusion measure makes the process of their identification more efficient and reduces the need for manual control. This leads to a simpler process of updating spatial datasets from external data sources. We use this approach to get an accurate and correct representation of historical streams, derived from a contemporary digital elevation model, i.e. we identify the segments that are similar to the streams depicted on historical maps.

  12. Fuzzy Similarity and Fuzzy Inclusion Measures in Polyline Matching: A Case Study of Potential Streams Identification for Archaeological Modelling in GIS

    Directory of Open Access Journals (Sweden)

    Ďuračiová Renata

    2017-12-01

    Full Text Available When combining spatial data from various sources, it is often important to determine the similarity or identity of spatial objects. Besides the differences in geometry, representations of spatial objects are inevitably more or less uncertain. Fuzzy set theory can be used both to model the uncertainty of spatial objects and to determine the identity, similarity, and inclusion of two sets as fuzzy identity, fuzzy similarity, and fuzzy inclusion. In this paper, we propose to use fuzzy measures to determine the similarity or identity of two uncertain spatial object representations in geographic information systems. Labelling the spatial objects by the degree of their similarity or inclusion measure makes the process of their identification more efficient and reduces the need for manual control. This leads to a simpler process of updating spatial datasets from external data sources. We use this approach to get an accurate and correct representation of historical streams, derived from a contemporary digital elevation model, i.e. we identify the segments that are similar to the streams depicted on historical maps.
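
    A small sketch of fuzzy similarity and fuzzy inclusion over membership grids follows. The min/max operators are one common choice of fuzzy intersection/union (the paper may use other t-norms), and the two membership vectors are invented stand-ins for a DEM-derived stream and a historical-map stream.

```python
import numpy as np

# Fuzzy similarity (fuzzy Jaccard) and fuzzy inclusion over membership degrees in [0, 1].
def fuzzy_similarity(a, b):
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def fuzzy_inclusion(a, b):
    return np.minimum(a, b).sum() / a.sum()   # degree to which a is contained in b

derived_stream = np.array([0.9, 0.7, 0.4, 0.0, 0.1])   # from the DEM (hypothetical memberships)
historical     = np.array([1.0, 0.8, 0.5, 0.2, 0.0])   # from the historical map (hypothetical)
print(fuzzy_similarity(derived_stream, historical), fuzzy_inclusion(derived_stream, historical))
```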

  13. User involvement in the design of human-computer interactions: some similarities and differences between design approaches

    NARCIS (Netherlands)

    Bekker, M.M.; Long, J.B.

    1998-01-01

    This paper presents a general review of user involvement in the design of human-computer interactions, as advocated by a selection of different approaches to design. The selection comprises User-Centred Design, Participatory Design, Socio-Technical Design, Soft Systems Methodology, and Joint

  14. Examining Thematic Similarity, Difference, and Membership in Three Online Mental Health Communities from Reddit: A Text Mining and Visualization Approach.

    Science.gov (United States)

    Park, Albert; Conway, Mike; Chen, Annie T

    2018-01-01

    Social media, including online health communities, have become popular platforms for individuals to discuss health challenges and exchange social support with others. These platforms can provide support for individuals who are concerned about social stigma and discrimination associated with their illness. Although mental health conditions can share similar symptoms and even co-occur, the extent to which discussion topics in online mental health communities are similar, different, or overlapping is unknown. Discovering the topical similarities and differences could potentially inform the design of related mental health communities and patient education programs. This study employs text mining, qualitative analysis, and visualization techniques to compare discussion topics in publicly accessible online mental health communities for three conditions: Anxiety, Depression and Post-Traumatic Stress Disorder. First, online discussion content for the three conditions was collected from three Reddit communities (r/Anxiety, r/Depression, and r/PTSD). Second, content was pre-processed, and then clustered using the k -means algorithm to identify themes that were commonly discussed by members. Third, we qualitatively examined the common themes to better understand them, as well as their similarities and differences. Fourth, we employed multiple visualization techniques to form a deeper understanding of the relationships among the identified themes for the three mental health conditions. The three mental health communities shared four themes: sharing of positive emotion, gratitude for receiving emotional support, and sleep- and work-related issues. Depression clusters tended to focus on self-expressed contextual aspects of depression, whereas the Anxiety Disorders and Post-Traumatic Stress Disorder clusters addressed more treatment- and medication-related issues. Visualizations showed that discussion topics from the Anxiety Disorders and Post-Traumatic Stress Disorder subreddits
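
    The text-mining pipeline described above (term weighting followed by k-means clustering into themes) can be sketched as follows. The four toy posts are invented and the pipeline is a generic TF-IDF/k-means illustration, not the study's exact preprocessing.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Cluster short posts into candidate themes using TF-IDF features and k-means.
posts = [
    "could not sleep again, work is overwhelming",
    "thank you all for the support, feeling grateful",
    "my medication was changed and the side effects are rough",
    "small win today, proud of myself",
]
X = TfidfVectorizer(stop_words="english").fit_transform(posts)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)   # cluster (theme) assignment for each post
```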

  15. More similar than you think: Frog metamorphosis as a model of human perinatal endocrinology.

    Science.gov (United States)

    Buchholz, Daniel R

    2015-12-15

    Hormonal control of development during the human perinatal period is critically important and complex with multiple hormones regulating fetal growth, brain development, and organ maturation in preparation for birth. Genetic and environmental perturbations of such hormonal control may cause irreversible morphological and physiological impairments and may also predispose individuals to diseases of adulthood, including diabetes and cardiovascular disease. Endocrine and molecular mechanisms that regulate perinatal development and that underlie the connections between early life events and adult diseases are not well elucidated. Such mechanisms are difficult to study in uterus-enclosed mammalian embryos because of confounding maternal effects. To elucidate mechanisms of developmental endocrinology in the perinatal period, Xenopus laevis the African clawed frog is a valuable vertebrate model. Frogs and humans have identical hormones which peak at birth and metamorphosis, have conserved hormone receptors and mechanisms of gene regulation, and have comparable roles for hormones in many target organs. Study of molecular and endocrine mechanisms of hormone-dependent development in frogs is advantageous because an extended free-living larval period followed by metamorphosis (1) is independent of maternal endocrine influence, (2) exhibits dramatic yet conserved developmental effects induced by thyroid and glucocorticoid hormones, and (3) begins at a developmental stage with naturally undetectable hormone levels, thereby facilitating endocrine manipulation and interpretation of results. This review highlights the utility of frog metamorphosis to elucidate molecular and endocrine actions, hormone interactions, and endocrine disruption, especially with respect to thyroid hormone. Knowledge from the frog model is expected to provide fundamental insights to aid medical understanding of endocrine disease, stress, and endocrine disruption affecting the perinatal period in humans

  16. Automated pattern analysis in gesture research : similarity measuring in 3D motion capture models of communicative action

    NARCIS (Netherlands)

    Schueller, D.; Beecks, C.; Hassani, M.; Hinnell, J.; Brenger, B.; Seidl, T.; Mittelberg, I.

    2017-01-01

    The question of how to model similarity between gestures plays an important role in current studies in the domain of human communication. Most research into recurrent patterns in co-verbal gestures – manual communicative movements emerging spontaneously during conversation – is driven by qualitative

  17. A personality and impairment approach to examine the similarities and differences between avoidant personality disorder and social anxiety disorder.

    Science.gov (United States)

    Carmichael, Kieran L C; Sellbom, Martin; Liggett, Jacqueline; Smith, Alexander

    2016-11-01

    The current study examined whether avoidant personality disorder (AvPD) and social anxiety disorder (SAD) should be considered distinct disorder constructs, which is a persistent and controversial issue in the clinical literature. We examined whether relative scores on SAD and AvPD were associated with the same personality profile and severity of impairment. The current research used a cross-sectional design and self-report inventories, including multiple measures of personality, impairment and psychopathology. Results from a mixed sample of 402 university and community participants found that scores on AvPD and SAD were similarly associated with personality traits and impairment indices. Moreover, a latent construct accounting for the shared variance for AvPD and SAD was associated with personality traits and impairment, whereas the residuals representing the uniquenesses of these disorder constructs were not. These findings support the view that AvPD and SAD are similar disorders from a phenotypic personality trait and impairment perspective. These findings are contrary to a prevalent view in the literature, known as severity continuum hypothesis, because the two disorders could not be meaningfully differentiated based on severity of impairment. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Multiscale approach to equilibrating model polymer melts

    DEFF Research Database (Denmark)

    Svaneborg, Carsten; Ali Karimi-Varzaneh, Hossein; Hojdis, Nils

    2016-01-01

    We present an effective and simple multiscale method for equilibrating Kremer Grest model polymer melts of varying stiffness. In our approach, we progressively equilibrate the melt structure above the tube scale, inside the tube and finally at the monomeric scale. We make use of models designed...

  19. Estimating the surface layer refractive index structure constant over snow and sea ice using Monin-Obukhov similarity theory with a mesoscale atmospheric model.

    Science.gov (United States)

    Qing, Chun; Wu, Xiaoqing; Huang, Honghua; Tian, Qiguo; Zhu, Wenyue; Rao, Ruizhong; Li, Xuebin

    2016-09-05

    Since systematic direct measurements of the refractive index structure constant (Cn2) for many climates and seasons are not available, an indirect approach is developed in which Cn2 is estimated from mesoscale atmospheric model outputs. In previous work, we presented an approach in which a state-of-the-art mesoscale atmospheric model, the Weather Research and Forecasting (WRF) model, coupled with Monin-Obukhov Similarity (MOS) theory, is used to estimate surface-layer Cn2 over the ocean. This paper focuses on surface-layer Cn2 over snow and sea ice, extending the estimation of surface-layer Cn2 with the WRF model to ground-based optical application requirements. This approach is validated against the corresponding 9-day Cn2 data from a field campaign of the 30th Chinese National Antarctic Research Expedition (CHINARE). We employ several statistical operators to assess how this approach performs. In addition, we present an independent analysis of the approach's performance using contingency tables. Such a method permits us to provide supplementary key information with respect to the statistical operators. These methods make our analysis more robust and permit us to confirm the excellent performance of this approach. Reasonably good agreement in trend and magnitude is found between estimated values and measurements overall, and the estimated Cn2 values are even better than the ones obtained by this approach over the ocean surface layer. The encouraging performance of this approach has concrete practical implications for ground-based optical applications over snow and sea ice.
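
    As a hedged illustration of the final step of such an approach, the sketch below converts a temperature structure parameter CT2 (which would come from WRF output via MOS similarity functions) into Cn2 using the standard optical relation; the CT2 value, pressure and temperature are invented, and humidity effects are neglected.

```python
# Convert a temperature structure parameter CT2 into the refractive index
# structure constant Cn2 using the commonly quoted optical-wavelength relation
# Cn2 = (79e-6 * P / T^2)^2 * CT2, with P in hPa and T in K.
def cn2_from_ct2(ct2, pressure_hpa, temperature_k):
    return (79e-6 * pressure_hpa / temperature_k**2) ** 2 * ct2

# Hypothetical near-surface values over sea ice
print(cn2_from_ct2(ct2=5e-3, pressure_hpa=990.0, temperature_k=263.0))   # Cn2 in m^(-2/3)
```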

  20. A robust quantitative near infrared modeling approach for blend monitoring.

    Science.gov (United States)

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

    This study demonstrates a material-sparing Near-Infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development approach (blender-scale method) were compared by simultaneously monitoring a 1 kg batch-size blend run. Both models demonstrated similar performance. The small-scale strategy significantly reduces the total resources expended to develop Near-Infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.
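
    The calibration step in NIR blend monitoring is typically a multivariate regression from spectra to concentration; the sketch below uses partial least squares as one common choice (the abstract does not specify the regression method, so this is an assumption), with synthetic spectra standing in for the compressed small-scale samples.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Build synthetic "spectra" whose shape scales with API concentration, add noise,
# then fit a PLS calibration that predicts concentration from a spectrum.
rng = np.random.default_rng(1)
concentrations = rng.uniform(0.05, 0.30, size=40)              # API fraction per sample
spectra = np.outer(concentrations, rng.normal(size=200))        # 200 hypothetical wavelengths
spectra += 0.01 * rng.normal(size=spectra.shape)                 # instrument noise

pls = PLSRegression(n_components=3).fit(spectra, concentrations)
print(pls.predict(spectra[:3]).ravel())                          # predicted blend concentrations
```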

  1. Application of various FLD modelling approaches

    Science.gov (United States)

    Banabic, D.; Aretz, H.; Paraianu, L.; Jurco, P.

    2005-07-01

    This paper focuses on a comparison between different modelling approaches to predict the forming limit diagram (FLD) for sheet metal forming under a linear strain path using the recently introduced orthotropic yield criterion BBC2003 (Banabic D et al 2005 Int. J. Plasticity 21 493-512). The FLD models considered here are a finite element based approach, the well known Marciniak-Kuczynski model, the modified maximum force criterion according to Hora et al (1996 Proc. Numisheet'96 Conf. (Dearborn/Michigan) pp 252-6), Swift's diffuse (Swift H W 1952 J. Mech. Phys. Solids 1 1-18) and Hill's classical localized necking approach (Hill R 1952 J. Mech. Phys. Solids 1 19-30). The FLD of an AA5182-O aluminium sheet alloy has been determined experimentally in order to quantify the predictive capabilities of the models mentioned above.

  2. Similarities and differences in adult tortoises: a morphological approach and its implication for reproduction and mobility between species

    Directory of Open Access Journals (Sweden)

    Marco A. L. Zuffi

    2007-11-01

    Full Text Available Sexes in Chelonia display marked differences. Sexual size dimorphism (SSD) is important in evolutionary biology. Different sexual strategies result in species-specific selection. Biometric variation in male and female tortoises of two species is studied. Eighteen biometric parameters were measured in 75 museum specimens (20 Testudo graeca; 55 T. hermanni). Nine of 18 parameters in T. hermanni and two of 18 in T. graeca were sexually dimorphic. Multivariate analysis (principal component analysis) highlighted two components, with bridge length as the first component and anal divergence as the second. The bridge length can be used to separate sexes and species. Males of the two species were most different, whereas females of the two species overlapped in body shape measurements. We hypothesise that female similarity could be a by-product of reproductive biology and sexual selection that optimise individual fitness.

  3. Two Echelon Supply Chain Integrated Inventory Model for Similar Products: A Case Study

    Science.gov (United States)

    Parjane, Manoj Baburao; Dabade, Balaji Marutirao; Gulve, Milind Bhaskar

    2017-06-01

    The purpose of this paper is to develop a mathematical model for the minimization of total cost across echelons in a multi-product supply chain environment. The scenario under consideration is a two-echelon supply chain system with one manufacturer, one retailer and M products. The retailer faces independent Poisson demand for each product. The retailer and the manufacturer are closely coupled in the sense that information about any depletion in the inventory of a product at the retailer's end is immediately available to the manufacturer. Further, stock-outs are backordered at the retailer's end. Thus the costs incurred at the retailer's end are the holding costs and the backorder costs. The manufacturer has only one processor, which is time-shared among the M products. Production changeover from one product to another entails a fixed setup cost and a fixed setup time. Each unit of a product has a production time. Considering the cost components, and assuming transportation time and cost to be negligible, the objective of the study is to minimize the expected total cost for the manufacturer and retailer together. In the process two aspects must be defined. Firstly, every time a product is taken up for production, how much of it (the production batch size q) should be produced: a large value of q favors the manufacturer while a small value of q suits the retailer. Secondly, for a given batch size q, at what level of the retailer's inventory (the production queuing point S) a product should be taken up for production by the manufacturer: a higher value of S incurs more holding cost whereas a lower value of S increases the chance of backorder, so a tradeoff between the holding and backorder costs must be considered when choosing an optimal value of S. It may be noted that due to multiple products and a single processor, a product taken up for production may not get the processor immediately, and may have to wait in a queue. The `S

  4. Risk Modelling for Passages in Approach Channel

    Directory of Open Access Journals (Sweden)

    Leszek Smolarek

    2013-01-01

    Full Text Available Methods of multivariate statistics, stochastic processes, and simulation are used to identify and assess risk measures. This paper presents the use of generalized linear models and Markov models to study risks to ships along the approach channel. These models, combined with simulation testing, are used to determine the time required for continuous monitoring of endangered objects or the period at which the level of risk should be verified.

  5. A touch-probe path generation method through similarity analysis between the feature vectors in new and old models

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Hye Sung; Lee, Jin Won; Yang, Jeong Sam [Dept. of Industrial Engineering, Ajou University, Suwon (Korea, Republic of)

    2016-10-15

    The On-machine measurement (OMM), which measures a work piece during or after the machining process in the machining center, has the advantage of measuring the work piece directly within the work space without moving it. However, the path generation procedure used to determine the measuring sequence and variables for the complex features of a target work piece has the limitation of requiring time-consuming tasks to generate the measuring points and mostly relies on the proficiency of the on-site engineer. In this study, we propose a touch-probe path generation method using similarity analysis between the feature vectors of three-dimensional (3-D) shapes for the OMM. For the similarity analysis between a new 3-D model and existing 3-D models, we extracted the feature vectors from models that can describe the characteristics of a geometric shape model; then, we applied those feature vectors to a geometric histogram that displays a probability distribution obtained by the similarity analysis algorithm. In addition, we developed a computer-aided inspection planning system that corrects non-applied measuring points that are caused by minute geometry differences between the two models and generates the final touch-probe path.

  6. Accuracy test for link prediction in terms of similarity index: The case of WS and BA models

    Science.gov (United States)

    Ahn, Min-Woo; Jung, Woo-Sung

    2015-07-01

    Link prediction is a technique that uses the topological information in a given network to infer the missing links in it. Since past research on link prediction has primarily focused on enhancing performance for given empirical systems, negligible attention has been devoted to link prediction with regard to network models. In this paper, we thus apply link prediction to two network models: The Watts-Strogatz (WS) model and Barabási-Albert (BA) model. We attempt to gain a better understanding of the relation between accuracy and each network parameter (mean degree, the number of nodes and the rewiring probability in the WS model) through network models. Six similarity indices are used, with precision and area under the ROC curve (AUC) value as the accuracy metrics. We observe a positive correlation between mean degree and accuracy, and size independence of the AUC value.
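
    As a concrete illustration of how such an accuracy test can be set up, the sketch below hides a fraction of edges in WS and BA model networks generated with networkx, scores candidate links with the common-neighbors similarity index (one of many possible indices), and estimates the AUC by sampling. The network sizes, probe fraction, and sample count are arbitrary choices, not those of the study.

```python
import random
import networkx as nx

def common_neighbors_score(G, u, v):
    """Common-neighbors similarity index for a candidate link (u, v)."""
    return len(list(nx.common_neighbors(G, u, v)))

def link_prediction_auc(G, frac_probe=0.1, n_samples=2000, seed=0):
    """Hide a fraction of edges, then estimate the AUC of the similarity index:
    the probability that a hidden (probe) link scores higher than a randomly
    chosen non-existent link (ties count as 0.5)."""
    rng = random.Random(seed)
    probe = rng.sample(list(G.edges()), int(frac_probe * G.number_of_edges()))
    train = G.copy()
    train.remove_edges_from(probe)

    nodes = list(G.nodes())
    hits = 0.0
    for _ in range(n_samples):
        u, v = rng.choice(probe)
        while True:
            a, b = rng.sample(nodes, 2)
            if not G.has_edge(a, b):
                break
        s_probe = common_neighbors_score(train, u, v)
        s_none = common_neighbors_score(train, a, b)
        hits += 1.0 if s_probe > s_none else 0.5 if s_probe == s_none else 0.0
    return hits / n_samples

# Accuracy in the two model networks (parameters are illustrative)
ws = nx.watts_strogatz_graph(n=1000, k=10, p=0.1, seed=1)
ba = nx.barabasi_albert_graph(n=1000, m=5, seed=1)
print("WS AUC:", link_prediction_auc(ws))
print("BA AUC:", link_prediction_auc(ba))
```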

  7. Assessing the similarity of mental models of operating room team members and implications for patient safety: a prospective, replicated study.

    Science.gov (United States)

    Nakarada-Kordic, Ivana; Weller, Jennifer M; Webster, Craig S; Cumin, David; Frampton, Christopher; Boyd, Matt; Merry, Alan F

    2016-08-31

    Patient safety depends on effective teamwork. The similarity of team members' mental models, or their shared understanding, regarding clinical tasks is likely to influence the effectiveness of teamwork. Mental models have not been measured in the complex, high-acuity environment of the operating room (OR), where professionals of different backgrounds must work together to achieve the best surgical outcome for each patient. Therefore, we aimed to explore the similarity of mental models of task sequence and of responsibility for task within multidisciplinary OR teams. We developed a computer-based card sorting tool (Momento) to capture the information on mental models in 20 six-person surgical teams, each comprised of three subteams (anaesthesia, surgery, and nursing), for two simulated laparotomies. Team members sorted 20 cards depicting key tasks according to when in the procedure each task should be performed, and which subteam was primarily responsible for each task. Within each OR team and subteam, we conducted pairwise comparisons of scores to arrive at mean similarity scores for each task. The mean similarity score for task sequence was 87 % (range 57-97 %). The mean score for responsibility for task was 70 % (range 38-100 %), but for half of the tasks it was only 51 % (range 38-69 %). Participants believed their own subteam was primarily responsible for approximately half the tasks in each procedure. We found differences in the mental models of some OR team members about responsibility for and order of certain tasks in an emergency laparotomy. Momento is a tool that could help elucidate and better align the mental models of OR team members about surgical procedures and thereby improve teamwork and outcomes for patients.

  8. A Novel Relevance Feedback Approach Based on Similarity Measure Modification in an X-Ray Image Retrieval System Based on Fuzzy Representation Using Fuzzy Attributed Relational Graph

    Directory of Open Access Journals (Sweden)

    Hossien Pourghassem

    2011-04-01

    Full Text Available Relevance feedback approaches are used to improve the performance of content-based image retrieval systems. In this paper, a novel relevance feedback approach based on similarity measure modification in an X-ray image retrieval system based on fuzzy representation using a fuzzy attributed relational graph (FARG) is presented. In this approach, the optimum weight of each feature in the feature vector is calculated using the similarity rate between the query image and the relevant and irrelevant images in the user feedback. The calculated weight is used to tune the fuzzy graph matching algorithm as a modifier parameter in the similarity measure. The standard deviation of the retrieved image features is applied to calculate the optimum weight. The proposed image retrieval system uses a FARG for the representation of images, a fuzzy graph matching algorithm as the similarity measure and a semantic classifier based on a merging scheme for determination of the search space in the image database. To evaluate the relevance feedback approach in the proposed system, a standard X-ray image database consisting of 10000 images in 57 classes is used. The improvement of the evaluation parameters shows the proficiency and efficiency of the proposed system.
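
    The sketch below illustrates one simple way such feedback-derived feature weights could be computed, re-weighting each feature inversely to its standard deviation over the images the user marked as relevant. The weighting scheme, function names, and toy data are illustrative assumptions, not the exact formulation used in the FARG-based system.

```python
import numpy as np

def feedback_feature_weights(relevant_feats, eps=1e-9):
    """Weights inversely proportional to each feature's spread over the
    relevant images: consistent features count more in the similarity measure.

    relevant_feats : (n_relevant_images, n_features) array
    """
    sigma = relevant_feats.std(axis=0)
    w = 1.0 / (sigma + eps)
    return w / w.sum()                      # normalised weights

def weighted_similarity(query, candidate, w):
    """Weighted inverse-distance similarity between two feature vectors."""
    d = np.sqrt(np.sum(w * (query - candidate) ** 2))
    return 1.0 / (1.0 + d)

# Toy example: 4 images marked relevant, 5 features each
rel = np.random.default_rng(0).normal(size=(4, 5))
w = feedback_feature_weights(rel)
print(weighted_similarity(rel[0], rel[1], w))
```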

  9. Similarities of lean manufacturing approaches implementation in SMEs towards the success: Case study in the automotive component industry

    Directory of Open Access Journals (Sweden)

    Rose A.N.M.

    2017-01-01

    Full Text Available Nowadays, manufacturing companies are striving for a better system such as lean manufacturing (LM). The primary objective of LM is to identify and eliminate wastes. LM can be applied successfully in all industries provided there is a full understanding of the lean ingredients, i.e. concepts, principles, and practices. There are many practices which need to be implemented in order to gain the full benefits of LM. However, small and medium enterprises (SMEs) lack knowledge of LM and face difficulties in adopting all of the LM principles. Therefore, it is necessary for researchers to come up with a simple guideline for LM implementation. The objective of this paper is to explore the journey of LM implementation, including the preliminary, in-process and post-implementation stages of LM. This research was conducted as a multi-case study involving four SMEs and two large companies. The gathered information shows that the preliminary stage of LM implementation is similar across the companies, including the large ones. The results show that SMEs still have the potential to succeed in LM. This finding might give SMEs an opportunity to prepare the basis for LM implementation effectively. As a result, SMEs are able to compete in the competitive global marketplace and strive for world-class performance through the implementation of LM.

  10. Assessing levels of similarity to a "psychodynamic prototype" in psychodynamic psychotherapy with children: a case study approach (preliminary findings

    Directory of Open Access Journals (Sweden)

    Marina Bento Gastaud

    2015-09-01

    Full Text Available Objective: To analyze the degree of similarity to a "psychodynamic prototype" during the first year of two children's once-weekly psychodynamic psychotherapy. Methods: This study used a longitudinal, descriptive, repeated-measures design based on the systematic case study method. Two male school children (here referred to as Walter and Peter) and their therapists took part in the study. All sessions were video and audio recorded. Ten sessions from each case were selected for analysis in this preliminary study. Trained examiners (randomly selected in pairs) independently and blindly evaluated each session using the Child Psychotherapy Q-Set (CPQ). Experts in psychodynamic therapy and cognitive behavioral therapy from several countries rated each of the 100 CPQ items with regard to how well it characterized a hypothetical ideal session of either treatment modality. A series of paired t tests comparing analogous adherence scores within each session were conducted. Results: There were no significant correlations between time elapsed and adherence to the prototypes. Walter's treatment adhered to both prototypes and Peter's treatment did not adhere to either prototype. Conclusion: Child psychotherapy theory and practice are not absolutely coincident. Real psychotherapy sessions do not necessarily resemble the ideal prototypes.

  11. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

    Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing...... these criticisms by incorporating recent developments in configuration theory, in particular application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and empirically demonstrating equifinal paths to maturity. Specifically...... methodological guidelines consisting of detailed procedures to systematically apply set theoretic approaches for maturity model research and provides demonstrations of it application on three datasets. The thesis is a collection of six research papers that are written in a sequential manner. The first paper...

  12. [Establishment of the mathematic model of total quantum statistical moment standard similarity for application to medical theoretical research].

    Science.gov (United States)

    He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian

    2013-09-01

    The paper aims to elucidate and establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate the application of the model to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, then validated and illustrated by the pharmacokinetics of three ingredients in Buyanghuanwu decoction and of three data analytical methods for them, and by analysis of the chromatographic fingerprints of various extracts obtained by dissolving the Buyanghuanwu-decoction extract in solvents with different solubility parameters. The established model consists of the following main parameters: (1) the total quantum statistical moment similarity ST, the overlapped area of the two normal distribution probability density curves obtained by conversion of the two TQSM parameters; (2) the total variability DT, a confidence limit of the standard normal accumulation probability equal to the absolute difference between the two normal accumulation probabilities within the integration of their curve intersection; (3) the total variable probability 1-Ss, the standard normal distribution probability within the interval of DT; (4) the total variable probability (1-beta)alpha and (5) the stable confident probability beta(1-alpha): the correct probabilities for making positive and negative conclusions under confidence coefficient alpha. With the model, we found that the TQSMSS similarities of the pharmacokinetics of the three ingredients in Buyanghuanwu decoction and of the three data analytical methods for them were in the range 0.3852-0.9875, which illuminated their different pharmacokinetic behaviors; and the TQSMSS similarities (ST) of the chromatographic fingerprints of the various extracts obtained with different solubility parameter solvents dissolving the Buyanghuanwu-decoction extract were in the range 0.6842-0.9992, which showed different constituents
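
    The central quantity ST is described above as the overlapped area of two normal probability density curves; the sketch below computes such an overlap area numerically. The numeric example values are arbitrary, and this is only the geometric core of the model, not the full TQSMSS calculation.

```python
from scipy.stats import norm
from scipy.integrate import quad

def total_similarity_ST(mu1, sd1, mu2, sd2):
    """Overlap area of two normal probability density curves, used here as a
    stand-in for the total quantum statistical moment similarity ST."""
    f = lambda x: min(norm.pdf(x, mu1, sd1), norm.pdf(x, mu2, sd2))
    lo = min(mu1 - 8 * sd1, mu2 - 8 * sd2)   # integration window covering both curves
    hi = max(mu1 + 8 * sd1, mu2 + 8 * sd2)
    area, _ = quad(f, lo, hi, limit=200)
    return area                              # 1.0 means identical, 0.0 means disjoint

# Two fingerprints summarised by their first two statistical moments (toy values)
print(total_similarity_ST(mu1=2.1, sd1=0.6, mu2=2.4, sd2=0.8))
```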

  13. Enriching consumer health vocabulary through mining a social Q&A site: A similarity-based approach.

    Science.gov (United States)

    He, Zhe; Chen, Zhiwei; Oh, Sanghee; Hou, Jinghui; Bian, Jiang

    2017-05-01

    The widely known vocabulary gap between health consumers and healthcare professionals hinders information seeking and health dialogue of consumers on end-user health applications. The Open Access and Collaborative Consumer Health Vocabulary (OAC CHV), which contains health-related terms used by lay consumers, has been created to bridge such a gap. Specifically, the OAC CHV facilitates consumers' health information retrieval by enabling consumer-facing health applications to translate between professional language and consumer friendly language. To keep up with the constantly evolving medical knowledge and language use, new terms need to be identified and added to the OAC CHV. User-generated content on social media, including social question and answer (social Q&A) sites, afford us an enormous opportunity in mining consumer health terms. Existing methods of identifying new consumer terms from text typically use ad-hoc lexical syntactic patterns and human review. Our study extends an existing method by extracting n-grams from a social Q&A textual corpus and representing them with a rich set of contextual and syntactic features. Using K-means clustering, our method, simiTerm, was able to identify terms that are both contextually and syntactically similar to the existing OAC CHV terms. We tested our method on social Q&A corpora on two disease domains: diabetes and cancer. Our method outperformed three baseline ranking methods. A post-hoc qualitative evaluation by human experts further validated that our method can effectively identify meaningful new consumer terms on social Q&A. Copyright © 2017 Elsevier Inc. All rights reserved.
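
    A highly simplified sketch of the n-gram extraction and K-means clustering steps is shown below. The toy posts, the TF-IDF co-occurrence features, and the cluster count are placeholders; the actual simiTerm method uses a much richer set of contextual and syntactic features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy social Q&A posts (stand-ins for the diabetes/cancer corpora)
posts = [
    "my blood sugar spikes after meals what should i do",
    "a1c levels too high even with metformin",
    "chemo brain fog is making work impossible",
    "sugar crashes at night scare me",
    "radiation fatigue and nausea tips please",
]

# Candidate terms are unigrams and bigrams; each term is represented here by
# its TF-IDF profile across posts (a crude stand-in for contextual features)
vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(posts)                 # documents x terms
terms = vec.get_feature_names_out()

# Cluster terms; terms landing in clusters dominated by known OAC CHV entries
# would be reviewed as candidate new consumer health terms
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X.T.toarray())
for c in range(3):
    print(c, [t for t, lab in zip(terms, km.labels_) if lab == c][:6])
```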

  14. Different relationships between temporal phylogenetic turnover and phylogenetic similarity in two forests were detected by a new null model.

    Science.gov (United States)

    Huang, Jian-Xiong; Zhang, Jian; Shen, Yong; Lian, Ju-yu; Cao, Hong-lin; Ye, Wan-hui; Wu, Lin-fang; Bin, Yue

    2014-01-01

    Ecologists have been monitoring community dynamics with the purpose of understanding the rates and causes of community change. However, there is a lack of monitoring of community dynamics from the perspective of phylogeny. We attempted to understand temporal phylogenetic turnover in a 50 ha tropical forest (Barro Colorado Island, BCI) and a 20 ha subtropical forest (Dinghushan in southern China, DHS). To obtain temporal phylogenetic turnover under random conditions, two null models were used. The first shuffled species names, as is widely done in community phylogenetic analyses. The second simulated demographic processes with careful consideration of the variation in dispersal ability among species and the variations in mortality both among species and among size classes. With the two models, we tested the relationships between temporal phylogenetic turnover and phylogenetic similarity at different spatial scales in the two forests. Results obtained with the second null model were more consistent with previous findings, suggesting that the second null model is more appropriate for our purposes. With the second null model, a significantly positive relationship was detected between phylogenetic turnover and phylogenetic similarity in BCI at a 10 m×10 m scale, potentially indicating phylogenetic density dependence. This relationship in DHS was significantly negative at three of five spatial scales, which could indicate abiotic filtering processes for community assembly. Using variation partitioning, we found that phylogenetic similarity contributed to variation in temporal phylogenetic turnover in the DHS plot but not in the BCI plot. The mechanisms for community assembly in BCI and DHS therefore differ from a phylogenetic perspective. Only the second null model detected this difference, indicating the importance of choosing a proper null model.

  15. Software sensors based on the grey-box modelling approach

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.; Strube, Rune

    1996-01-01

    In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on......-box model for the specific dynamics is identified. Similarly, an on-line software sensor for detecting the occurrence of backwater phenomena can be developed by comparing the dynamics of a flow measurement with a nearby level measurement. For treatment plants it is found that grey-box models applied to on......-line measurements. With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey

  16. Phoneme Similarity and Confusability

    Science.gov (United States)

    Bailey, T.M.; Hahn, U.

    2005-01-01

    Similarity between component speech sounds influences language processing in numerous ways. Explanation and detailed prediction of linguistic performance consequently requires an understanding of these basic similarities. The research reported in this paper contrasts two broad classes of approach to the issue of phoneme similarity-theoretically…

  17. Mathematical Modeling Approaches in Plant Metabolomics.

    Science.gov (United States)

    Fürtauer, Lisa; Weiszmann, Jakob; Weckwerth, Wolfram; Nägele, Thomas

    2018-01-01

    The experimental analysis of a plant metabolome typically results in a comprehensive and multidimensional data set. To interpret metabolomics data in the context of biochemical regulation and environmental fluctuation, various approaches of mathematical modeling have been developed and have proven useful. In this chapter, a general introduction to mathematical modeling is presented and discussed in context of plant metabolism. A particular focus is laid on the suitability of mathematical approaches to functionally integrate plant metabolomics data in a metabolic network and combine it with other biochemical or physiological parameters.

  18. The effects of gravity on human walking: a new test of the dynamic similarity hypothesis using a predictive model.

    Science.gov (United States)

    Raichlen, David A

    2008-09-01

    The dynamic similarity hypothesis (DSH) suggests that differences in animal locomotor biomechanics are due mostly to differences in size. According to the DSH, when the ratios of inertial to gravitational forces are equal between two animals that differ in size [e.g. at equal Froude numbers, where Froude = velocity^2/(gravity × hip height)], their movements can be made similar by multiplying all time durations by one constant, all forces by a second constant and all linear distances by a third constant. The DSH has been generally supported by numerous comparative studies showing that as inertial forces differ (i.e. differences in the centripetal force acting on the animal due to variation in hip heights), animals walk with dynamic similarity. However, humans walking in simulated reduced gravity do not walk with dynamically similar kinematics. The simulated gravity experiments did not completely account for the effects of gravity on all body segments, and the importance of gravity in the DSH requires further examination. This study uses a kinematic model to predict the effects of gravity on human locomotion, taking into account both the effects of gravitational forces on the upper body and on the limbs. Results show that dynamic similarity is maintained in altered gravitational environments. Thus, the DSH does account for differences in the inertial forces governing locomotion (e.g. differences in hip height) as well as differences in the gravitational forces governing locomotion.
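
    Using only the Froude number definition quoted above, the sketch below computes the walking speed that keeps the Froude number, and hence dynamic similarity, constant when gravity changes. The numerical inputs are illustrative, not values from the study.

```python
def froude(v, g, hip_height):
    """Froude number = v^2 / (g * hip height), as defined in the abstract."""
    return v ** 2 / (g * hip_height)

def speed_for_equal_froude(v_earth, g_earth, g_other, hip_height):
    """Walking speed in another gravity field that keeps the Froude number
    constant, i.e. the dynamically similar speed."""
    fr = froude(v_earth, g_earth, hip_height)
    return (fr * g_other * hip_height) ** 0.5

# 1.3 m/s walk on Earth; dynamically similar speed under lunar gravity
print(speed_for_equal_froude(v_earth=1.3, g_earth=9.81, g_other=1.62,
                             hip_height=0.9))
```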

  19. A comparative experimental approach to ecotoxicology in shallow-water and deep-sea holothurians suggests similar behavioural responses.

    Science.gov (United States)

    Brown, Alastair; Wright, Roseanna; Mevenkamp, Lisa; Hauton, Chris

    2017-10-01

    Exploration of deep-sea mineral resources is burgeoning, raising concerns regarding ecotoxicological impacts on deep-sea fauna. Assessing toxicity in deep-sea species is technologically challenging, which promotes interest in establishing shallow-water ecotoxicological proxy species. However, the effects of temperature and hydrostatic pressure on toxicity, and how adaptation to deep-sea environmental conditions might moderate these effects, are unknown. To address these uncertainties we assessed behavioural and physiological (antioxidant enzyme activity) responses to exposure to copper-spiked artificial sediments in a laboratory experiment using a shallow-water holothurian (Holothuria forskali), and in an in situ experiment using a deep-sea holothurian (Amperima sp.). Both species demonstrated sustained avoidance behaviour, evading contact with contaminated artificial sediment. However, A. sp. demonstrated sustained avoidance of 5 mg l⁻¹ copper-contaminated artificial sediment whereas H. forskali demonstrated only temporary avoidance of 5 mg l⁻¹ copper-contaminated artificial sediment, suggesting that H. forskali may be more tolerant of metal exposure over 96 h. Nonetheless, the acute behavioural response appears consistent between the shallow-water species and the deep-sea species, suggesting that H. forskali may be a suitable ecotoxicological proxy for A. sp. in acute (≤24 h) exposures, which may be representative of deep-sea mining impacts. No antioxidant response was observed in either species, which was interpreted to be the consequence of avoiding copper exposure. Although these data suggest that shallow-water taxa may be suitable ecotoxicological proxies for deep-sea taxa, differences in methodological and analytical approaches, and in sex and reproductive stage of experimental subjects, require caution in assessing the suitability of H. forskali as an ecotoxicological proxy for A. sp. Nonetheless, avoidance behaviour may have bioenergetic consequences that

  20. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and

  1. Stochastic approaches to inflation model building

    International Nuclear Information System (INIS)

    Ramirez, Erandy; Liddle, Andrew R.

    2005-01-01

    While inflation gives an appealing explanation of observed cosmological data, there are a wide range of different inflation models, providing differing predictions for the initial perturbations. Typically models are motivated either by fundamental physics considerations or by simplicity. An alternative is to generate large numbers of models via a random generation process, such as the flow equations approach. The flow equations approach is known to predict a definite structure to the observational predictions. In this paper, we first demonstrate a more efficient implementation of the flow equations exploiting an analytic solution found by Liddle (2003). We then consider alternative stochastic methods of generating large numbers of inflation models, with the aim of testing whether the structures generated by the flow equations are robust. We find that while typically there remains some concentration of points in the observable plane under the different methods, there is significant variation in the predictions amongst the methods considered

  2. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions which gives rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  3. Assessing intrinsic and specific vulnerability models ability to indicate groundwater vulnerability to groups of similar pesticides: A comparative study

    Science.gov (United States)

    Douglas, Steven; Dixon, Barnali; Griffin, Dale W.

    2018-01-01

    With continued population growth and increasing use of fresh groundwater resources, protection of this valuable resource is critical. A cost effective means to assess risk of groundwater contamination potential will provide a useful tool to protect these resources. Integrating geospatial methods offers a means to quantify the risk of contaminant potential in cost effective and spatially explicit ways. This research was designed to compare the ability of intrinsic (DRASTIC) and specific (Attenuation Factor; AF) vulnerability models to indicate groundwater vulnerability areas by comparing model results to the presence of pesticides from groundwater sample datasets. A logistic regression was used to assess the relationship between the environmental variables and the presence or absence of pesticides within regions of varying vulnerability. According to the DRASTIC model, more than 20% of the study area is very highly vulnerable. Approximately 30% is very highly vulnerable according to the AF model. When groundwater concentrations of individual pesticides were compared to model predictions, the results were mixed. Model predictability improved when concentrations of the group of similar pesticides were compared to model results. Compared to the DRASTIC model, the AF model more accurately predicts the distribution of the number of contaminated wells within each vulnerability class.
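
    A sketch of the kind of logistic regression used to relate vulnerability scores to pesticide detections is shown below; the well records are randomly generated placeholders, not the study's groundwater sample data, and the variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical well records: vulnerability scores from the two models and
# whether any pesticide of the group was detected (1) or not (0)
rng = np.random.default_rng(0)
n = 200
drastic = rng.uniform(60, 200, n)        # intrinsic (DRASTIC) index
af = rng.uniform(0, 1, n)                # specific (Attenuation Factor) index
detected = (rng.uniform(size=n) < 0.2 + 0.5 * af).astype(int)

# Relate the presence/absence of pesticides to the two vulnerability indices
X = np.column_stack([drastic, af])
model = LogisticRegression(max_iter=1000).fit(X, detected)
print("coefficients:", model.coef_)                     # which index carries signal
print("mean predicted risk:", model.predict_proba(X)[:, 1].mean())
```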

  4. Applications of Analytical Self-Similar Solutions of Reynolds-Averaged Models for Instability-Induced Turbulent Mixing

    Science.gov (United States)

    Hartland, Tucker; Schilling, Oleg

    2017-11-01

    Analytical self-similar solutions to several families of single- and two-scale, eddy viscosity and Reynolds stress turbulence models are presented for Rayleigh-Taylor, Richtmyer-Meshkov, and Kelvin-Helmholtz instability-induced turbulent mixing. The use of algebraic relationships between model coefficients and physical observables (e.g., experimental growth rates) following from the self-similar solutions to calibrate a member of a given family of turbulence models is shown. It is demonstrated numerically that the algebraic relations accurately predict the value and variation of physical outputs of a Reynolds-averaged simulation in flow regimes that are consistent with the simplifying assumptions used to derive the solutions. The use of experimental and numerical simulation data on Reynolds stress anisotropy ratios to calibrate a Reynolds stress model is briefly illustrated. The implications of the analytical solutions for future Reynolds-averaged modeling of hydrodynamic instability-induced mixing are briefly discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  5. A Conceptual Modeling Approach for OLAP Personalization

    Science.gov (United States)

    Garrigós, Irene; Pardillo, Jesús; Mazón, Jose-Norberto; Trujillo, Juan

    Data warehouses rely on multidimensional models in order to provide decision makers with appropriate structures to intuitively analyze data with OLAP technologies. However, data warehouses may be potentially large and multidimensional structures become increasingly complex to be understood at a glance. Even if a departmental data warehouse (also known as data mart) is used, these structures would be also too complex. As a consequence, acquiring the required information is more costly than expected and decision makers using OLAP tools may get frustrated. In this context, current approaches for data warehouse design are focused on deriving a unique OLAP schema for all analysts from their previously stated information requirements, which is not enough to lighten the complexity of the decision making process. To overcome this drawback, we argue for personalizing multidimensional models for OLAP technologies according to the continuously changing user characteristics, context, requirements and behaviour. In this paper, we present a novel approach to personalizing OLAP systems at the conceptual level based on the underlying multidimensional model of the data warehouse, a user model and a set of personalization rules. The great advantage of our approach is that a personalized OLAP schema is provided for each decision maker contributing to better satisfy their specific analysis needs. Finally, we show the applicability of our approach through a sample scenario based on our CASE tool for data warehouse development.

  6. Variational approach to chiral quark models

    Energy Technology Data Exchange (ETDEWEB)

    Futami, Yasuhiko; Odajima, Yasuhiko; Suzuki, Akira

    1987-03-01

    A variational approach is applied to a chiral quark model to test the validity of the perturbative treatment of the pion-quark interaction based on the chiral symmetry principle. It is indispensably related to the chiral symmetry breaking radius if the pion-quark interaction can be regarded as a perturbation.

  7. A variational approach to chiral quark models

    International Nuclear Information System (INIS)

    Futami, Yasuhiko; Odajima, Yasuhiko; Suzuki, Akira.

    1987-01-01

    A variational approach is applied to a chiral quark model to test the validity of the perturbative treatment of the pion-quark interaction based on the chiral symmetry principle. It is indispensably related to the chiral symmetry breaking radius if the pion-quark interaction can be regarded as a perturbation. (author)

  8. A study of the predictive model on the user reaction time using the information amount and similarity

    International Nuclear Information System (INIS)

    Lee, Sungjin; Heo, Gyunyoung; Chang, S.H.

    2004-01-01

    Human operations through a user interface are divided into two types. The first is the single operation, which is performed on a static interface. The second is the sequential operation, which achieves a goal by handling several displays through the operator's navigation in the CRT-based console. A sequential operation is similar in meaning to a continuous task. Most operations in recently developed computer applications correspond to the sequential operation, and the single operation can be considered a part of the sequential operation. In the area of HCI (human-computer interaction) evaluation, the Hick-Hyman law counts as the most powerful theory. The most important factor in the Hick-Hyman law's equation for choice reaction time is the quantified amount of information conveyed by a statement, stimulus, or event. Generally, we can expect that if there are similarities between a series of interfaces, the human operator is able to use his or her attention resources effectively; that is, the operator's performance is increased by the similarity. The similarity may affect the allocation of attention resources based on the separate short-term sensory store (STSS) and long-term memory. Related theories include the task-switching paradigm and the law of practice. However, it is not easy to explain human operator performance with only the similarity or the information amount, and there are few theories that explain performance through the combination of the similarity and the information amount. The objective of this paper is to propose and validate a quantitative and predictive model of user reaction time in CRT-based displays. Another objective is to validate various theories related to human cognition and perception, with the Hick-Hyman law and the law of practice as representative theories. (author)
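
    For reference, a minimal sketch of the Hick-Hyman law mentioned above: reaction time grows linearly with the information H conveyed by the choice set. The intercept and slope values below are illustrative placeholders, not constants fitted in the study.

```python
import math

def hick_hyman_information(probabilities):
    """Average information H (bits) conveyed by a set of alternatives, using
    Hick's formulation H = sum p_i * log2(1/p_i + 1)."""
    return sum(p * math.log2(1.0 / p + 1.0) for p in probabilities if p > 0)

def choice_reaction_time(probabilities, a=0.2, b=0.15):
    """Hick-Hyman law: RT = a + b * H. The intercept a (s) and slope b (s/bit)
    here are illustrative values only."""
    return a + b * hick_hyman_information(probabilities)

# Four equally likely display targets versus one dominant target
print(choice_reaction_time([0.25] * 4))
print(choice_reaction_time([0.85, 0.05, 0.05, 0.05]))
```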

  9. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set theoretic methods to conceptualize, develop, and empirically derive maturity models and provide a demonstration......Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models...

  10. A hybrid modeling approach for option pricing

    Science.gov (United States)

    Hajizadeh, Ehsan; Seifi, Abbas

    2011-11-01

    The complexity of option pricing has led many researchers to develop sophisticated models for this purpose. The commonly used Black-Scholes model suffers from a number of limitations; one of them is the controversial assumption that the underlying probability distribution is lognormal. We propose a couple of hybrid models to reduce these limitations and enhance option pricing ability. The key input to an option pricing model is volatility. In this paper, we use three popular GARCH-type models for estimating volatility. We then develop two non-parametric models based on neural networks and neuro-fuzzy networks to price call options on the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and neuro-fuzzy network models outperform the Black-Scholes model. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options the neural network model performs better, while for both in-the-money and out-of-the-money options the neuro-fuzzy model provides better results.
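
    The sketch below shows the two basic ingredients of such a hybrid: a GARCH(1,1) filter supplying the volatility input and the Black-Scholes benchmark price. The GARCH parameters and return series are illustrative placeholders, and the neural/neuro-fuzzy pricing stage itself is only indicated in a comment.

```python
import numpy as np
from scipy.stats import norm

def garch11_volatility(returns, omega=1e-6, alpha=0.08, beta=0.9):
    """One-step-ahead annualised volatility from a GARCH(1,1) recursion.
    The parameters here are illustrative, not fitted to S&P 500 data."""
    var = np.var(returns)
    for r in returns:
        var = omega + alpha * r ** 2 + beta * var
    return np.sqrt(252 * var)

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (the benchmark model)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Daily log-returns (synthetic stand-in for the index series)
rets = np.random.default_rng(0).normal(0, 0.01, 500)
sigma = garch11_volatility(rets)
print("GARCH volatility:", sigma)
print("BS call price:", black_scholes_call(S=100, K=105, T=0.5, r=0.02,
                                           sigma=sigma))
# In the hybrid models, sigma (and other inputs) would feed a neural or
# neuro-fuzzy network trained on observed option prices instead of the
# closed-form Black-Scholes formula.
```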

  11. Heat transfer modeling an inductive approach

    CERN Document Server

    Sidebotham, George

    2015-01-01

    This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but with a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, offering a deeper, intrinsic grasp behind models of heat transfer. Developed from over twenty-five years of lecture notes to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering discipl...
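
    As an example of the "1-node lumped model" idea described above, the sketch below gives the closed-form solution for a body cooling toward an ambient temperature; the numerical values are illustrative.

```python
import math

def lumped_node_temperature(t, T0, T_inf, h, A, m, c):
    """1-node lumped-capacitance model: the body is a single node at uniform
    temperature, relaxing toward T_inf with time constant tau = m*c/(h*A).

    T(t) = T_inf + (T0 - T_inf) * exp(-t / tau)
    """
    tau = m * c / (h * A)
    return T_inf + (T0 - T_inf) * math.exp(-t / tau)

# A small steel part (illustrative numbers) cooling in air after 120 s
print(lumped_node_temperature(t=120, T0=450.0, T_inf=25.0,
                              h=25.0, A=0.01, m=0.5, c=490.0))
```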

  12. Nonperturbative approach to the attractive Hubbard model

    International Nuclear Information System (INIS)

    Allen, S.; Tremblay, A.-M. S.

    2001-01-01

    A nonperturbative approach to the single-band attractive Hubbard model is presented in the general context of functional-derivative approaches to many-body theories. As in previous work on the repulsive model, the first step is based on a local-field-type ansatz, on enforcement of the Pauli principle and a number of crucial sumrules. The Mermin-Wagner theorem in two dimensions is automatically satisfied. At this level, two-particle self-consistency has been achieved. In the second step of the approximation, an improved expression for the self-energy is obtained by using the results of the first step in an exact expression for the self-energy, where the high- and low-frequency behaviors appear separately. The result is a cooperon-like formula. The required vertex corrections are included in this self-energy expression, as required by the absence of a Migdal theorem for this problem. Other approaches to the attractive Hubbard model are critically compared. Physical consequences of the present approach and agreement with Monte Carlo simulations are demonstrated in the accompanying paper (following this one)

  13. General relativistic self-similar waves that induce an anomalous acceleration into the standard model of cosmology

    CERN Document Server

    Smoller, Joel

    2012-01-01

    We prove that the Einstein equations in Standard Schwarzschild Coordinates close to form a system of three ordinary differential equations for a family of spherically symmetric, self-similar expansion waves, and the critical ($k=0$) Friedmann universe associated with the pure radiation phase of the Standard Model of Cosmology (FRW), is embedded as a single point in this family. Removing a scaling law and imposing regularity at the center, we prove that the family reduces to an implicitly defined one parameter family of distinct spacetimes determined by the value of a new {\\it acceleration parameter} $a$, such that $a=1$ corresponds to FRW. We prove that all self-similar spacetimes in the family are distinct from the non-critical $k\

  14. Visual Similarity of Words Alone Can Modulate Hemispheric Lateralization in Visual Word Recognition: Evidence From Modeling Chinese Character Recognition.

    Science.gov (United States)

    Hsiao, Janet H; Cheung, Kit

    2016-03-01

    In Chinese orthography, the most common character structure consists of a semantic radical on the left and a phonetic radical on the right (SP characters); the minority, opposite arrangement also exists (PS characters). Recent studies showed that SP character processing is more left hemisphere (LH) lateralized than PS character processing. Nevertheless, it remains unclear whether this is due to phonetic radical position or character type frequency. Through computational modeling with artificial lexicons, in which we implement a theory of hemispheric asymmetry in perception but do not assume phonological processing being LH lateralized, we show that the difference in character type frequency alone is sufficient to exhibit the effect that the dominant type has a stronger LH lateralization than the minority type. This effect is due to higher visual similarity among characters in the dominant type than the minority type, demonstrating the modulation of visual similarity of words on hemispheric lateralization. Copyright © 2015 Cognitive Science Society, Inc.

  15. Quasirelativistic quark model in quasipotential approach

    CERN Document Server

    Matveev, V A; Savrin, V I; Sissakian, A N

    2002-01-01

    The interaction of relativistic particles is described within the framework of the quasipotential approach. The presentation is based on the so-called covariant simultaneous formulation of quantum field theory, whereby the theory is considered on a space-like three-dimensional hypersurface in Minkowski space. Special attention is paid to the methods of constructing various quasipotentials as well as to the applications of the quasipotential approach to describing the characteristics of relativistic particle interactions in quark models, namely: the elastic scattering amplitudes of hadrons, the mass spectra and widths of meson decays, and the cross sections of deep inelastic lepton scattering on hadrons

  16. A multiscale modeling approach for biomolecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Bowling, Alan, E-mail: bowling@uta.edu; Haghshenas-Jaryani, Mahdi, E-mail: mahdi.haghshenasjaryani@mavs.uta.edu [The University of Texas at Arlington, Department of Mechanical and Aerospace Engineering (United States)

    2015-04-15

    This paper presents a new multiscale molecular dynamics model for investigating the effects of external interactions, such as contact and impact, during stepping and docking of motor proteins and other biomolecular systems. The model retains the mass properties, ensuring that the result satisfies Newton's second law. This idea is presented using a simple particle model to facilitate discussion of the rigid body model; however, the particle model does provide insights into particle dynamics at the nanoscale. The resulting three-dimensional model predicts a significant decrease in the effect of the random forces associated with Brownian motion. This conclusion runs contrary to the widely accepted notion that the motor protein's movements are primarily the result of thermal effects. This work focuses on the mechanical aspects of protein locomotion; the effect of ATP hydrolysis is estimated as internal forces acting on the mechanical model. In addition, the proposed model can be numerically integrated in a reasonable amount of time. Herein, the motions predicted by the old and new modeling approaches are compared using a simplified model of myosin V.

  17. METHODOLOGICAL APPROACHES FOR MODELING THE RURAL SETTLEMENT DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Gorbenkova Elena Vladimirovna

    2017-10-01

    Full Text Available Subject: the paper describes the research results on validation of a rural settlement developmental model. The basic methods and approaches for solving the problem of assessment of the urban and rural settlement development efficiency are considered. Research objectives: determination of methodological approaches to modeling and creating a model for the development of rural settlements. Materials and methods: domestic and foreign experience in modeling the territorial development of urban and rural settlements and settlement structures was generalized. The motivation for using the Pentagon-model for solving similar problems was demonstrated. Based on a systematic analysis of existing development models of urban and rural settlements as well as the authors-developed method for assessing the level of agro-towns development, the systems/factors that are necessary for a rural settlement sustainable development are identified. Results: we created the rural development model which consists of five major systems that include critical factors essential for achieving a sustainable development of a settlement system: ecological system, economic system, administrative system, anthropogenic (physical system and social system (supra-structure. The methodological approaches for creating an evaluation model of rural settlements development were revealed; the basic motivating factors that provide interrelations of systems were determined; the critical factors for each subsystem were identified and substantiated. Such an approach was justified by the composition of tasks for territorial planning of the local and state administration levels. The feasibility of applying the basic Pentagon-model, which was successfully used for solving the analogous problems of sustainable development, was shown. Conclusions: the resulting model can be used for identifying and substantiating the critical factors for rural sustainable development and also become the basis of

  18. A new approach for developing adjoint models

    Science.gov (United States)

    Farrell, P. E.; Funke, S. W.

    2011-12-01

    Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, it requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and
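
    The sketch below illustrates, in plain numpy, the abstraction described above for a single parameter-dependent linear solve: the discrete adjoint equation is assembled from the same operator and the resulting gradient is checked against a finite difference. The operator, functional, and parameter are toy choices, not the library's actual interface.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A0 = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned base operator
A1 = rng.normal(size=(n, n))                   # parameter sensitivity dA/dm
b = rng.normal(size=n)

def forward(m):
    """Forward model: one linear solve A(m) u = b with A(m) = A0 + m*A1."""
    return np.linalg.solve(A0 + m * A1, b)

def functional(u):
    """Misfit-like functional J(u) = 0.5 * u^T u."""
    return 0.5 * u @ u

def adjoint_gradient(m):
    """Assemble and solve the discrete adjoint equation, then form dJ/dm:
    A(m)^T lam = dJ/du,   dJ/dm = -lam^T (dA/dm) u."""
    A = A0 + m * A1
    u = np.linalg.solve(A, b)
    lam = np.linalg.solve(A.T, u)              # dJ/du = u for this functional
    return -lam @ (A1 @ u)

m, eps = 0.3, 1e-6
fd = (functional(forward(m + eps)) - functional(forward(m - eps))) / (2 * eps)
print("adjoint gradient:  ", adjoint_gradient(m))
print("finite difference: ", fd)
```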

  19. Eutrophication Modeling Using Variable Chlorophyll Approach

    International Nuclear Information System (INIS)

    Abdolabadi, H.; Sarang, A.; Ardestani, M.; Mahjoobi, E.

    2016-01-01

    In this study, eutrophication was investigated in Lake Ontario to identify the interactions among effective drivers. The complexity of such a phenomenon was modeled using a system dynamics approach based on a consideration of constant and variable stoichiometric ratios. The system dynamics approach is a powerful tool for developing object-oriented models to simulate complex phenomena that involve feedback effects. Utilizing stoichiometric ratios is a method for converting the concentrations of state variables. During the physical segmentation of the model, Lake Ontario was divided into two layers, i.e., the epilimnion and hypolimnion, and differential equations were developed for each layer. The model structure included 16 state variables related to phytoplankton, herbivorous zooplankton, carnivorous zooplankton, ammonium, nitrate, dissolved phosphorus, and particulate and dissolved carbon in the epilimnion and hypolimnion during a time horizon of one year. Several verification tests, with a Nash-Sutcliffe coefficient close to 1 (0.98), a data correlation coefficient of 0.98, and low standard errors (0.96), indicated that the model performs well. The results revealed that there were significant differences in the concentrations of the state variables in constant and variable stoichiometry simulations. Consequently, the consideration of variable stoichiometric ratios in algae and nutrient concentration simulations may be applied in future modeling studies to enhance the accuracy of the results and reduce the likelihood of inefficient control policies.

  20. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    Science.gov (United States)

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.

  1. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. A new approach to estimating model errors based on EM is therefore proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it realizes a combination of statistics and dynamics to a certain extent.
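
    A minimal sketch of the experimental setup is given below: "observational data" are generated from a Lorenz (1963) system augmented with a periodic model-error term and compared with the unperturbed prediction model. The amplitude and period of the added term are arbitrary choices, and the evolutionary-modeling step that reconstructs the error is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, s):
    """Classic Lorenz (1963) prediction model."""
    x, y, z = s
    return [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z]

def lorenz_true(t, s, amp=2.0, period=5.0):
    """'Reality': the same equations plus a periodic model-error term
    (amplitude and period are illustrative)."""
    dx, dy, dz = lorenz(t, s)
    return [dx + amp * np.sin(2 * np.pi * t / period), dy, dz]

t_span, t_eval = (0.0, 10.0), np.linspace(0.0, 10.0, 1001)
s0 = [1.0, 1.0, 1.0]
obs = solve_ivp(lorenz_true, t_span, s0, t_eval=t_eval).y    # "observations"
pred = solve_ivp(lorenz, t_span, s0, t_eval=t_eval).y        # prediction model

# The residual between observations and the prediction model is the signal
# the evolutionary-modeling step would try to reconstruct.
print("RMS x-error:", np.sqrt(np.mean((obs[0] - pred[0]) ** 2)))
```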

  2. MODELS OF TECHNOLOGY ADOPTION: AN INTEGRATIVE APPROACH

    Directory of Open Access Journals (Sweden)

    Andrei OGREZEANU

    2015-06-01

    Full Text Available The interdisciplinary study of information technology adoption has developed rapidly over the last 30 years. Various theoretical models have been developed and applied, such as the Technology Acceptance Model (TAM), Innovation Diffusion Theory (IDT), the Theory of Planned Behavior (TPB), etc. The result of these many years of research is thousands of contributions to the field, which, however, remain highly fragmented. This paper develops a theoretical model of technology adoption by integrating major theories in the field: primarily IDT, TAM, and TPB. To do so while avoiding confusion, an approach that goes back to basics in the development of independent variable types is proposed, emphasizing (1) the logic of classification and (2) the psychological mechanisms behind variable types. Once developed, these types are then populated with variables originating in empirical research. Conclusions are developed on which types are underpopulated and present potential for future research. I end with a set of methodological recommendations for future application of the model.

  3. Interfacial Fluid Mechanics A Mathematical Modeling Approach

    CERN Document Server

    Ajaev, Vladimir S

    2012-01-01

    Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in the rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations and their relative importance in typical microscale applications is discussed. Then, several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail. Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. This book also: discusses mathematical models in the context of actual applications such as electrowetting; includes unique material on fluid flow near structured surfaces and phase change phenomena; and shows readers how to solve modeling problems related to microscale multiphase flows. Interfacial Fluid Me...

  4. Continuum modeling an approach through practical examples

    CERN Document Server

    Muntean, Adrian

    2015-01-01

    This book develops continuum modeling skills and approaches the topic from three sides: (1) derivation of global integral laws together with the associated local differential equations, (2) design of constitutive laws and (3) modeling boundary processes. The focus of this presentation lies on many practical examples covering aspects such as coupled flow, diffusion and reaction in porous media or microwave heating of a pizza, as well as traffic issues in bacterial colonies and energy harvesting from geothermal wells. The target audience comprises primarily graduate students in pure and applied mathematics as well as working practitioners in engineering who are faced by nonstandard rheological topics like those typically arising in the food industry.

  5. Global Environmental Change: An integrated modelling approach

    International Nuclear Information System (INIS)

    Den Elzen, M.

    1993-01-01

    Two major global environmental problems are dealt with: climate change and stratospheric ozone depletion (and their mutual interactions), briefly surveyed in Part 1. In Part 2 a brief description of the integrated modelling framework IMAGE 1.6 is given. Some specific parts of the model are described in more detail in other chapters, e.g. the carbon cycle model, the atmospheric chemistry model, the halocarbon model, and the UV-B impact model. In Part 3 an uncertainty analysis of climate change and stratospheric ozone depletion is presented (Chapter 4). Chapter 5 briefly reviews the social and economic uncertainties implied by future greenhouse gas emissions. Chapters 6 and 7 describe a model and sensitivity analysis pertaining to the scientific uncertainties and/or lacunae in the sources and sinks of methane and carbon dioxide, and their biogeochemical feedback processes. Chapter 8 presents an uncertainty and sensitivity analysis of the carbon cycle model, the halocarbon model, and the IMAGE model 1.6 as a whole. Part 4 presents the risk assessment methodology as applied to the problems of climate change and stratospheric ozone depletion more specifically. In Chapter 10, this methodology is used as a means with which to assess current ozone policy and a wide range of halocarbon policies. Chapter 11 presents and evaluates the simulated globally-averaged temperature and sea level rise (indicators) for the IPCC-1990 and 1992 scenarios, concluding with a Low Risk scenario, which would meet the climate targets. Chapter 12 discusses the impact of sea level rise on the frequency of the Dutch coastal defence system (indicator) for the IPCC-1990 scenarios. Chapter 13 presents projections of mortality rates due to stratospheric ozone depletion based on model simulations employing the UV-B chain model for a number of halocarbon policies. Chapter 14 presents an approach for allocating future emissions of CO2 among regions. (Abstract Truncated)

  6. Processes of Similarity Judgment

    Science.gov (United States)

    Larkey, Levi B.; Markman, Arthur B.

    2005-01-01

    Similarity underlies fundamental cognitive capabilities such as memory, categorization, decision making, problem solving, and reasoning. Although recent approaches to similarity appreciate the structure of mental representations, they differ in the processes posited to operate over these representations. We present an experiment that…

  7. Studies of a general flat space/boson star transition model in a box through a language similar to holographic superconductors

    Science.gov (United States)

    Peng, Yan

    2017-07-01

    We study a general flat space/boson star transition model in a quasi-local ensemble through approaches familiar from holographic superconductor theories. We manage to find a parameter ψ₂, which proves useful in disclosing properties of phase transitions. In this work, we explore the effects of the scalar mass, scalar charge and Stückelberg mechanism on the critical phase transition points and the order of transitions, mainly from the behavior of the parameter ψ₂. We note that the properties of transitions in quasi-local gravity are strikingly similar to those in holographic superconductor models. We also obtain an analytical relation ψ₂ ∝ (μ − μ_c)^{1/2}, which also holds for the condensed scalar operator in the holographic insulator/superconductor system, in accordance with mean field theories.

  8. Crime Modeling using Spatial Regression Approach

    Science.gov (United States)

    Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.

    2018-01-01

    Acts of criminality in Indonesia increase in both variety and quantity every year: murder, rape, assault, vandalism, theft, fraud, fencing, and other cases that make people feel unsafe. The risk of society being exposed to crime is measured by the number of cases reported to the police; the higher the number of reports, the higher the crime in the region. In this research, criminality in South Sulawesi, Indonesia, is modelled with society's exposure to the risk of crime as the dependent variable. Modelling is done with an areal approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables are population density, the number of poor residents, GDP per capita, unemployment and the human development index (HDI). The spatial regression analysis shows that there are no spatial dependencies, either in the lag or in the errors, in South Sulawesi.
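
    For reference, the standard forms of the two spatial regression models named above are given below, where W is the spatial weights matrix and ρ and λ are the spatial lag and spatial error coefficients; the paper's exact specification may differ.

    ```latex
    % Spatial Autoregressive (SAR) and Spatial Error Model (SEM)
    \begin{align}
      \text{SAR:}\quad & y = \rho W y + X\beta + \varepsilon \\
      \text{SEM:}\quad & y = X\beta + u, \qquad u = \lambda W u + \varepsilon
    \end{align}
    ```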

  9. From undefined red smear cheese consortia to minimal model communities both exhibiting similar anti-listerial activity on a cheese-like matrix.

    Science.gov (United States)

    Imran, M; Desmasures, N; Vernoux, J-P

    2010-12-01

    Starting from one undefined cheese smear consortium exhibiting anti-listerial activity (signal) at 15 °C, 50 yeasts and 39 bacteria were identified by partial rDNA sequencing. Construction of microbial communities was done either by an addition or by an erosion approach, with the aim of obtaining minimal communities having a signal similar to that of the initial smear. The signal of these microbial communities was monitored in a cheese microcosm for 14 days under ripening conditions. In the addition scheme, strains having significant signals were mixed step by step. Five-member communities, obtained by addition of a Gram-negative bacterium to two yeasts and two Gram-positive bacteria, enhanced the signal dramatically, contrary to six-member communities including two Gram-negative bacteria. In the erosion approach, a progressive reduction of the 89 initial strains was performed. While intermediate communities (89, 44 and 22 members) exhibited a lower signal than the initial smear consortium, eleven- and six-member communities gave an almost equally strong signal. It was noteworthy that the final minimal model communities obtained by the erosion and addition approaches both had anti-listerial activity while consisting of different strains. In conclusion, some minimal model communities can have higher anti-listerial effectiveness than individual strains or the initial 89 micro-organisms from the smear. Thus, microbial interactions are involved in the production and modulation of anti-listerial signals in cheese surface communities. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. Consequences of team charter quality: Teamwork mental model similarity and team viability in engineering design student teams

    Science.gov (United States)

    Conway Hughston, Veronica

    Since 1996 ABET has mandated that undergraduate engineering degree granting institutions focus on learning outcomes such as professional skills (i.e. solving unstructured problems and working in teams). As a result, engineering curricula were restructured to include team based learning---including team charters. Team charters were diffused into engineering education as one of many instructional activities to meet the ABET accreditation mandates. However, the implementation and execution of team charters into engineering team based classes has been inconsistent and accepted without empirical evidence of the consequences. The purpose of the current study was to investigate team effectiveness, operationalized as team viability, as an outcome of team charter implementation in an undergraduate engineering team based design course. Two research questions were the focus of the study: a) What is the relationship between team charter quality and viability in engineering student teams, and b) What is the relationship among team charter quality, teamwork mental model similarity, and viability in engineering student teams? Thirty-eight intact teams, 23 treatment and 15 comparison, participated in the investigation. Treatment teams attended a team charter lecture, and completed a team charter homework assignment. Each team charter was assessed and assigned a quality score. Comparison teams did not join the lecture, and were not asked to create a team charter. All teams completed each data collection phase: a) similarity rating pretest; b) similarity posttest; and c) team viability survey. Findings indicate that team viability was higher in teams that attended the lecture and completed the charter assignment. Teams with higher quality team charter scores reported higher levels of team viability than teams with lower quality charter scores. Lastly, no evidence was found to support teamwork mental model similarity as a partial mediator of the team charter quality on team viability

  11. Simplification and Shift in Cognition of Political Difference: Applying the Geometric Modeling to the Analysis of Semantic Similarity Judgment

    Science.gov (United States)

    Kato, Junko; Okada, Kensuke

    2011-01-01

    Perceiving differences by means of spatial analogies is intrinsic to human cognition. Multi-dimensional scaling (MDS) analysis based on Minkowski geometry has been used primarily on sensory similarity judgment data, leaving judgments on abstract differences unanalyzed. Indeed, analysts have failed to find appropriate experimental or real-life data in this regard. Our MDS analysis used survey data on political scientists' judgments of the similarities and differences between political positions expressed in terms of distance. Both distance smoothing and majorization techniques were applied to a three-way dataset of similarity judgments provided by at least seven experts on at least five parties' positions on at least seven policies (i.e., originally yielding 245 dimensions) to substantially reduce the risk of local minima. The analysis found two dimensions, which were sufficient for mapping differences, and the city-block metric fit better than the Euclidean metric in all datasets obtained from 13 countries. Most city-block dimensions were highly correlated with the simplified criterion for differences actually used in real politics (i.e., the left–right ideology). The isometry of the city-block and dominance metrics in two-dimensional space carries further implications. More specifically, individuals may pay attention to two dimensions (if represented in the city-block metric) or focus on a single dimension (if represented in the dominance metric) when judging differences between the same objects. Switching between metrics may be expected to occur during cognitive processing as frequently as the apparent discontinuities and shifts in human attention that may underlie changing judgments in real situations. Consequently, the results lend strong support to the validity of geometric models for representing an important form of social cognition, namely the cognition of political differences, which is deeply rooted in human nature. PMID:21673959
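
    The metric distinction at the heart of the analysis can be illustrated with a tiny example: pairwise distances between (invented) two-dimensional party positions under the Euclidean and city-block (Minkowski p = 1) metrics. The positions are hypothetical and not taken from the survey data.

    ```python
    # Euclidean vs. city-block distances between hypothetical party positions.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    positions = np.array([
        [-2.0,  1.0],   # party A
        [-1.0, -1.5],   # party B
        [ 0.5,  0.5],   # party C
        [ 2.0, -0.5],   # party D
    ])

    euclidean = squareform(pdist(positions, metric="euclidean"))
    cityblock = squareform(pdist(positions, metric="cityblock"))

    # In city-block space the two dimensions contribute additively, which is
    # why attending to each dimension separately fits that metric naturally.
    print(np.round(euclidean, 2))
    print(np.round(cityblock, 2))
    ```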

  12. A nationwide modelling approach to decommissioning - 16182

    International Nuclear Information System (INIS)

    Kelly, Bernard; Lowe, Andy; Mort, Paul

    2009-01-01

    In this paper we describe a proposed UK national approach to modelling decommissioning. For the first time, we shall have an insight into optimizing the safety and efficiency of a national decommissioning strategy. To do this we use the General Case Integrated Waste Algorithm (GIA), a universal model of decommissioning nuclear plant, power plant, waste arisings and the associated knowledge capture. The model scales from individual items of plant through cells, groups of cells, buildings, whole sites and then on up to a national scale. We describe the national vision for GIA which can be broken down into three levels: 1) the capture of the chronological order of activities that an experienced decommissioner would use to decommission any nuclear facility anywhere in the world - this is Level 1 of GIA; 2) the construction of an Operational Research (OR) model based on Level 1 to allow rapid what if scenarios to be tested quickly (Level 2); 3) the construction of a state of the art knowledge capture capability that allows future generations to learn from our current decommissioning experience (Level 3). We show the progress to date in developing GIA in levels 1 and 2. As part of level 1, GIA has assisted in the development of an IMechE professional decommissioning qualification. Furthermore, we describe GIA as the basis of a UK-Owned database of decommissioning norms for such things as costs, productivity, durations etc. From level 2, we report on a pilot study that has successfully tested the basic principles for the OR numerical simulation of the algorithm. We then highlight the advantages of applying the OR modelling approach nationally. In essence, a series of 'what if...' scenarios can be tested that will improve the safety and efficiency of decommissioning. (authors)

  13. Modeling in transport phenomena a conceptual approach

    CERN Document Server

    Tosun, Ismail

    2007-01-01

    Modeling in Transport Phenomena, Second Edition presents, and clearly explains with example problems, the basic concepts and their applications to fluid flow, heat transfer, mass transfer, chemical reaction engineering and thermodynamics. A balanced approach between analysis and synthesis is presented, so students will understand how to use the solution in engineering analysis. Systematic derivations of the equations and the physical significance of each term are given in detail, so that students can easily understand and follow the material. There is a strong incentive in science and engineering to

  14. Nuclear physics for applications. A model approach

    International Nuclear Information System (INIS)

    Prussin, S.G.

    2007-01-01

    Written by a researcher and teacher with experience at top institutes in the US and Europe, this textbook provides advanced undergraduates minoring in physics with working knowledge of the principles of nuclear physics. Simplifying models and approaches reveal the essence of the principles involved, with the mathematical and quantum mechanical background integrated in the text where it is needed and not relegated to the appendices. The practicality of the book is enhanced by numerous end-of-chapter problems and solutions available on the Wiley homepage. (orig.)

  15. Pedagogic process modeling: Humanistic-integrative approach

    Directory of Open Access Journals (Sweden)

    Boritko Nikolaj M.

    2007-01-01

    Full Text Available The paper deals with some current problems of modeling the dynamics of the development of the individual's subject features. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logic and regularity of the development of the process; discreteness (stageability) indicates the qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps for singling out a particular stage are offered, together with the algorithm for developing an integrative model for it. The suggested conclusions might be of use for further theoretical research, analyses of educational practices and realistic prediction of pedagogical phenomena.

  16. Function Modelling Of The Market And Assessing The Degree Of Similarity Between Real Properties - Dependent Or Independent Procedures In The Process Of Office Property Valuation

    Directory of Open Access Journals (Sweden)

    Barańska Anna

    2015-09-01

    Full Text Available Referring to the two-stage algorithm for real estate valuation developed and presented in previous publications (e.g. Barańska 2011), this article addresses the problem of the relationship between the two stages of the algorithm. An essential part of the first stage is multi-dimensional function modelling of the real estate market. By selecting the model best fitted to the market data, in which the dependent variable is always the price of a real property, a set of market attributes is obtained which in this model are considered to be price-determining. In the second stage, from the collection of real estate which served as a database in the process of estimating model parameters, the objects selected are those most similar to the one subject to valuation, and they form the basis for predicting the final value of the property being valued. Assessing the degree of similarity between real properties can be carried out based on the full spectrum of real estate attributes that potentially affect their value and about which it is possible to gather information, or only on the basis of those attributes which were considered to be price-determining in the function modelling. It can also be performed by various methods. This article examines the effect of these various approaches on the final value of the property obtained using the two-stage prediction. In order to fulfill the study aim as precisely as possible, the results of each calculation step of the algorithm have been investigated in detail. Each of them points to the independence of the two procedures.

  17. Pulmonary parenchyma segmentation in thin CT image sequences with spectral clustering and geodesic active contour model based on similarity

    Science.gov (United States)

    He, Nana; Zhang, Xiaolong; Zhao, Juanjuan; Zhao, Huilan; Qiang, Yan

    2017-07-01

    While the popular thin-layer scanning technology of spiral CT has helped to improve the diagnosis of lung diseases, the large volumes of scan images produced by the technology also dramatically increase the lesion-detection workload of physicians. Computer-aided diagnosis techniques such as lesion segmentation in thin CT sequences have been developed to address this issue, but it remains a challenge to achieve high segmentation efficiency and accuracy without much manual intervention. In this paper, we present our research on automated segmentation of lung parenchyma with an improved geodesic active contour model, that is, a geodesic active contour model based on similarity (GACBS). Combining a spectral clustering algorithm based on the Nyström method (SCN) with GACBS, this algorithm first extracts key image slices, then uses these slices to generate an initial contour of the pulmonary parenchyma of un-segmented slices with an interpolation algorithm, and finally segments the lung parenchyma of the un-segmented slices. Experimental results show that the segmentation results generated by our method are close to what manual segmentation can produce, with an average volume overlap ratio of 91.48%.
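
    A minimal sketch of a morphological geodesic active contour on a single slice, using scikit-image, is shown below. This is not the authors' GACBS/SCN pipeline: a synthetic two-"lung" phantom replaces the CT slice, and a simple box level set stands in for the initial contour that the paper interpolates from spectrally clustered key slices.

    ```python
    # Morphological geodesic active contour on a synthetic "slice".
    import numpy as np
    from skimage.segmentation import (inverse_gaussian_gradient,
                                      morphological_geodesic_active_contour)

    # Synthetic slice: bright "body" disk containing two darker "lung" disks.
    yy, xx = np.mgrid[0:200, 0:200]
    body = ((yy - 100) ** 2 + (xx - 100) ** 2) < 90 ** 2
    lungs = (((yy - 100) ** 2 + (xx - 65) ** 2) < 30 ** 2) | \
            (((yy - 100) ** 2 + (xx - 135) ** 2) < 30 ** 2)
    slice_img = 0.8 * body.astype(float) - 0.6 * lungs.astype(float)

    # Edge-stopping image: values close to zero along strong gradients.
    gimage = inverse_gaussian_gradient(slice_img)

    # Initial level set: a box covering most of the image; balloon < 0 shrinks
    # the contour inward until it locks onto the gradient edges.
    init_ls = np.zeros(slice_img.shape, dtype=np.int8)
    init_ls[10:-10, 10:-10] = 1
    mask = morphological_geodesic_active_contour(gimage, 150, init_level_set=init_ls,
                                                 smoothing=2, balloon=-1)
    print("segmented fraction of slice:", mask.mean())
    ```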

  18. A novel approach to pipeline tensioner modeling

    Energy Technology Data Exchange (ETDEWEB)

    O' Grady, Robert; Ilie, Daniel; Lane, Michael [MCS Software Division, Galway (Ireland)

    2009-07-01

    As subsea pipeline developments continue to move into deep and ultra-deep water locations, there is an increasing need for the accurate prediction of expected pipeline fatigue life. A significant factor that must be considered as part of this process is the fatigue damage sustained by the pipeline during installation. The magnitude of this installation-related damage is governed by a number of different agents, one of which is the dynamic behavior of the tensioner systems during pipe-laying operations. There are a variety of traditional finite element methods for representing dynamic tensioner behavior. These existing methods, while basic in nature, have been proven to provide adequate forecasts in terms of the dynamic variation in typical installation parameters such as top tension and sagbend/overbend strain. However due to the simplicity of these current approaches, some of them tend to over-estimate the frequency of tensioner pay out/in under dynamic loading. This excessive level of pay out/in motion results in the prediction of additional stress cycles at certain roller beds, which in turn leads to the prediction of unrealistic fatigue damage to the pipeline. This unwarranted fatigue damage then equates to an over-conservative value for the accumulated damage experienced by a pipeline weld during installation, and so leads to a reduction in the estimated fatigue life for the pipeline. This paper describes a novel approach to tensioner modeling which allows for greater control over the velocity of dynamic tensioner pay out/in and so provides a more accurate estimation of fatigue damage experienced by the pipeline during installation. The paper reports on a case study, as outlined in the proceeding section, in which a comparison is made between results from this new tensioner model and from a more conventional approach. The comparison considers typical installation parameters as well as an in-depth look at the predicted fatigue damage for the two methods

  19. The semantic similarity ensemble

    Directory of Open Access Journals (Sweden)

    Andrea Ballatore

    2013-12-01

    Full Text Available Computational measures of semantic similarity between geographic terms provide valuable support across geographic information retrieval, data mining, and information integration. To date, a wide variety of approaches to geo-semantic similarity have been devised. A judgment of similarity is not intrinsically right or wrong, but obtains a certain degree of cognitive plausibility, depending on how closely it mimics human behavior. Thus selecting the most appropriate measure for a specific task is a significant challenge. To address this issue, we make an analogy between computational similarity measures and soliciting domain expert opinions, which incorporate a subjective set of beliefs, perceptions, hypotheses, and epistemic biases. Following this analogy, we define the semantic similarity ensemble (SSE as a composition of different similarity measures, acting as a panel of experts having to reach a decision on the semantic similarity of a set of geographic terms. The approach is evaluated in comparison to human judgments, and results indicate that an SSE performs better than the average of its parts. Although the best member tends to outperform the ensemble, all ensembles outperform the average performance of each ensemble's member. Hence, in contexts where the best measure is unknown, the ensemble provides a more cognitively plausible approach.
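
    The panel-of-experts idea can be sketched in a few lines: each member measure scores a pair of terms, and the ensemble score is the mean of the members' scores. The three member measures below are simple stand-ins, not the actual geo-semantic measures evaluated in the paper.

    ```python
    # Semantic similarity ensemble (SSE) sketch: average of member measures.
    from statistics import mean

    def jaccard(a: str, b: str) -> float:
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

    def char_overlap(a: str, b: str) -> float:
        ca, cb = set(a.lower()), set(b.lower())
        return len(ca & cb) / len(ca | cb) if ca | cb else 0.0

    def length_ratio(a: str, b: str) -> float:
        return min(len(a), len(b)) / max(len(a), len(b)) if a and b else 0.0

    MEMBERS = [jaccard, char_overlap, length_ratio]

    def ensemble_similarity(term_a: str, term_b: str) -> float:
        """Each member acts as one 'expert'; the ensemble averages their votes."""
        return mean(m(term_a, term_b) for m in MEMBERS)

    print(ensemble_similarity("mountain ridge", "mountain range"))
    ```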

  20. Similarity Measure of Graphs

    Directory of Open Access Journals (Sweden)

    Amine Labriji

    2017-07-01

    Full Text Available Identifying the similarity of graphs is a highly recommended research field in the semantic Web, artificial intelligence, shape recognition and information retrieval. One of the fundamental problems of graph databases is finding graphs similar to a query graph. Existing approaches to this problem are usually based on the nodes and arcs of the two graphs, regardless of parental semantic links. For instance, a common connection is not identified as being part of the similarity of two graphs in cases such as two graphs without common concepts, a measure of similarity based on the union of two graphs, one based on the notion of the maximum common sub-graph (SCM), or the graph edit distance. This leads to an inadequate situation in the context of information retrieval. To overcome this problem, we suggest a new measure of similarity between graphs, based on the similarity measure of Wu and Palmer. We show that this new measure satisfies the properties of a similarity measure and we apply it to examples. The results show that our measure provides a runtime gain compared to existing approaches. In addition, we compared the relevance of the similarity values obtained; it appears that this new graph measure is advantageous and offers a contribution to solving the problem mentioned above.
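
    For reference, the Wu and Palmer measure on which the proposed graph measure builds has the standard form below, where LCS is the least common subsumer of two concepts in the hierarchy and depth() counts edges from the root; the paper's extension to whole graphs is not reproduced here.

    ```latex
    % Wu and Palmer similarity between concepts c_1 and c_2
    \[
      \operatorname{sim}_{WP}(c_1, c_2) \;=\;
      \frac{2 \cdot \operatorname{depth}\bigl(\mathrm{LCS}(c_1, c_2)\bigr)}
           {\operatorname{depth}(c_1) + \operatorname{depth}(c_2)}
    \]
    ```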

  1. Mechanical instability and titanium particles induce similar transcriptomic changes in a rat model for periprosthetic osteolysis and aseptic loosening

    Directory of Open Access Journals (Sweden)

    Mehdi Amirhosseini

    2017-12-01

    Full Text Available Wear debris particles released from prosthetic bearing surfaces and mechanical instability of implants are two main causes of periprosthetic osteolysis. While particle-induced loosening has been studied extensively, the mechanisms through which mechanical factors lead to implant loosening have been less investigated. This study compares the transcriptional profiles associated with osteolysis in a rat model for aseptic loosening, induced by either mechanical instability or titanium particles. Rats were exposed to mechanical instability or titanium particles. After 15 min, 3, 48 or 120 h from the start of stimulation, gene expression changes in periprosthetic bone tissue were determined by microarray analysis. Microarray data were analyzed with the PANTHER Gene List Analysis tool and Ingenuity Pathway Analysis (IPA). Both types of osteolytic stimulation led to gene regulation in comparison to unstimulated controls after 3, 48 or 120 h. However, when mechanical instability was compared to titanium particles, no gene showed a statistically significant difference (fold change ≥ ±1.5 and adjusted p-value ≤ 0.05) at any time point. There was a remarkable similarity in the numbers and functional classification of regulated genes. Pathway analysis showed several inflammatory pathways activated by both stimuli, including Acute Phase Response signaling, IL-6 signaling and Oncostatin M signaling. Quantitative PCR confirmed the changes in expression of key genes involved in osteolysis observed by global transcriptomics. Inflammatory mediators including interleukin (IL)-6, IL-1β, chemokine (C-C motif) ligand 2 (CCL2), prostaglandin-endoperoxide synthase (Ptgs2) and leukemia inhibitory factor (LIF) showed strong upregulation, as assessed by both microarray and qPCR. By investigating genome-wide expression changes we show that, despite the different nature of mechanical implant instability and titanium particles, osteolysis seems to be induced through similar biological

  2. Numerical modelling of carbonate platforms and reefs: approaches and opportunities

    Energy Technology Data Exchange (ETDEWEB)

    Dalmasso, H.; Montaggioni, L.F.; Floquet, M. [Universite de Provence, Marseille (France). Centre de Sedimentologie-Palaeontologie; Bosence, D. [Royal Holloway University of London, Egham (United Kingdom). Dept. of Geology

    2001-07-01

    This paper compares different computing procedures that have been utilized in simulating shallow-water carbonate platform development. Based on our geological knowledge we can usually give a rather accurate qualitative description of the mechanisms controlling geological phenomena. Further description requires the use of computer stratigraphic simulation models that allow quantitative evaluation and understanding of the complex interactions of sedimentary depositional carbonate systems. The roles of modelling include: (1) encouraging accuracy and precision in data collection and process interpretation (Watney et al., 1999); (2) providing a means to quantitatively test interpretations concerning the control of various mechanisms on producing sedimentary packages; (3) predicting or extrapolating results into areas of limited control; (4) gaining new insights regarding the interaction of parameters; (5) helping focus on future studies to resolve specific problems. This paper addresses two main questions, namely: (1) What are the advantages and disadvantages of various types of models? (2) How well do models perform? In this paper we compare and discuss the application of six numerical models: CARBONATE (Bosence and Waltham, 1990), FUZZIM (Nordlund, 1999), CARBPLAT (Bosscher, 1992), DYNACARB (Li et al., 1993), PHIL (Bowman, 1997) and SEDPAK (Kendall et al., 1991). The comparison, testing and evaluation of these models allow one to gain a better knowledge and understanding of controlling parameters of carbonate platform development, which are necessary for modelling. Evaluating numerical models, critically comparing results from models using different approaches, and pushing experimental tests to their limits, provide an effective vehicle to improve and develop new numerical models. A main feature of this paper is to closely compare the performance between two numerical models: a forward model (CARBONATE) and a fuzzy logic model (FUZZIM). These two models use common

  3. Approaches and models of intercultural education

    Directory of Open Access Journals (Sweden)

    Iván Manuel Sánchez Fontalvo

    2013-10-01

    Full Text Available Building an intercultural society requires awareness in all social spheres, and education plays a central role. This role is transcendental, since education must promote spaces that form people with the virtues and capacities to live together in multicultural contexts and social diversity (sometimes unequal) in an increasingly globalized and interconnected world, and must foster shared feelings of civic belonging to the neighborhood, city, region and country, giving people concern for, and critical judgement of, marginalization, poverty, misery and the inequitable distribution of wealth, which are causes of structural violence, while at the same time motivating them to work for the welfare and transformation of these scenarios. Given these premises, it is important to know the approaches and models of intercultural education that have been developed so far, analysing their impact on the educational contexts where they are applied.

  4. Transport modeling: An artificial immune system approach

    Directory of Open Access Journals (Sweden)

    Teodorović Dušan

    2006-01-01

    Full Text Available This paper describes an artificial immune system approach (AIS to modeling time-dependent (dynamic, real time transportation phenomenon characterized by uncertainty. The basic idea behind this research is to develop the Artificial Immune System, which generates a set of antibodies (decisions, control actions that altogether can successfully cover a wide range of potential situations. The proposed artificial immune system develops antibodies (the best control strategies for different antigens (different traffic "scenarios". This task is performed using some of the optimization or heuristics techniques. Then a set of antibodies is combined to create Artificial Immune System. The developed Artificial Immune transportation systems are able to generalize, adapt, and learn based on new knowledge and new information. Applications of the systems are considered for airline yield management, the stochastic vehicle routing, and real-time traffic control at the isolated intersection. The preliminary research results are very promising.

  5. System approach to modeling of industrial technologies

    Science.gov (United States)

    Toropov, V. S.; Toropov, E. S.

    2018-03-01

    The authors presented a system of methods for modeling and improving industrial technologies. The system consists of information and software. The information part is structured information about industrial technologies. The structure has its template. The template has several essential categories used to improve the technological process and eliminate weaknesses in the process chain. The base category is the physical effect that takes place when the technical process proceeds. The programming part of the system can apply various methods of creative search to the content stored in the information part of the system. These methods pay particular attention to energy transformations in the technological process. The system application will allow us to systematize the approach to improving technologies and obtaining new technical solutions.

  6. Using sparse LU factorisation to precondition GMRES for a family of similarly structured matrices arising from process modelling

    Energy Technology Data Exchange (ETDEWEB)

    Brooking, C. [Univ. of Bath (United Kingdom)

    1996-12-31

    Process engineering software is used to simulate the operation of large chemical plants. Such simulations are used for a variety of tasks, including operator training. For the software to be of practical use for this, dynamic simulations need to run in real-time. The models that the simulation is based upon are written in terms of Differential Algebraic Equations (DAEs). In the numerical time-integration of systems of DAEs using an implicit method such as backward Euler, the solution of nonlinear systems is required at each integration point. When solved using Newton's method, this leads to the repeated solution of nonsymmetric sparse linear systems. These systems range in size from 500 to 20,000 variables. A typical integration may require around 3000 timesteps, and if 4 Newton iterates were needed on each time step, then this means approximately 12,000 linear systems must be solved. The matrices produced by the simulations have a similar sparsity pattern throughout the integration. They are also severely ill-conditioned, and have widely-scattered spectra.
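
    A SciPy sketch of the general technique is given below: factorise one matrix of the family with a sparse incomplete LU and reuse that factorisation as a preconditioner for GMRES on subsequent, similarly structured systems. The random sparse matrix is a stand-in for a process-model Jacobian, not data from the paper.

    ```python
    # Sparse ILU factorisation reused as a GMRES preconditioner across a
    # family of matrices sharing one sparsity pattern.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    rng = np.random.default_rng(1)
    n = 2000
    A0 = sp.random(n, n, density=5.0 / n, random_state=1, format="csc") \
         + sp.eye(n, format="csc") * 5.0

    # Factorise the first matrix of the family once.
    ilu = spla.spilu(A0, drop_tol=1e-4, fill_factor=10)
    M = spla.LinearOperator((n, n), matvec=ilu.solve)

    # Later Newton iterations produce matrices with the same sparsity pattern
    # and mildly perturbed entries; reuse the old factorisation as M.
    A1 = A0 + sp.eye(n, format="csc") * 0.1
    b = rng.normal(size=n)
    x, info = spla.gmres(A1, b, M=M, maxiter=200)
    print("info =", info, "| residual norm =", np.linalg.norm(A1 @ x - b))
    ```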

  7. A novel approach to multihazard modeling and simulation.

    Science.gov (United States)

    Smith, Silas W; Portelli, Ian; Narzisi, Giuseppe; Nelson, Lewis S; Menges, Fabian; Rekow, E Dianne; Mincer, Joshua S; Mishra, Bhubaneswar; Goldfrank, Lewis R

    2009-06-01

    To develop and apply a novel modeling approach to support medical and public health disaster planning and response using a sarin release scenario in a metropolitan environment. An agent-based disaster simulation model was developed incorporating the principles of dose response, surge response, and psychosocial characteristics superimposed on topographically accurate geographic information system architecture. The modeling scenarios involved passive and active releases of sarin in multiple transportation hubs in a metropolitan city. Parameters evaluated included emergency medical services, hospital surge capacity (including implementation of disaster plan), and behavioral and psychosocial characteristics of the victims. In passive sarin release scenarios of 5 to 15 L, mortality increased nonlinearly from 0.13% to 8.69%, reaching 55.4% with active dispersion, reflecting higher initial doses. Cumulative mortality rates from releases in 1 to 3 major transportation hubs similarly increased nonlinearly as a function of dose and systemic stress. The increase in mortality rate was most pronounced in the 80% to 100% emergency department occupancy range, analogous to the previously observed queuing phenomenon. Effective implementation of hospital disaster plans decreased mortality and injury severity. Decreasing ambulance response time and increasing available responding units reduced mortality among potentially salvageable patients. Adverse psychosocial characteristics (excess worry and low compliance) increased demands on health care resources. Transfer to alternative urban sites was possible. An agent-based modeling approach provides a mechanism to assess complex individual and systemwide effects in rare events.

  8. Social Network Analysis and Nutritional Behavior: An Integrated Modeling Approach.

    Science.gov (United States)

    Senior, Alistair M; Lihoreau, Mathieu; Buhl, Jerome; Raubenheimer, David; Simpson, Stephen J

    2016-01-01

    Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent research combining state-space models of nutritional geometry with agent-based models (ABMs) shows how nutrient targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit ABMs that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interactions in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.

  9. ECOMOD - An ecological approach to radioecological modelling

    International Nuclear Information System (INIS)

    Sazykina, Tatiana G.

    2000-01-01

    A unified methodology is proposed to simulate the dynamic processes of radionuclide migration in aquatic food chains in parallel with their stable analogue elements. The distinguishing feature of the unified radioecological/ecological approach is the description of radionuclide migration along with dynamic equations for the ecosystem. The ability of the methodology to predict the results of radioecological experiments is demonstrated by an example of radionuclide (iron group) accumulation by a laboratory culture of the algae Platymonas viridis. Based on the unified methodology, the 'ECOMOD' radioecological model was developed to simulate dynamic radioecological processes in aquatic ecosystems. It comprises three basic modules, which are operated as a set of inter-related programs. The 'ECOSYSTEM' module solves non-linear ecological equations, describing the biomass dynamics of essential ecosystem components. The 'RADIONUCLIDE DISTRIBUTION' module calculates the radionuclide distribution in abiotic and biotic components of the aquatic ecosystem. The 'DOSE ASSESSMENT' module calculates doses to aquatic biota and doses to man from aquatic food chains. The application of the ECOMOD model to reconstruct the radionuclide distribution in the Chernobyl Cooling Pond ecosystem in the early period after the accident shows good agreement with observations

  10. Modelling Approach In Islamic Architectural Designs

    Directory of Open Access Journals (Sweden)

    Suhaimi Salleh

    2014-06-01

    Full Text Available Architectural design is one of the main factors that should be considered in minimizing negative impacts in the planning and structural development of buildings such as mosques. In this paper, the ergonomics perspective is revisited, focusing on conditional factors involving the organisational, psychological, social and population dimensions as a whole. The paper highlights functional and architectural integration with aesthetic elements in the form of decorative and ornamental outlay, as well as incorporating building structures such as walls, domes and gates. It further focuses on the mathematical aspects of the architectural designs, such as polar equations and the golden ratio. These designs are modelled as mathematical equations of various forms, while the golden ratio in the mosque is verified using two techniques, namely geometric construction and a numerical method. The exemplary designs are taken from the Sabah Bandaraya Mosque in Likas, Kota Kinabalu and the Sarawak State Mosque in Kuching, while the Universiti Malaysia Sabah Mosque is used for the golden ratio. Results show that Islamic architectural buildings and designs have long had mathematical concepts and techniques underlying their foundations; hence, a modelling approach is needed to rejuvenate these Islamic designs.
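
    The two mathematical ingredients named above have the standard forms shown below: a polar "rose" curve of the kind used to model radial ornament, and the golden ratio with its defining relation. The particular parameter values used for the mosque designs are not reproduced here.

    ```latex
    % Polar rose curve and golden ratio (illustrative standard forms)
    \[
      r(\theta) = a \cos(k\theta),
      \qquad
      \varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618,
      \qquad
      \varphi^2 = \varphi + 1 .
    \]
    ```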

  11. Multiscale modeling of alloy solidification using a database approach

    Science.gov (United States)

    Tan, Lijian; Zabaras, Nicholas

    2007-11-01

    A two-scale model based on a database approach is presented to investigate alloy solidification. Appropriate assumptions are introduced to describe the behavior of macroscopic temperature, macroscopic concentration, liquid volume fraction and microstructure features. These assumptions lead to a macroscale model with two unknown functions: liquid volume fraction and microstructure features. These functions are computed using information from microscale solutions of selected problems. This work addresses the selection of sample problems relevant to the interested problem and the utilization of data from the microscale solution of the selected sample problems. A computationally efficient model, which is different from the microscale and macroscale models, is utilized to find relevant sample problems. In this work, the computationally efficient model is a sharp interface solidification model of a pure material. Similarities between the sample problems and the problem of interest are explored by assuming that the liquid volume fraction and microstructure features are functions of solution features extracted from the solution of the computationally efficient model. The solution features of the computationally efficient model are selected as the interface velocity and thermal gradient in the liquid at the time the sharp solid-liquid interface passes through. An analytical solution of the computationally efficient model is utilized to select sample problems relevant to solution features obtained at any location of the domain of the problem of interest. The microscale solution of selected sample problems is then utilized to evaluate the two unknown functions (liquid volume fraction and microstructure features) in the macroscale model. The temperature solution of the macroscale model is further used to improve the estimation of the liquid volume fraction and microstructure features. Interpolation is utilized in the feature space to greatly reduce the number of required

  12. An effective approach for annotation of protein families with low sequence similarity and conserved motifs: identifying GDSL hydrolases across the plant kingdom.

    Science.gov (United States)

    Vujaklija, Ivan; Bielen, Ana; Paradžik, Tina; Biđin, Siniša; Goldstein, Pavle; Vujaklija, Dušica

    2016-02-18

    The massive accumulation of protein sequences arising from the rapid development of high-throughput sequencing, coupled with automatic annotation, results in high levels of incorrect annotations. In this study, we describe an approach to decrease annotation errors of protein families characterized by low overall sequence similarity. The GDSL lipolytic family comprises proteins with multifunctional properties and high potential for pharmaceutical and industrial applications. The number of proteins assigned to this family has increased rapidly over the last few years. In particular, the natural abundance of GDSL enzymes reported recently in plants indicates that they could be a good source of novel GDSL enzymes. We noticed that a significant proportion of annotated sequences lack specific GDSL motif(s) or catalytic residue(s). Here, we applied motif-based sequence analyses to identify enzymes possessing conserved GDSL motifs in selected proteomes across the plant kingdom. Motif-based HMM scanning (Viterbi decoding-VD and posterior decoding-PD) and the here described PD/VD protocol were successfully applied on 12 selected plant proteomes to identify sequences with GDSL motifs. A significant number of identified GDSL sequences were novel. Moreover, our scanning approach successfully detected protein sequences lacking at least one of the essential motifs (171/820) annotated by Pfam profile search (PfamA) as GDSL. Based on these analyses we provide a curated list of GDSL enzymes from the selected plants. CLANS clustering and phylogenetic analysis helped us to gain a better insight into the evolutionary relationship of all identified GDSL sequences. Three novel GDSL subfamilies as well as unreported variations in GDSL motifs were discovered in this study. In addition, analyses of selected proteomes showed a remarkable expansion of GDSL enzymes in the lycophyte, Selaginella moellendorffii. Finally, we provide a general motif-HMM scanner which is easily accessible through
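
    A plain-Python illustration of motif-based screening of protein sequences is sketched below. This is not the authors' HMM-based PD/VD protocol: a single regular expression stands in for the profile HMMs, and the GDSL-like pattern is a simplified placeholder, not the curated family definition; the FASTA path is hypothetical.

    ```python
    # Regex-based motif screening of a FASTA file (simplified stand-in for
    # profile-HMM scanning).
    import re
    from typing import Dict

    GDSL_LIKE = re.compile(r"G[AD]S[LIVM]")   # placeholder motif around the catalytic Ser

    def read_fasta(path: str) -> Dict[str, str]:
        seqs, name = {}, None
        with open(path) as handle:
            for line in handle:
                line = line.strip()
                if line.startswith(">"):
                    name = line[1:].split()[0]
                    seqs[name] = ""
                elif name:
                    seqs[name] += line
        return seqs

    def scan(path: str) -> None:
        for name, seq in read_fasta(path).items():
            hit = GDSL_LIKE.search(seq)
            if hit:
                print(f"{name}\tmotif at {hit.start() + 1}\t{hit.group()}")

    # scan("proteome.fasta")   # path is hypothetical
    ```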

  13. Mechanical Diagnosis and Therapy has similar effects on pain and disability as ‘wait and see’ and other approaches in people with neck pain: a systematic review

    Directory of Open Access Journals (Sweden)

    Hiroshi Takasaki

    2014-06-01

    Full Text Available Questions: In people with neck pain, does Mechanical Diagnosis and Therapy (MDT) reduce pain and disability more than ‘wait and see’? Does MDT reduce pain and disability more than other interventions? Are any differences in effect clinically important? Design: Systematic review of randomised trials with meta-analysis. Participants: People with neck pain. Intervention: MDT. Outcome measures: Pain intensity and disability due to neck pain in the short (< 3 months), intermediate (< 1 year) and long (≥ 1 year) term. Results: Five trials were included. Most comparisons demonstrated mean differences in effect that favoured MDT over wait-and-see controls or other interventions, although most were statistically non-significant. For pain, all comparisons had a 95% confidence interval (CI) with lower limits that were less than 20 on a scale of 0 to 100, which suggests that the difference may not be clinically important. For disability, even the upper limits of the 95% CI were below this threshold, confirming that the differences are not clinically important. In all of the trials, some or all of the treating therapists did not have the highest level of MDT training. Conclusion: The additional benefit of MDT compared with the wait-and-see approach or other therapeutic approaches may not be clinically important in terms of pain intensity and is not clinically important in terms of disability. However, these estimates of the effect of MDT may reflect suboptimal training of the treating therapists. Further research could improve the precision of the estimates and assess whether the extent of training in MDT influences its effect. [Takasaki H, May S (2014) Mechanical Diagnosis and Therapy has similar effects on pain and disability as ‘wait and see’ and other approaches in people with neck pain: a systematic review. Journal of Physiotherapy 60: 78–84].

  14. Vector-model-supported approach in prostate plan optimization

    International Nuclear Information System (INIS)

    Liu, Eva Sau Fan; Wu, Vincent Wing Cheung; Harris, Benjamin; Lehman, Margot; Pryor, David; Chan, Lawrence Wing Chi

    2017-01-01

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base, retrieving similar radiotherapy cases, was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iteration with vector-model-supported optimization by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration
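
    The retrieval step described above can be sketched as follows: each previously treated case is summarised as a feature vector and the reference case most similar to the test case is found by cosine similarity. The features and values below are invented for illustration, not the structural and physiologic features extracted from the DICOM files in the study; in practice the features would first be normalised.

    ```python
    # Retrieve the most similar reference plan by cosine similarity over
    # (hypothetical) case feature vectors.
    import numpy as np

    reference_cases = {                      # case id -> feature vector
        "case_001": np.array([62.1, 48.3, 0.35, 180.0]),
        "case_002": np.array([55.4, 51.0, 0.28, 150.0]),
        "case_003": np.array([70.8, 44.9, 0.41, 200.0]),
    }

    def cosine(u: np.ndarray, v: np.ndarray) -> float:
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def most_similar(test_features: np.ndarray) -> str:
        """Return the id of the reference case whose planning parameters
        would seed the optimisation of the test case."""
        return max(reference_cases,
                   key=lambda cid: cosine(reference_cases[cid], test_features))

    print(most_similar(np.array([60.0, 49.0, 0.33, 175.0])))
    ```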

  15. Vector-model-supported approach in prostate plan optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Eva Sau Fan [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Wu, Vincent Wing Cheung [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Harris, Benjamin [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Lehman, Margot; Pryor, David [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); School of Medicine, University of Queensland (Australia); Chan, Lawrence Wing Chi, E-mail: wing.chi.chan@polyu.edu.hk [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong)

    2017-07-01

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base, retrieving similar radiotherapy cases, was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iteration with vector-model-supported optimization by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration

  16. An integrated approach to permeability modeling using micro-models

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, A.H.; Leuangthong, O.; Deutsch, C.V. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Alberta Univ., Edmonton, AB (Canada)

    2008-10-15

    An important factor in predicting the performance of steam assisted gravity drainage (SAGD) well pairs is the spatial distribution of permeability. Complications that make the inference of a reliable porosity-permeability relationship impossible include the presence of short-scale variability in sand/shale sequences; preferential sampling of core data; and uncertainty in upscaling parameters. Micro-modelling is a simple and effective method for overcoming these complications. This paper proposed a micro-modeling approach to account for sampling bias, small laminated features with high permeability contrast, and uncertainty in upscaling parameters. The paper described the steps and challenges of micro-modeling and discussed the construction of binary mixture geo-blocks; flow simulation and upscaling; extended power law formalism (EPLF); and the application of micro-modeling and EPLF. An extended power-law formalism to account for changes in clean sand permeability as a function of macroscopic shale content was also proposed and tested against flow simulation results. There was close agreement between the model and simulation results. The proposed methodology was also applied to build the porosity-permeability relationship for laminated and brecciated facies of McMurray oil sands. The results were in good agreement with the experimental data. 8 refs., 17 figs.

  17. Risk communication: a mental models approach

    National Research Council Canada - National Science Library

    Morgan, M. Granger (Millett Granger)

    2002-01-01

    ... information about risks. The procedure uses approaches from risk and decision analysis to identify the most relevant information; it also uses approaches from psychology and communication theory to ensure that its message is understood. This book is written in nontechnical terms, designed to make the approach feasible for anyone willing to try it. It is illustrat...

  18. A Multi-Model Approach for System Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad; Bækgaard, Mikkel Ask Buur

    2007-01-01

    A multi-model approach for system diagnosis is presented in this paper. The relation with fault diagnosis as well as performance validation is considered. The approach is based on testing a number of pre-described models and finding which one is the best. It is an active approach, i.e. an auxiliary input to the system is applied. The multi-model approach is applied to a wind turbine system.

  19. A Markovian approach for modeling packet traffic with long range dependence

    DEFF Research Database (Denmark)

    Andersen, Allan T.; Nielsen, Bo Friis

    1998-01-01

    -state Markov modulated Poisson processes (MMPPs). We illustrate that a superposition of four two-state MMPPs suffices to model second-order self-similar behavior over several time scales. Our modeling approach allows us to fit to additional descriptors while maintaining the second-order behavior...
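
    The construction mentioned above, a superposition of four two-state MMPPs covering several time scales, can be sketched as follows; the switching and arrival rates are invented, purely for illustration, not fitted to traffic descriptors as in the paper.

    ```python
    # Simulate a two-state MMPP and superpose four independent copies
    # with switching rates spread over several time scales.
    import numpy as np

    rng = np.random.default_rng(42)

    def mmpp_two_state(t_end, r_01, r_10, lam0, lam1):
        """Arrival times of a two-state MMPP on [0, t_end]."""
        t, state, arrivals = 0.0, 0, []
        while t < t_end:
            rate_out = r_01 if state == 0 else r_10
            dwell = rng.exponential(1.0 / rate_out)        # time until the next state switch
            lam = lam0 if state == 0 else lam1
            span = min(dwell, t_end - t)
            n = rng.poisson(lam * span)                    # Poisson arrivals during the dwell
            arrivals.extend(t + rng.uniform(0.0, span, size=n))
            t += dwell
            state = 1 - state
        return np.sort(np.array(arrivals))

    params = [(0.5, 0.5, 5.0, 50.0), (0.05, 0.05, 5.0, 50.0),
              (0.005, 0.005, 5.0, 50.0), (0.0005, 0.0005, 5.0, 50.0)]
    stream = np.sort(np.concatenate([mmpp_two_state(1000.0, *p) for p in params]))
    print("total arrivals:", stream.size)
    ```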

  20. Graphene growth process modeling: a physical-statistical approach

    Science.gov (United States)

    Wu, Jian; Huang, Qiang

    2014-09-01

    As a zero-bandgap semiconductor, graphene is an attractive material for a wide variety of applications such as optoelectronics. Among various techniques developed for graphene synthesis, chemical vapor deposition on copper foils shows high potential for producing few-layer and large-area graphene. Since fabrication of high-quality graphene sheets requires the understanding of growth mechanisms, and methods of characterization and control of grain size of graphene flakes, analytical modeling of the graphene growth process is essential for controlled fabrication. The graphene growth process starts with randomly nucleated islands that gradually develop into complex shapes, grow in size, and eventually connect together to cover the copper foil. To model this complex process, we develop a physical-statistical approach under the assumption of self-similarity during graphene growth. The growth kinetics is uncovered by separating island shapes from the area growth rate. We propose to characterize the area growth velocity using a confined exponential model, which not only has a clear physical explanation, but also fits the real data well. For the shape modeling, we develop a parametric shape model which can be well explained by the angular-dependent growth rate. This work can provide useful information for the control and optimization of the graphene growth process on Cu foil.
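
    One common reading of a "confined exponential" growth law for the covered area A(t) is given below: growth slows as the remaining free copper surface shrinks. The authors' exact parameterisation may differ; this is only the generic form.

    ```latex
    % Generic confined exponential growth of covered area A(t)
    \[
      \frac{dA}{dt} = k \left( A_{\max} - A \right)
      \quad\Longrightarrow\quad
      A(t) = A_{\max}\left(1 - e^{-k t}\right).
    \]
    ```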

  1. A numerical similarity approach for using retired Current Procedural Terminology (CPT) codes for electronic phenotyping in the Scalable Collaborative Infrastructure for a Learning Health System (SCILHS).

    Science.gov (United States)

    Klann, Jeffrey G; Phillips, Lori C; Turchin, Alexander; Weiler, Sarah; Mandl, Kenneth D; Murphy, Shawn N

    2015-12-11

    Interoperable phenotyping algorithms, needed to identify patient cohorts meeting eligibility criteria for observational studies or clinical trials, require medical data in a consistent structured, coded format. Data heterogeneity limits such algorithms' applicability. Existing approaches are often: not widely interoperable; or, have low sensitivity due to reliance on the lowest common denominator (ICD-9 diagnoses). In the Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS) we endeavor to use the widely-available Current Procedural Terminology (CPT) procedure codes with ICD-9. Unfortunately, CPT changes drastically year-to-year - codes are retired/replaced. Longitudinal analysis requires grouping retired and current codes. BioPortal provides a navigable CPT hierarchy, which we imported into the Informatics for Integrating Biology and the Bedside (i2b2) data warehouse and analytics platform. However, this hierarchy does not include retired codes. We compared BioPortal's 2014AA CPT hierarchy with Partners Healthcare's SCILHS datamart, comprising three-million patients' data over 15 years. 573 CPT codes were not present in 2014AA (6.5 million occurrences). No existing terminology provided hierarchical linkages for these missing codes, so we developed a method that automatically places missing codes in the most specific "grouper" category, using the numerical similarity of CPT codes. Two informaticians reviewed the results. We incorporated the final table into our i2b2 SCILHS/PCORnet ontology, deployed it at seven sites, and performed a gap analysis and an evaluation against several phenotyping algorithms. The reviewers found the method placed the code correctly with 97 % precision when considering only miscategorizations ("correctness precision") and 52 % precision using a gold-standard of optimal placement ("optimality precision"). High correctness precision meant that codes were placed in a reasonable hierarchal position that a reviewer
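
    The numerical-similarity idea can be sketched as follows: a retired code is assigned to the grouper whose numeric code range is closest. The grouper table and the retired codes below are invented, and the published method may differ in detail; this is only meant to illustrate the principle.

    ```python
    # Minimal sketch of numeric-similarity placement of retired procedure codes.
    # Groupers and codes are invented for illustration; the published method may differ.

    # Hypothetical "grouper" categories, each covering a numeric code range.
    GROUPERS = {
        "Surgery - musculoskeletal": (20000, 29999),
        "Radiology - diagnostic":    (70000, 76499),
        "Medicine - cardiovascular": (92950, 93799),
    }

    def place_retired_code(code: int) -> str:
        """Assign a retired numeric code to the grouper whose range is numerically closest."""
        def distance(bounds):
            lo, hi = bounds
            if lo <= code <= hi:
                return 0                       # falls inside the range
            return min(abs(code - lo), abs(code - hi))
        return min(GROUPERS, key=lambda name: distance(GROUPERS[name]))

    if __name__ == "__main__":
        for retired in (21116, 76003, 92970):  # hypothetical retired codes
            print(retired, "->", place_retired_code(retired))
    ```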

  2. [New approaches in pharmacology: numerical modelling and simulation].

    Science.gov (United States)

    Boissel, Jean-Pierre; Cucherat, Michel; Nony, Patrice; Dronne, Marie-Aimée; Kassaï, Behrouz; Chabaud, Sylvie

    2005-01-01

    The complexity of pathophysiological mechanisms is beyond the capabilities of traditional approaches. Many of the decision-making problems in public health, such as initiating mass screening, are complex. Progress in genomics and proteomics, and the resulting extraordinary increase in knowledge with regard to interactions between gene expression, the environment and behaviour, the customisation of risk factors and the need to combine therapies that individually have minimal though well documented efficacy, has led doctors to raise new questions: how to optimise choice and the application of therapeutic strategies at the individual rather than the group level, while taking into account all the available evidence? This is essentially a problem of complexity with dimensions similar to the previous ones: multiple parameters with nonlinear relationships between them, varying time scales that cannot be ignored etc. Numerical modelling and simulation (in silico investigations) have the potential to meet these challenges. Such approaches are considered in drug innovation and development. They require a multidisciplinary approach, and this will involve modification of the way research in pharmacology is conducted.

  3. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality

    NARCIS (Netherlands)

    L. Waltman (Ludo)

    2011-01-01

    This thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic

  4. A study of the predictive model on the user reaction time using the information amount and its similarity

    International Nuclear Information System (INIS)

    Lee, Sung Jin; Heo, Gyun Young; Chang, Soon Heung

    2004-01-01

    Many studies have addressed user interface evaluation, and recent work focuses on the contextual information of the user interface. It is known that user reaction time increases as the amount of information increases, but the relation between contextual information and user reaction time is less well understood. In this study, we proposed similarity as one form of contextual information and expected that greater similarity would decrease the user reaction time. The goal of this study was to find a correlation between the user reaction time and both the information amount and the similarity. The experiment was performed with 20 participants, and its results supported our proposals
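
    The correlation the study looks for can be pictured with an ordinary least-squares sketch in which reaction time increases with information amount and decreases with similarity. The data and coefficients below are synthetic and purely illustrative.

    ```python
    # Sketch: least-squares fit of reaction time against information amount and similarity.
    # The data are synthetic and only illustrate the kind of correlation the study looks for.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    info_amount = rng.uniform(1.0, 10.0, n)      # e.g. number of displayed items
    similarity = rng.uniform(0.0, 1.0, n)        # contextual similarity score in [0, 1]
    # Assumed "true" relation: RT grows with information, shrinks with similarity.
    reaction_time = 0.4 + 0.12 * info_amount - 0.3 * similarity + rng.normal(0.0, 0.05, n)

    X = np.column_stack([np.ones(n), info_amount, similarity])
    coef, *_ = np.linalg.lstsq(X, reaction_time, rcond=None)
    print(f"intercept={coef[0]:.3f}, info effect={coef[1]:.3f}, similarity effect={coef[2]:.3f}")
    ```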

  5. Modeling healthcare authorization and claim submissions using the openEHR dual-model approach

    Science.gov (United States)

    2011-01-01

    Background The TISS standard is a set of mandatory forms and electronic messages for healthcare authorization and claim submissions among healthcare plans and providers in Brazil. It is not based on formal models as the new generation of health informatics standards suggests. The objective of this paper is to model the TISS in terms of the openEHR archetype-based approach and integrate it into a patient-centered EHR architecture. Methods Three approaches were adopted to model TISS. In the first approach, a set of archetypes was designed using ENTRY subclasses. In the second one, a set of archetypes was designed using exclusively ADMIN_ENTRY and CLUSTERs as their root classes. In the third approach, the openEHR ADMIN_ENTRY is extended with classes designed for authorization and claim submissions, and an ISM_TRANSITION attribute is added to the COMPOSITION class. Another set of archetypes was designed based on this model. For all three approaches, templates were designed to represent the TISS forms. Results The archetypes based on the openEHR RM (Reference Model) can represent all TISS data structures. The extended model adds subclasses and an attribute to the COMPOSITION class to represent information on authorization and claim submissions. The archetypes based on all three approaches have similar structures, although rooted in different classes. The extended openEHR RM model is more semantically aligned with the concepts involved in a claim submission, but may disrupt interoperability with other systems and the current tools must be adapted to deal with it. Conclusions Modeling the TISS standard by means of the openEHR approach makes it aligned with ISO recommendations and provides a solid foundation on which the TISS can evolve. Although there are few administrative archetypes available, the openEHR RM is expressive enough to represent the TISS standard. This paper focuses on the TISS but its results may be extended to other billing processes. A complete

  6. Modeling healthcare authorization and claim submissions using the openEHR dual-model approach

    Directory of Open Access Journals (Sweden)

    Freire Sergio M

    2011-10-01

    Background The TISS standard is a set of mandatory forms and electronic messages for healthcare authorization and claim submissions among healthcare plans and providers in Brazil. It is not based on formal models as the new generation of health informatics standards suggests. The objective of this paper is to model the TISS in terms of the openEHR archetype-based approach and integrate it into a patient-centered EHR architecture. Methods Three approaches were adopted to model TISS. In the first approach, a set of archetypes was designed using ENTRY subclasses. In the second one, a set of archetypes was designed using exclusively ADMIN_ENTRY and CLUSTERs as their root classes. In the third approach, the openEHR ADMIN_ENTRY is extended with classes designed for authorization and claim submissions, and an ISM_TRANSITION attribute is added to the COMPOSITION class. Another set of archetypes was designed based on this model. For all three approaches, templates were designed to represent the TISS forms. Results The archetypes based on the openEHR RM (Reference Model) can represent all TISS data structures. The extended model adds subclasses and an attribute to the COMPOSITION class to represent information on authorization and claim submissions. The archetypes based on all three approaches have similar structures, although rooted in different classes. The extended openEHR RM model is more semantically aligned with the concepts involved in a claim submission, but may disrupt interoperability with other systems and the current tools must be adapted to deal with it. Conclusions Modeling the TISS standard by means of the openEHR approach makes it aligned with ISO recommendations and provides a solid foundation on which the TISS can evolve. Although there are few administrative archetypes available, the openEHR RM is expressive enough to represent the TISS standard. This paper focuses on the TISS but its results may be extended to other billing

  7. Numerical modelling of diesel spray using the Eulerian multiphase approach

    International Nuclear Information System (INIS)

    Vujanović, Milan; Petranović, Zvonimir; Edelbauer, Wilfried; Baleta, Jakov; Duić, Neven

    2015-01-01

    Highlights: • Numerical model for fuel disintegration was presented. • Fuel liquid and vapour were calculated. • Good agreement with experimental data was shown for various combinations of injection and chamber pressure. - Abstract: This research investigates high pressure diesel fuel injection into the combustion chamber by performing computational simulations using the Euler–Eulerian multiphase approach. Six diesel-like conditions were simulated for which the liquid fuel jet was injected into a pressurised inert environment (100% N2) through a 205 μm nozzle hole. The analysis was focused on the liquid jet and vapour penetration, describing spatial and temporal spray evolution. For this purpose, an Eulerian multiphase model was implemented, variations of the sub-model coefficients were performed, and their impact on the spray formation was investigated. The final set of sub-model coefficients was applied to all operating points. Several simulations of high pressure diesel injections (50, 80, and 120 MPa) combined with different chamber pressures (5.4 and 7.2 MPa) were carried out and results were compared to the experimental data. The predicted results share a similar spray cloud shape for all conditions with the different vapour and liquid penetration length. The liquid penetration is shortened with the increase in chamber pressure, whilst the vapour penetration is more pronounced by elevating the injection pressure. Finally, the results showed good agreement when compared to the measured data, and yielded the correct trends for both the liquid and vapour penetrations under different operating conditions.

  8. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR⎪HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  9. A Discrete Monetary Economic Growth Model with the MIU Approach

    Directory of Open Access Journals (Sweden)

    Wei-Bin Zhang

    2008-01-01

    This paper proposes an alternative approach to economic growth with money. The production side is the same as the Solow model, the Ramsey model, and the Tobin model. But we deal with the behavior of consumers differently from the traditional approaches. The model is influenced by the money-in-the-utility (MIU) approach in monetary economics. It provides a mechanism of endogenous saving which the Solow model lacks and avoids the assumption of adding up utility over a period of time upon which the Ramsey approach is based.

  10. Mathematical Modelling Approach in Mathematics Education

    Science.gov (United States)

    Arseven, Ayla

    2015-01-01

    The topic of models and modeling has come to be important for science and mathematics education in recent years. The topic of "Modeling" is especially important for examinations such as PISA, which is conducted at an international level and measures a student's success in mathematics. Mathematical modeling can be defined as using…

  11. A modeling approach for compounds affecting body composition.

    Science.gov (United States)

    Gennemark, Peter; Jansson-Löfmark, Rasmus; Hyberg, Gina; Wigstrand, Maria; Kakol-Palm, Dorota; Håkansson, Pernilla; Hovdal, Daniel; Brodin, Peter; Fritsch-Fredin, Maria; Antonsson, Madeleine; Ploj, Karolina; Gabrielsson, Johan

    2013-12-01

    Body composition and body mass are pivotal clinical endpoints in studies of welfare diseases. We present a combined effort of established and new mathematical models based on rigorous monitoring of energy intake (EI) and body mass in mice. Specifically, we parameterize a mechanistic turnover model based on the law of energy conservation coupled to a drug mechanism model. Key model variables are fat-free mass (FFM) and fat mass (FM), governed by EI and energy expenditure (EE). An empirical Forbes curve relating FFM to FM was derived experimentally for female C57BL/6 mice. The Forbes curve differs from a previously reported curve for male C57BL/6 mice, and we thoroughly analyse how the choice of Forbes curve impacts model predictions. The drug mechanism function acts on EI or EE, or both. Drug mechanism parameters (two to three parameters) and system parameters (up to six free parameters) could be estimated with good precision (coefficients of variation typically mass and FM changes at different drug provocations using a similar model for man. Surprisingly, model simulations indicate that an increase in EI (e.g. 10 %) was more efficient than an equal lowering of EI. Also, the relative change in body mass and FM is greater in man than in mouse at the same relative change in either EI or EE. We acknowledge that this assumes the same drug mechanism impact across the two species. A set of recommendations regarding the Forbes curve, vehicle control groups, dual action on EI and loss, and translational aspects are discussed. This quantitative approach significantly improves data interpretation, disease system understanding, safety assessment and translation across species.
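
    The turnover model described above can be sketched as an energy-balance update in which a Forbes-type relation FFM = a + b·ln(FM) dictates how an energy imbalance (EI − EE) is partitioned between fat-free mass and fat mass. All constants below are illustrative, and the drug effect is reduced to a fixed fractional change in energy intake; none of these values come from the paper.

    ```python
    # Energy-balance sketch: EI - EE is partitioned between fat mass (FM) and fat-free mass (FFM)
    # through a Forbes-type relation FFM = a + b*ln(FM).  All constants are illustrative only;
    # the drug effect is modelled crudely as a fractional change in energy intake.
    import math

    RHO_FM, RHO_FFM = 39.0, 7.6      # energy densities (kJ/g), illustrative
    A_FORBES, B_FORBES = 14.0, 3.0   # FFM = A + B*ln(FM), illustrative Forbes-type constants

    def simulate(days, fm0, ei_kj_per_day, ee_kj_per_day, drug_ei_factor=1.0, dt=0.1):
        fm = fm0
        for _ in range(int(days / dt)):
            ei = ei_kj_per_day * drug_ei_factor
            imbalance = ei - ee_kj_per_day                    # kJ/day
            dffm_dfm = B_FORBES / fm                          # slope of the Forbes curve
            dfm_dt = imbalance / (RHO_FM + RHO_FFM * dffm_dfm)
            fm = max(fm + dfm_dt * dt, 1e-6)
        ffm = A_FORBES + B_FORBES * math.log(fm)
        return fm, ffm, fm + ffm

    fm, ffm, bw = simulate(days=28, fm0=3.0, ei_kj_per_day=50.0, ee_kj_per_day=48.0)
    print(f"control: FM={fm:.2f} g, FFM={ffm:.2f} g, BW={bw:.2f} g")
    fm, ffm, bw = simulate(days=28, fm0=3.0, ei_kj_per_day=50.0, ee_kj_per_day=48.0, drug_ei_factor=0.9)
    print(f"treated: FM={fm:.2f} g, FFM={ffm:.2f} g, BW={bw:.2f} g")
    ```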

  12. A Multivariate Approach to Functional Neuro Modeling

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.

    1998-01-01

    by the application of linear and more flexible, nonlinear microscopic regression models to a real-world dataset. The dependency of model performance, as quantified by generalization error, on model flexibility and training set size is demonstrated, leading to the important realization that no uniformly optimal model......, provides the basis for a generalization theoretical framework relating model performance to model complexity and dataset size. Briefly summarized the major topics discussed in the thesis include: - An introduction of the representation of functional datasets by pairs of neuronal activity patterns...... exists. - Model visualization and interpretation techniques. The simplicity of this task for linear models contrasts the difficulties involved when dealing with nonlinear models. Finally, a visualization technique for nonlinear models is proposed. A single observation emerges from the thesis...

  13. Rival approaches to mathematical modelling in immunology

    Science.gov (United States)

    Andrew, Sarah M.; Baker, Christopher T. H.; Bocharov, Gennady A.

    2007-08-01

    In order to formulate quantitatively correct mathematical models of the immune system, one requires an understanding of immune processes and familiarity with a range of mathematical techniques. Selection of an appropriate model requires a number of decisions to be made, including a choice of the modelling objectives, strategies and techniques and the types of model considered as candidate models. The authors adopt a multidisciplinary perspective.

  14. A hybrid agent-based approach for modeling microbiological systems.

    Science.gov (United States)

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on Multi-Agent approach often use directly translated, and quantitatively less precise if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10^3 cells and 1.2×10^6 molecules. The model produces cell migration patterns that are comparable to laboratory observations.
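
    The hybrid representation (cells as discrete agents, molecules as continuous quantities) can be illustrated with a toy 1-D sketch: agents move with a simple biased rule toward higher attractant, while the attractant field diffuses, decays and is consumed via explicit finite differences. This is only an illustration of the representational split, not the authors' chemotaxis model.

    ```python
    # Toy hybrid model: cells are agents on a 1-D grid, the chemoattractant is a continuous
    # field updated by explicit finite differences (diffusion + decay).  Illustration only.
    import numpy as np

    rng = np.random.default_rng(42)
    n_cells, n_bins, steps = 200, 100, 500
    dt, diffusion, decay = 0.1, 0.5, 0.01

    attractant = np.linspace(0.0, 1.0, n_bins)          # initial gradient (continuous quantity)
    cells = rng.integers(0, n_bins, size=n_cells)       # agent positions (discrete individuals)

    for _ in range(steps):
        # Agent rule: step toward the neighbouring bin with more attractant, with some noise.
        left = attractant[np.clip(cells - 1, 0, n_bins - 1)]
        right = attractant[np.clip(cells + 1, 0, n_bins - 1)]
        bias = np.where(right > left, 1, -1)
        noise = rng.choice([-1, 0, 1], size=n_cells, p=[0.2, 0.6, 0.2])
        cells = np.clip(cells + np.where(noise == 0, bias, noise), 0, n_bins - 1)
        # Field rule: explicit diffusion + first-order decay, with cells consuming attractant.
        lap = np.roll(attractant, 1) + np.roll(attractant, -1) - 2.0 * attractant
        consumption = 0.001 * np.bincount(cells, minlength=n_bins)
        attractant = np.clip(attractant + dt * (diffusion * lap - decay * attractant) - consumption, 0.0, None)

    print("mean cell position after run:", cells.mean(), "of", n_bins - 1)
    ```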

  15. MRI of Mouse Models for Gliomas Shows Similarities to Humans and Can Be Used to Identify Mice for Preclinical Trials

    Directory of Open Access Journals (Sweden)

    Jason A. Koutcher

    2002-01-01

    Magnetic resonance imaging (MRI) has been utilized for screening and detecting brain tumors in mice based upon their imaging characteristics, appearance and their pattern of enhancement. Imaging of these tumors reveals many similarities to those observed in humans with identical pathology. Specifically, high-grade murine gliomas have histologic characteristics of glioblastoma multiforme (GBM) with contrast enhancement after intravenous administration of gadolinium diethylenetriamine pentaacetic acid (Gd-DTPA), implying disruption of the blood-brain barrier in these tumors. In contrast, low-grade murine oligodendrogliomas do not reveal contrast enhancement, similar to human tumors. MRI can be used to identify mice with brain neoplasms as inclusion criteria in preclinical trials.

  16. Matter of Similarity and Dissimilarity in Multi-Ethnic Society: A Model of Dyadic Cultural Norms Congruence

    OpenAIRE

    Abu Bakar Hassan; Mohamad Bahtiar

    2017-01-01

    Taking into consideration the diverse cultural norms in the Malaysian workplace, the proposed model explores Malaysian culture and identity against a backdrop of the pushes and pulls of ethnic diversity in Malaysia. The model seeks to understand relational norm congruence in a multiethnic Malaysia, which will enable us to identify Malaysian culture and identity. This is in line with recent calls by various interest groups in Malaysia to focus more on model designs that capture contextual an...

  17. Another cat and mouse game: Deciphering the evolution of the SCGB superfamily and exploring the molecular similarity of major cat allergen Fel d 1 and mouse ABP using computational approaches

    Science.gov (United States)

    Pageat, Patrick; Bienboire-Frosini, Cécile

    2018-01-01

    The mammalian secretoglobin (SCGB) superfamily contains functionally diverse members, among which the major cat allergen Fel d 1 and mouse salivary androgen-binding protein (ABP) display similar subunits. We searched for molecular similarities between Fel d 1 and ABP to examine the possibility that they play similar roles. We aimed to i) cluster the evolutionary relationships of the SCGB superfamily; ii) identify divergence patterns, structural overlap, and protein-protein docking between Fel d 1 and ABP dimers; and iii) explore the residual interaction between ABP dimers and steroid binding in chemical communication using computational approaches. We also report that the evolutionary tree of the SCGB superfamily comprises seven unique palm-like clusters, showing the evolutionary pattern and divergence time tree of Fel d 1 with 28 ABP paralogs. Three ABP subunits (A27, BG27, and BG26) share phylogenetic relationships with Fel d 1 chains. The Fel d 1 and ABP subunits show similarities in terms of sequence conservation, identical motifs and binding site clefts. Topologically equivalent positions were visualized through superimposition of ABP A27:BG27 (AB) and ABP A27:BG26 (AG) dimers on a heterodimeric Fel d 1 model. In docking, Fel d 1-ABP dimers exhibit the maximum surface binding ability of AG compared with that of AB dimers and the several polar interactions between ABP dimers with steroids. Hence, cat Fel d 1 is an ABP-like molecule in which monomeric chains 1 and 2 are the equivalent of the ABPA and ABPBG monomers, respectively. These findings suggest that the biological and molecular function of Fel d 1 is similar to that of ABP in chemical communication, possibly via pheromone and/or steroid binding. PMID:29771985

  18. Another cat and mouse game: Deciphering the evolution of the SCGB superfamily and exploring the molecular similarity of major cat allergen Fel d 1 and mouse ABP using computational approaches.

    Science.gov (United States)

    Durairaj, Rajesh; Pageat, Patrick; Bienboire-Frosini, Cécile

    2018-01-01

    The mammalian secretoglobin (SCGB) superfamily contains functionally diverse members, among which the major cat allergen Fel d 1 and mouse salivary androgen-binding protein (ABP) display similar subunits. We searched for molecular similarities between Fel d 1 and ABP to examine the possibility that they play similar roles. We aimed to i) cluster the evolutionary relationships of the SCGB superfamily; ii) identify divergence patterns, structural overlap, and protein-protein docking between Fel d 1 and ABP dimers; and iii) explore the residual interaction between ABP dimers and steroid binding in chemical communication using computational approaches. We also report that the evolutionary tree of the SCGB superfamily comprises seven unique palm-like clusters, showing the evolutionary pattern and divergence time tree of Fel d 1 with 28 ABP paralogs. Three ABP subunits (A27, BG27, and BG26) share phylogenetic relationships with Fel d 1 chains. The Fel d 1 and ABP subunits show similarities in terms of sequence conservation, identical motifs and binding site clefts. Topologically equivalent positions were visualized through superimposition of ABP A27:BG27 (AB) and ABP A27:BG26 (AG) dimers on a heterodimeric Fel d 1 model. In docking, Fel d 1-ABP dimers exhibit the maximum surface binding ability of AG compared with that of AB dimers and the several polar interactions between ABP dimers with steroids. Hence, cat Fel d 1 is an ABP-like molecule in which monomeric chains 1 and 2 are the equivalent of the ABPA and ABPBG monomers, respectively. These findings suggest that the biological and molecular function of Fel d 1 is similar to that of ABP in chemical communication, possibly via pheromone and/or steroid binding.

  19. Numerical modelling approach for mine backfill

    Indian Academy of Sciences (India)

    Muhammad Zaka Emad

    2017-07-24

    Jul 24, 2017 ... conditions. This paper discusses a numerical modelling strategy for modelling mine backfill material. The .... placed in an ore pass that leads the ore to the ore bin and crusher, from ... 1 year, depending on the mine plan.

  20. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in the current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: modeling establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  1. OILMAP: A global approach to spill modeling

    International Nuclear Information System (INIS)

    Spaulding, M.L.; Howlett, E.; Anderson, E.; Jayko, K.

    1992-01-01

    OILMAP is an oil spill model system suitable for use in both rapid response mode and long-range contingency planning. It was developed for a personal computer and employs full-color graphics to enter data, set up spill scenarios, and view model predictions. The major components of OILMAP include environmental data entry and viewing capabilities, the oil spill models, and model prediction display capabilities. Graphic routines are provided for entering wind data, currents, and any type of geographically referenced data. Several modes of the spill model are available. The surface trajectory mode is intended for quick spill response. The weathering model includes the spreading, evaporation, entrainment, emulsification, and shoreline interaction of oil. The stochastic and receptor models simulate a large number of trajectories from a single site for generating probability statistics. Each model and the algorithms they use are described. Several additional capabilities are planned for OILMAP, including simulation of tactical spill response and subsurface oil transport. 8 refs

  2. Relaxed memory models: an operational approach

    OpenAIRE

    Boudol , Gérard; Petri , Gustavo

    2009-01-01

    International audience; Memory models define an interface between programs written in some language and their implementation, determining which behaviour the memory (and thus a program) is allowed to have in a given model. A minimal guarantee memory models should provide to the programmer is that well-synchronized, that is, data-race free code has a standard semantics. Traditionally, memory models are defined axiomatically, setting constraints on the order in which memory operations are allow...

  3. Modeling composting kinetics: A review of approaches

    NARCIS (Netherlands)

    Hamelers, H.V.M.

    2004-01-01

    Composting kinetics modeling is necessary to design and operate composting facilities that comply with strict market demands and tight environmental legislation. Current composting kinetics modeling can be characterized as inductive, i.e. the data are the starting point of the modeling process and

  4. Conformally invariant models: A new approach

    International Nuclear Information System (INIS)

    Fradkin, E.S.; Palchik, M.Ya.; Zaikin, V.N.

    1996-02-01

    A pair of mathematical models of quantum field theory in D dimensions is analyzed, particularly, a model of a charged scalar field defined by two generations of secondary fields in the space of even dimensions D>=4 and a model of a neutral scalar field defined by two generations of secondary fields in two-dimensional space. 6 refs

  5. Predicting future glacial lakes in Austria using different modelling approaches

    Science.gov (United States)

    Otto, Jan-Christoph; Helfricht, Kay; Prasicek, Günther; Buckel, Johannes; Keuschnig, Markus

    2017-04-01

    Glacier retreat is one of the most apparent consequences of temperature rise in the 20th and 21st centuries in the European Alps. In Austria, more than 240 new lakes have formed in glacier forefields since the Little Ice Age. A similar signal is reported from many mountain areas worldwide. Glacial lakes can have important environmental and socio-economic impacts on high mountain systems, including water resource management, sediment delivery, natural hazards, energy production and tourism. Their development significantly modifies the landscape configuration and visual appearance of high mountain areas. Knowledge on the location, number and extent of these future lakes can be used to assess potential impacts on high mountain geo-ecosystems and upland-lowland interactions. Information on new lakes is critical to appraise emerging threats and potentials for society. The recent development of regional ice thickness models and their combination with high resolution glacier surface data allows predicting the topography below current glaciers by subtracting ice thickness from glacier surface. Analyzing these modelled glacier bed surfaces reveals overdeepenings that represent potential locations for future lakes. In order to predict the location of future glacial lakes below recent glaciers in the Austrian Alps we apply different ice thickness models using high resolution terrain data and glacier outlines. The results are compared and validated with ice thickness data from geophysical surveys. Additionally, we run the models on three different glacier extents provided by the Austrian Glacier Inventories from 1969, 1998 and 2006. Results of this historical glacier extent modelling are compared to existing glacier lakes and discussed focusing on geomorphological impacts on lake evolution. We discuss model performance and observed differences in the results in order to assess the approach for a realistic prediction of future lake locations. The presentation delivers
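
    The core of such a workflow, subtracting modelled ice thickness from the glacier surface and flagging closed overdeepenings in the resulting bed, can be sketched with a standard depression-filling trick. The arrays below are synthetic, and morphological reconstruction is just one convenient way to find closed basins.

    ```python
    # Sketch: derive a glacier bed as surface minus ice thickness, then flag overdeepenings
    # (closed basins) by filling depressions with morphological reconstruction.
    # Synthetic arrays stand in for real DEM and ice-thickness rasters.
    import numpy as np
    from skimage.morphology import reconstruction

    ny, nx = 100, 100
    y, x = np.mgrid[0:ny, 0:nx]
    surface = 3000.0 - 2.0 * x                                            # sloping glacier surface (m)
    thickness = 80.0 * np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 400.0)   # thickest in the centre (m)

    bed = surface - thickness                                             # modelled subglacial topography

    # Fill depressions: seed is the bed with its interior raised, then erode down onto the bed.
    seed = bed.copy()
    seed[1:-1, 1:-1] = bed.max()
    filled = reconstruction(seed, bed, method='erosion')

    overdeepening_depth = filled - bed                                    # >0 where a closed basin exists
    lake_cells = overdeepening_depth > 1.0                                # ignore sub-metre artefacts
    print(f"potential lake area: {lake_cells.sum()} cells, max depth {overdeepening_depth.max():.1f} m")
    ```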

  6. Robust Multiscale Modelling Of Two-Phase Steels On Heterogeneous Hardware Infrastructures By Using Statistically Similar Representative Volume Element

    Directory of Open Access Journals (Sweden)

    Rauch Ł.

    2015-09-01

    The coupled finite element multiscale simulations (FE2) require costly numerical procedures in both macro and micro scales. Attempts to improve numerical efficiency are focused mainly on two areas of development, i.e. parallelization/distribution of numerical procedures and simplification of virtual material representation. One of the representatives of both mentioned areas is the idea of Statistically Similar Representative Volume Element (SSRVE). It aims at the reduction of the number of finite elements in micro scale as well as at parallelization of the calculations in micro scale which can be performed without barriers. The simplification of computational domain is realized by transformation of sophisticated images of material microstructure into artificially created simple objects being characterized by similar features as their original equivalents. In existing solutions for two-phase steels SSRVE is created on the basis of the analysis of shape coefficients of hard phase in real microstructure and searching for a representative simple structure with similar shape coefficients. Optimization techniques were used to solve this task. In the present paper local strains and stresses are added to the cost function in optimization. Various forms of the objective function composed of different elements were investigated and used in the optimization procedure for the creation of the final SSRVE. The results are compared as far as the efficiency of the procedure and uniqueness of the solution are considered. The best objective function composed of shape coefficients, as well as of strains and stresses, was proposed. Examples of SSRVEs determined for the investigated two-phase steel using that objective function are demonstrated in the paper. Each step of SSRVE creation is investigated from computational efficiency point of view. The proposition of implementation of the whole computational procedure on modern High Performance Computing (HPC

  7. Reconstructing plateau icefields: Evaluating empirical and modelled approaches

    Science.gov (United States)

    Pearce, Danni; Rea, Brice; Barr, Iestyn

    2013-04-01

    Glacial landforms are widely utilised to reconstruct former glacier geometries with a common aim to estimate the Equilibrium Line Altitudes (ELAs) and from these, infer palaeoclimatic conditions. Such inferences may be studied on a regional scale and used to correlate climatic gradients across large distances (e.g., Europe). In Britain, the traditional approach uses geomorphological mapping with hand contouring to derive the palaeo-ice surface. Recently, ice surface modelling enables an equilibrium profile reconstruction tuned using the geomorphology. Both methods permit derivation of palaeo-climate but no study has compared the two methods for the same ice-mass. This is important because either approach may result in differences in glacier limits, ELAs and palaeo-climate. This research uses both methods to reconstruct a plateau icefield and quantifies the results from a cartographic and geometrical aspect. Detailed geomorphological mapping of the Tweedsmuir Hills in the Southern Uplands, Scotland (c. 320 km2) was conducted to examine the extent of Younger Dryas (YD; 12.9-11.7 cal. ka BP) glaciation. Landform evidence indicates a plateau icefield configuration of two separate ice-masses during the YD covering areas of c. 45 km2 and 25 km2. The interpreted age is supported by new radiocarbon dating of basal stratigraphies and Terrestrial Cosmogenic Nuclide Analysis (TCNA) of in situ boulders. Both techniques produce similar configurations; however, the model results in a coarser resolution requiring further processing if a cartographic map is required. When landforms are absent or fragmentary (e.g., trimlines and lateral moraines), like in many accumulation zones on plateau icefields, the geomorphological approach increasingly relies on extrapolation between lines of evidence and on the individual's perception of how the ice-mass ought to look. In some locations this results in an underestimation of the ice surface compared to the modelled surface most likely due to

  8. Matter of Similarity and Dissimilarity in Multi-Ethnic Society: A Model of Dyadic Cultural Norms Congruence

    Directory of Open Access Journals (Sweden)

    Abu Bakar Hassan

    2017-01-01

    Taking into consideration the diverse cultural norms in the Malaysian workplace, the proposed model explores Malaysian culture and identity against a backdrop of the pushes and pulls of ethnic diversity in Malaysia. The model seeks to understand relational norm congruence in a multiethnic Malaysia, which will enable us to identify Malaysian culture and identity. This is in line with recent calls by various interest groups in Malaysia to focus more on model designs that capture contextual and cultural factors that influence Malaysian culture and identity.

  9. Testing Process Predictions of Models of Risky Choice: A Quantitative Model Comparison Approach

    Directory of Open Access Journals (Sweden)

    Thorsten ePachur

    2013-09-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or nonlinear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter, Gigerenzer, & Hertwig, 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called similarity. In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies.

  10. Testing process predictions of models of risky choice: a quantitative model comparison approach

    Science.gov (United States)

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472
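
    For readers unfamiliar with the heuristic under test, the sketch below encodes the published priority heuristic for simple two-outcome gain gambles: compare minimum gains first, then the probabilities of the minimum gains, then maximum gains, with aspiration levels of one tenth of the maximum gain and 0.1 on the probability scale (the rounding to prominent numbers used in the original formulation is omitted). The way gambles are encoded here is our own.

    ```python
    # Sketch of the priority heuristic for two-outcome gain gambles (Brandstaetter et al., 2006):
    # compare minimum gains, then probabilities of the minimum gains, then maximum gains,
    # using aspiration levels of one tenth of the maximum gain and 0.1 on the probability scale.
    # (The published rule also rounds the aspiration level to a prominent number; omitted here.)

    def priority_heuristic(gamble_a, gamble_b):
        """Each gamble is a list of (outcome, probability) pairs with non-negative outcomes."""
        def features(g):
            outcomes = [o for o, _ in g]
            min_o = min(outcomes)
            p_min = sum(p for o, p in g if o == min_o)
            return min_o, p_min, max(outcomes)

        min_a, pmin_a, max_a = features(gamble_a)
        min_b, pmin_b, max_b = features(gamble_b)
        aspiration = 0.1 * max(max_a, max_b)

        if abs(min_a - min_b) >= aspiration:            # reason 1: minimum gain
            return "A" if min_a > min_b else "B"
        if abs(pmin_a - pmin_b) >= 0.1:                 # reason 2: probability of minimum gain
            return "A" if pmin_a < pmin_b else "B"
        return "A" if max_a > max_b else "B"            # reason 3: maximum gain

    # Example: a sure 2500 vs. a 0.88 chance of 3000 (else 0) -> heuristic picks the sure gamble.
    print(priority_heuristic([(2500, 1.0)], [(3000, 0.88), (0, 0.12)]))  # -> "A"
    ```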

  11. 3D mapping, hydrodynamics and modelling of the freshwater-brine mixing zone in salt flats similar to the Salar de Atacama (Chile)

    Science.gov (United States)

    Marazuela, M. A.; Vázquez-Suñé, E.; Custodio, E.; Palma, T.; García-Gil, A.; Ayora, C.

    2018-06-01

    Salt flat brines are a major source of minerals and especially lithium. Moreover, valuable wetlands with delicate ecologies are also commonly present at the margins of salt flats. Therefore, the efficient and sustainable exploitation of the brines they contain requires detailed knowledge about the hydrogeology of the system. A critical issue is the freshwater-brine mixing zone, which develops as a result of the mass balance between the recharged freshwater and the evaporating brine. The complex processes occurring in salt flats require a three-dimensional (3D) approach to assess the mixing zone geometry. In this study, a 3D map of the mixing zone in a salt flat is presented, using the Salar de Atacama as an example. This mapping procedure is proposed as the basis of computationally efficient three-dimensional numerical models, provided that the hydraulic heads of freshwater and mixed waters are corrected based on their density variations to convert them into brine heads. After this correction, the locations of lagoons and wetlands that are characteristic of the marginal zones of the salt flats coincide with the regional minimum water (brine) heads. The different morphologies of the mixing zone resulting from this 3D mapping have been interpreted using a two-dimensional (2D) flow and transport numerical model of an idealized cross-section of the mixing zone. The result of the model shows a slope of the mixing zone that is similar to that obtained by 3D mapping and lower than in previous models. To explain this geometry, the 2D model was used to evaluate the effects of heterogeneity in the mixing zone geometry. The higher the permeability of the upper aquifer is, the lower the slope and the shallower the mixing zone become. This occurs because most of the freshwater lateral recharge flows through the upper aquifer due to its much higher transmissivity, thus reducing the freshwater head. The presence of a few meters of highly permeable materials in the upper part of
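
    The density correction mentioned above, converting freshwater or mixed-water heads into equivalent brine heads so they can be compared on a single surface, follows from the pressure at the measurement point. The sketch below applies the generic point-water to reference-fluid conversion; the densities and example observations are illustrative and do not reproduce the paper's exact procedure.

    ```python
    # Sketch: convert heads measured in fluids of different density (fresh or mixed water)
    # into equivalent brine heads, so they can be compared on a single potentiometric surface.
    # h_brine = z + (rho_fluid / rho_brine) * (h_fluid - z), with z the measurement elevation.
    # Densities and example values are illustrative.

    RHO_BRINE = 1230.0  # kg/m3, dense brine (illustrative)

    def to_brine_head(head_m, elevation_m, rho_fluid):
        """Equivalent brine head from a point measurement of head in a fluid of density rho_fluid."""
        return elevation_m + (rho_fluid / RHO_BRINE) * (head_m - elevation_m)

    observations = [
        # (measured head m a.s.l., screen elevation m a.s.l., fluid density kg/m3)
        (2303.0, 2290.0, 1000.0),   # freshwater well on the alluvial fan
        (2301.5, 2290.0, 1110.0),   # well in the mixing zone
        (2300.8, 2290.0, 1230.0),   # brine well in the nucleus
    ]

    for h, z, rho in observations:
        print(f"fluid rho={rho:6.0f} kg/m3: raw head {h:7.2f} -> brine head {to_brine_head(h, z, rho):7.2f} m")
    ```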

  12. A systemic approach to modelling of radiobiological effects

    International Nuclear Information System (INIS)

    Obaturov, G.M.

    1988-01-01

    Basic principles of the systemic approach to modelling of the radiobiological effects at different levels of cell organization have been formulated. The methodology is proposed for theoretical modelling of the effects at these levels.

  13. Serpentinization reaction pathways: implications for modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Janecky, D.R.

    1986-01-01

    Experimental seawater-peridotite reaction pathways to form serpentinites at 300 °C, 500 bars, can be accurately modeled using the EQ3/6 codes in conjunction with thermodynamic and kinetic data from the literature and unpublished compilations. These models provide both confirmation of experimental interpretations and more detailed insight into hydrothermal reaction processes within the oceanic crust. The accuracy of these models depends on careful evaluation of the aqueous speciation model, use of mineral compositions that closely reproduce compositions in the experiments, and definition of realistic reactive components in terms of composition, thermodynamic data, and reaction rates.

  14. Consumer preference models: fuzzy theory approach

    Science.gov (United States)

    Turksen, I. B.; Wilson, I. A.

    1993-12-01

    Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
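
    A linguistic variable such as "price" can be represented with triangular fuzzy sets, and a simple preference score obtained by weighting the memberships. The membership parameters, weights and prices below are invented and only illustrate the kind of representation a fuzzy preference model uses.

    ```python
    # Sketch: representing a linguistic variable ("price") with triangular fuzzy sets and
    # scoring a product profile by weighted membership.  All numbers are invented.

    def triangular(x, a, b, c):
        """Triangular membership function with feet at a and c and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    PRICE_TERMS = {           # linguistic terms for price (currency units)
        "cheap":     (0.0, 0.0, 40.0),
        "moderate":  (20.0, 50.0, 80.0),
        "expensive": (60.0, 100.0, 100.0),
    }
    TERM_WEIGHTS = {"cheap": 1.0, "moderate": 0.5, "expensive": 0.0}   # consumer's stated preference

    def preference_score(price):
        memberships = {term: triangular(price, *abc) for term, abc in PRICE_TERMS.items()}
        total = sum(memberships.values())
        if total == 0.0:
            return 0.0
        return sum(TERM_WEIGHTS[t] * m for t, m in memberships.items()) / total

    for price in (25.0, 50.0, 85.0):
        print(f"price {price:5.1f} -> preference {preference_score(price):.2f}")
    ```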

  15. A visual approach for modeling spatiotemporal relations

    NARCIS (Netherlands)

    R.L. Guimarães (Rodrigo); C.S.S. Neto; L.F.G. Soares

    2008-01-01

    Textual programming languages have proven to be difficult to learn and to use effectively for many people. For this reason, visual tools can be useful to abstract the complexity of such textual languages, minimizing the specification effort. In this paper we present a visual approach for

  16. PRODUCT TRIAL PROCESSING (PTP): A MODEL APPROACH ...

    African Journals Online (AJOL)

    Admin

    This study is a theoretical approach to consumers' processing of product trial, and equally explored ... consumer's first usage experience with a company's brand or product that is most important in determining ... product, what it is really marketing is the expected ..... confidence, thus there is a positive relationship between ...

  17. Nonlinear Modeling of the PEMFC Based On NNARX Approach

    OpenAIRE

    Shan-Jen Cheng; Te-Jen Chang; Kuang-Hsiung Tan; Shou-Ling Kuo

    2015-01-01

    The Polymer Electrolyte Membrane Fuel Cell (PEMFC) is a time-varying nonlinear dynamic system, and traditional linear modeling approaches struggle to estimate the structure of the PEMFC system correctly. For this reason, this paper presents nonlinear modeling of the PEMFC using the Neural Network Auto-regressive model with eXogenous inputs (NNARX) approach. The multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accurac...
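
    The NNARX structure, a feed-forward network regressing the output on lagged outputs and lagged exogenous inputs, can be sketched on synthetic data as below. The toy 'plant', the lag orders and the network size are assumptions made for illustration, not the paper's identification setup.

    ```python
    # NNARX sketch: y(t) is regressed on lagged outputs and lagged exogenous inputs with an MLP.
    # The toy nonlinear "plant", the lag orders and the network size are illustrative choices.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n = 2000
    u = rng.uniform(-1.0, 1.0, n)                       # exogenous input (e.g. load current)
    y = np.zeros(n)
    for t in range(2, n):                               # toy nonlinear dynamic system
        y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + np.tanh(u[t - 1]) + 0.01 * rng.normal()

    lags = 2
    X = np.column_stack([y[lags - 1:-1], y[lags - 2:-2], u[lags - 1:-1], u[lags - 2:-2]])
    target = y[lags:]

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X[:1500], target[:1500])
    print("one-step-ahead R^2 on held-out data:", round(model.score(X[1500:], target[1500:]), 3))
    ```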

  18. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    ... obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the ... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account

  19. Self-similar decay to the marginally stable ground state in a model for film flow over inclined wavy bottoms

    Directory of Open Access Journals (Sweden)

    Tobias Hacker

    2012-04-01

    The integral boundary layer system (IBL) with spatially periodic coefficients arises as a long wave approximation for the flow of a viscous incompressible fluid down a wavy inclined plane. The Nusselt-like stationary solution of the IBL is linearly at best marginally stable; i.e., it has essential spectrum at least up to the imaginary axis. Nevertheless, in this stable case we show that localized perturbations of the ground state decay in a self-similar way. The proof uses the renormalization group method in Bloch variables and the fact that in the stable case the Burgers equation is the amplitude equation for long waves of small amplitude in the IBL. It is the first time that such a proof is given for a quasilinear PDE with spatially periodic coefficients.

  20. Comparison of two novel approaches to model fibre reinforced concrete

    NARCIS (Netherlands)

    Radtke, F.K.F.; Simone, A.; Sluys, L.J.

    2009-01-01

    We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity

  1. The Effect of a Model's HIV Status on Self-Perceptions: A Self-Protective Similarity Bias.

    Science.gov (United States)

    Gump, Brooks B.; Kulik, James A.

    1995-01-01

    Examined how information about another person's HIV status influences self-perceptions and behavioral intentions. Individuals perceived their own personalities and behaviors as more dissimilar to another's if that person's HIV status was believed positive compared with negative or unknown. Exposure to an HIV-positive model produced greater intentions…

  2. Modeling thrombin generation: plasma composition based approach.

    Science.gov (United States)

    Brummel-Ziedins, Kathleen E; Everse, Stephen J; Mann, Kenneth G; Orfeo, Thomas

    2014-01-01

    Thrombin has multiple functions in blood coagulation and its regulation is central to maintaining the balance between hemorrhage and thrombosis. Empirical and computational methods that capture thrombin generation can provide advancements to current clinical screening of the hemostatic balance at the level of the individual. In any individual, procoagulant and anticoagulant factor levels together act to generate a unique coagulation phenotype (net balance) that is reflective of the sum of its developmental, environmental, genetic, nutritional and pharmacological influences. Defining such thrombin phenotypes may provide a means to track disease progression pre-crisis. In this review we briefly describe thrombin function, methods for assessing thrombin dynamics as a phenotypic marker, computationally derived thrombin phenotypes versus determined clinical phenotypes, the boundaries of normal range thrombin generation using plasma composition based approaches and the feasibility of these approaches for predicting risk.

  3. A simple approach to modeling ductile failure.

    Energy Technology Data Exchange (ETDEWEB)

    Wellman, Gerald William

    2012-06-01

    Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.

  4. A new approach for modeling composite materials

    Science.gov (United States)

    Alcaraz de la Osa, R.; Moreno, F.; Saiz, J. M.

    2013-03-01

    The increasing use of composite materials is due to their ability to tailor materials for special purposes, with applications evolving day by day. This is why predicting the properties of these systems from their constituents, or phases, has become so important. However, assigning macroscopical optical properties for these materials from the bulk properties of their constituents is not a straightforward task. In this research, we present a spectral analysis of three-dimensional random composite typical nanostructures using an Extension of the Discrete Dipole Approximation (E-DDA code), comparing different approaches and emphasizing the influences of optical properties of constituents and their concentration. In particular, we hypothesize a new approach that preserves the individual nature of the constituents introducing at the same time a variation in the optical properties of each discrete element that is driven by the surrounding medium. The results obtained with this new approach compare more favorably with the experiment than previous ones. We have also applied it to a non-conventional material composed of a metamaterial embedded in a dielectric matrix. Our version of the Discrete Dipole Approximation code, the EDDA code, has been formulated specifically to tackle this kind of problem, including materials with either magnetic and tensor properties.

  5. An Integrated Approach to Modeling Evacuation Behavior

    Science.gov (United States)

    2011-02-01

    A spate of recent hurricanes and other natural disasters have drawn a lot of attention to the evacuation decision of individuals. Here we focus on evacuation models that incorporate two economic phenomena that seem to be increasingly important in exp...

  6. Infectious disease modeling a hybrid system approach

    CERN Document Server

    Liu, Xinzhi

    2017-01-01

    This volume presents infectious diseases modeled mathematically, taking seasonality and changes in population behavior into account, using a switched and hybrid systems framework. The scope of coverage includes background on mathematical epidemiology, including classical formulations and results; a motivation for seasonal effects and changes in population behavior, an investigation into term-time forced epidemic models with switching parameters, and a detailed account of several different control strategies. The main goal is to study these models theoretically and to establish conditions under which eradication or persistence of the disease is guaranteed. In doing so, the long-term behavior of the models is determined through mathematical techniques from switched systems theory. Numerical simulations are also given to augment and illustrate the theoretical results and to help study the efficacy of the control schemes.
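
    The switched-systems flavour of such models can be illustrated with a term-time forced SIR sketch in which the transmission rate switches between a 'school open' and a 'school closed' value. Parameters are illustrative and the integration is a plain Euler step.

    ```python
    # Term-time forced SIR sketch: the transmission rate switches between two values
    # depending on the time of year (a simple switched-parameter system).  Illustrative only.

    BETA_OPEN, BETA_CLOSED = 0.4, 0.2   # 1/day, transmission when schools are open / closed
    GAMMA = 0.1                          # 1/day, recovery rate

    def beta(t_days):
        day_of_year = t_days % 365.0
        school_closed = 180.0 <= day_of_year < 240.0   # a two-month summer break (assumed)
        return BETA_CLOSED if school_closed else BETA_OPEN

    def simulate(years=3, dt=0.05):
        s, i, r = 0.99, 0.01, 0.0
        t = 0.0
        while t < years * 365.0:
            b = beta(t)
            ds = -b * s * i
            di = b * s * i - GAMMA * i
            dr = GAMMA * i
            s, i, r = s + ds * dt, i + di * dt, r + dr * dt
            t += dt
        return s, i, r

    s, i, r = simulate()
    print(f"after 3 years: S={s:.3f}, I={i:.5f}, R={r:.3f}")
    ```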

  7. On Combining Language Models: Oracle Approach

    National Research Council Canada - National Science Library

    Hacioglu, Kadri; Ward, Wayne

    2001-01-01

    In this paper, we address the problem of combining several language models (LMs). We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle...
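
    Linear and log-linear interpolation, the baselines mentioned in the abstract, can be written in a few lines for unigram distributions. The two toy component models and the interpolation weight below are invented.

    ```python
    # Toy illustration of linear vs. log-linear interpolation of two unigram language models.
    # The component models and the interpolation weight are invented for illustration.

    lm_news = {"the": 0.5, "market": 0.3, "game": 0.2}
    lm_sport = {"the": 0.5, "market": 0.1, "game": 0.4}

    def linear_interpolation(w, p1, p2, word):
        return w * p1[word] + (1.0 - w) * p2[word]

    def log_linear_interpolation(w, p1, p2, word, vocab):
        # Weighted geometric mean, renormalised over the vocabulary.
        score = lambda v: p1[v] ** w * p2[v] ** (1.0 - w)
        return score(word) / sum(score(v) for v in vocab)

    vocab = lm_news.keys()
    for word in vocab:
        lin = linear_interpolation(0.6, lm_news, lm_sport, word)
        logl = log_linear_interpolation(0.6, lm_news, lm_sport, word, vocab)
        print(f"{word:7s}  linear={lin:.3f}  log-linear={logl:.3f}")
    ```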

  8. Do recommender systems benefit users? a modeling approach

    Science.gov (United States)

    Yeung, Chi Ho

    2016-04-01

    Recommender systems are present in many web applications to guide purchase choices. They increase sales and benefit sellers, but whether they benefit customers by providing relevant products remains less explored. While in many cases the recommended products are relevant to users, in other cases customers may be tempted to purchase the products only because they are recommended. Here we introduce a model to examine the benefit of recommender systems for users, and find that recommendations from the system can be equivalent to random draws if one always follows the recommendations and seldom purchases according to his or her own preference. Nevertheless, with sufficient information about user preferences, recommendations become accurate and an abrupt transition to this accurate regime is observed for some of the studied algorithms. On the other hand, we find that high estimated accuracy indicated by common accuracy metrics is not necessarily equivalent to high real accuracy in matching users with products. This disagreement between estimated and real accuracy serves as an alarm for operators and researchers who evaluate recommender systems merely with accuracy metrics. We tested our model with a real dataset and observed similar behaviors. Finally, a recommendation approach with improved accuracy is suggested. These results imply that recommender systems can benefit users, but the more frequently a user purchases the recommended products, the less relevant the recommended products are in matching user taste.

  9. Advanced language modeling approaches, case study: Expert search

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2008-01-01

    This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the
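
    One of the listed ingredients, a document prior inside a language-modelling retrieval score, can be sketched as score(d, q) = log P(d) + Σ_t log((1 − λ)·P(t|d) + λ·P(t|collection)) with Jelinek-Mercer smoothing. The tiny collection, the priors and the smoothing weight below are invented.

    ```python
    # Sketch: query-likelihood retrieval with a document prior and Jelinek-Mercer smoothing.
    # The three "documents", the priors and the smoothing weight are invented.
    import math
    from collections import Counter

    docs = {
        "d1": "expert search in enterprise collections".split(),
        "d2": "language models for information retrieval".split(),
        "d3": "expert finding with language models".split(),
    }
    priors = {"d1": 0.2, "d2": 0.3, "d3": 0.5}   # e.g. derived from document length or link evidence
    LAMBDA = 0.3

    collection = [t for d in docs.values() for t in d]
    coll_tf = Counter(collection)

    def p_collection(term):
        return coll_tf[term] / len(collection)

    def score(doc_id, query):
        tf = Counter(docs[doc_id])
        s = math.log(priors[doc_id])
        for term in query:
            p_doc = tf[term] / len(docs[doc_id])
            s += math.log((1 - LAMBDA) * p_doc + LAMBDA * p_collection(term))
        return s

    query = "expert language models".split()
    for doc_id in sorted(docs, key=lambda d: score(d, query), reverse=True):
        print(doc_id, round(score(doc_id, query), 3))
    ```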

  10. Approaches to modelling hydrology and ecosystem interactions

    Science.gov (United States)

    Silberstein, Richard P.

    2014-05-01

    As the pressures of industry, agriculture and mining on groundwater resources increase, there is a burgeoning unmet need to be able to capture these multiple, direct and indirect stresses in a formal framework that will enable better assessment of impact scenarios. While there are many catchment hydrological models and there are some models that represent ecological states and change (e.g. FLAMES, Liedloff and Cook, 2007), these have not been linked in any deterministic or substantive way. Without such coupled eco-hydrological models, quantitative assessments of impacts from water use intensification on water dependent ecosystems under changing climate are difficult, if not impossible. The concept would include facility for direct and indirect water related stresses that may develop around mining and well operations, climate stresses, such as rainfall and temperature, biological stresses, such as diseases and invasive species, and competition such as encroachment from other competing land uses. Indirect water impacts could arise where, for example, a change in groundwater conditions has an impact on the stream flow regime, and hence on aquatic ecosystems. This paper reviews previous work examining models combining ecology and hydrology with a view to developing a conceptual framework linking a biophysically defensible model that combines ecosystem function with hydrology. The objective is to develop a model capable of representing the cumulative impact of multiple stresses on water resources and associated ecosystem function.

  11. Constructing a justice model based on Sen's capability approach

    OpenAIRE

    Yüksel, Sevgi; Yuksel, Sevgi

    2008-01-01

    The thesis provides a possible justice model based on Sen's capability approach. For this goal, we first analyze the general structure of a theory of justice, identifying the main variables and issues. Furthermore, based on Sen (2006) and Kolm (1998), we look at 'transcendental' and 'comparative' approaches to justice and concentrate on the sufficiency condition for the comparative approach. Then, taking Rawls' theory of justice as a starting point, we present how Sen's capability approach em...

  12. Challenges and opportunities for integrating lake ecosystem modelling approaches

    Science.gov (United States)

    Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.

    2010-01-01

    A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative

  13. Statistical Similarities Between WSA-ENLIL+Cone Model and MAVEN in Situ Observations From November 2014 to March 2016

    Science.gov (United States)

    Lentz, C. L.; Baker, D. N.; Jaynes, A. N.; Dewey, R. M.; Lee, C. O.; Halekas, J. S.; Brain, D. A.

    2018-02-01

    Normal solar wind flows and intense solar transient events interact directly with the upper Martian atmosphere due to the absence of an intrinsic global planetary magnetic field. Since the launch of the Mars Atmosphere and Volatile EvolutioN (MAVEN) mission, there are now new means to directly observe solar wind parameters at the planet's orbital location for limited time spans. Due to MAVEN's highly elliptical orbit, in situ measurements cannot be taken while MAVEN is inside Mars' magnetosheath. To model solar wind conditions during these atmospheric and magnetospheric passages, this research project utilized the solar wind forecasting capabilities of the WSA-ENLIL+Cone model. The model was used to simulate solar wind parameters that included magnetic field magnitude, plasma particle density, dynamic pressure, proton temperature, and velocity during a segment lasting four Carrington rotations. An additional simulation that lasted 18 Carrington rotations was then conducted. The precision of each simulation was examined for intervals when MAVEN was in the upstream solar wind, that is, with no exospheric or magnetospheric phenomena altering in situ measurements. It was determined that generalized, extensive simulations have prediction capabilities comparable to those of shorter, more comprehensive simulations. Generally, this study aimed to quantify the loss of detail in long-term simulations and to determine if extended simulations can provide accurate, continuous upstream solar wind conditions when there is a lack of in situ measurements.

  14. Evaluation of approaches focused on modelling of organic carbon stocks using the RothC model

    Science.gov (United States)

    Koco, Štefan; Skalský, Rastislav; Makovníková, Jarmila; Tarasovičová, Zuzana; Barančíková, Gabriela

    2014-05-01

    The aim of current efforts in the European area is the protection of soil organic matter, which is included in all relevant documents related to the protection of soil. Modelling of organic carbon stocks under anticipated climate change, or under different land management, can significantly help in short- and long-term forecasting of the state of soil organic matter. The RothC model can be applied over time periods of several years to centuries and has been tested in long-term experiments within a large range of soil types and climatic conditions in Europe. For the initialization of the RothC model, knowledge about the carbon pool sizes is essential. Pool size characterization can be obtained from equilibrium model runs, but this approach is time consuming and tedious, especially for larger scale simulations. Due to this complexity we searched for new ways to simplify and accelerate this process. The paper presents a comparison of two approaches for SOC stock modelling in the same area. The modelling was carried out on the basis of unique land use, management and soil data inputs for each simulation unit separately. We modelled 1617 simulation units of a 1x1 km grid on the territory of the agroclimatic region Žitný ostrov in the southwest of Slovakia. The first approach consists of creating groups of simulation units with similar input values. The groups were created after testing and validating that the modelling results for individual simulation units matched the results obtained by modelling the average input values for the whole group. Tests of the equilibrium model for intervals of 5 t.ha-1 of initial SOC stock showed minimal differences between the results for individual units and the result for the average value of the whole interval. Management input data on plant residues and farmyard manure for modelling carbon turnover were also the same for several simulation units. Combining these groups (intervals of initial
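
    The grouping idea described above, running the slow equilibrium initialization once per interval of initial SOC stock instead of once per simulation unit, can be sketched as follows. The function run_rothc_equilibrium is a hypothetical stand-in for an actual RothC equilibrium run, the 5 t/ha bin width follows the interval mentioned in the abstract, and the pool names and fractions in the dummy function are illustrative only.

        import numpy as np

        def group_and_initialize(initial_soc, run_rothc_equilibrium, bin_width=5.0):
            """Bin simulation units by initial SOC stock (t/ha) and run the costly
            equilibrium initialization once per bin, on the bin-average SOC."""
            initial_soc = np.asarray(initial_soc, dtype=float)
            bins = np.floor(initial_soc / bin_width).astype(int)
            pools = [None] * initial_soc.size
            for b in np.unique(bins):
                members = np.where(bins == b)[0]
                result = run_rothc_equilibrium(initial_soc[members].mean())  # one run per group
                for i in members:
                    pools[i] = result                                        # reused by all members
            return pools

        # Dummy stand-in for the equilibrium run, returning made-up pool sizes.
        fake_equilibrium = lambda soc: {"DPM": 0.01 * soc, "RPM": 0.12 * soc,
                                        "BIO": 0.03 * soc, "HUM": 0.84 * soc}
        print(group_and_initialize([42.1, 43.8, 47.2, 51.0, 52.6], fake_equilibrium))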

  15. An ontology-based approach for modelling architectural styles

    OpenAIRE

    Pahl, Claus; Giesecke, Simon; Hasselbring, Wilhelm

    2007-01-01

    The conceptual modelling of software architectures is of central importance for the quality of a software system. A rich modelling language is required to integrate the different aspects of architecture modelling, such as architectural styles, structural and behavioural modelling, into a coherent framework. We propose an ontological approach for architectural style modelling based on description logic as an abstract, meta-level modelling instrument. Architect...

  16. Mathematical modelling a case studies approach

    CERN Document Server

    Illner, Reinhard; McCollum, Samantha; Roode, Thea van

    2004-01-01

    Mathematical modelling is a subject without boundaries. It is the means by which mathematics becomes useful to virtually any subject. Moreover, modelling has been and continues to be a driving force for the development of mathematics itself. This book explains the process of modelling real situations to obtain mathematical problems that can be analyzed, thus solving the original problem. The presentation is in the form of case studies, which are developed much as they would be in true applications. In many cases, an initial model is created, then modified along the way. Some cases are familiar, such as the evaluation of an annuity. Others are unique, such as the fascinating situation in which an engineer, armed only with a slide rule, had 24 hours to compute whether a valve would hold when a temporary rock plug was removed from a water tunnel. Each chapter ends with a set of exercises and some suggestions for class projects. Some projects are extensive, as with the explorations of the predator-prey model; oth...

  17. Multilayered epithelium in a rat model and human Barrett's esophagus: Similar expression patterns of transcription factors and differentiation markers

    Directory of Open Access Journals (Sweden)

    Yang Chung S

    2008-01-01

    Background In rats, esophagogastroduodenal anastomosis (EGDA) without concomitant chemical carcinogen treatment leads to gastroesophageal reflux disease, multilayered epithelium (MLE, a presumed precursor of intestinal metaplasia), columnar-lined esophagus, dysplasia, and esophageal adenocarcinoma. Previously we have shown that columnar-lined esophagus in EGDA rats resembled human Barrett's esophagus (BE) in its morphology, mucin features and expression of differentiation markers (Lab. Invest. 2004;84:753–765). The purpose of this study was to compare the phenotype of rat MLE with human MLE, in order to gain insight into the nature of MLE and its potential role in the development of BE. Methods Serial sectioning was performed on tissue samples from 32 EGDA rats and 13 patients with established BE. Tissue sections were immunohistochemically stained for a variety of transcription factors and differentiation markers of esophageal squamous epithelium and intestinal columnar epithelium. Results We detected MLE in 56.3% (18/32) of EGDA rats, and in all human samples. As expected, both rat and human squamous epithelium, but not intestinal metaplasia, expressed squamous transcription factors and differentiation markers (p63, Sox2, CK14 and CK4) in all cases. Both rat and human intestinal metaplasia, but not squamous epithelium, expressed intestinal transcription factors and differentiation markers (Cdx2, GATA4, HNF1α, villin and Muc2) in all cases. Rat MLE shared expression patterns of Sox2, CK4, Cdx2, GATA4, villin and Muc2 with human MLE. However, p63 and CK14 were expressed in a higher proportion of rat MLE compared to humans. Conclusion These data indicate that rat MLE shares similar properties to human MLE in its expression pattern of these markers, notwithstanding small differences, and support the concept that MLE may be a transitional stage in the metaplastic conversion of squamous to columnar epithelium in BE.

  18. The simplified models approach to constraining supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Genessis [Institut fuer Theoretische Physik, Karlsruher Institut fuer Technologie (KIT), Wolfgang-Gaede-Str. 1, 76131 Karlsruhe (Germany); Kulkarni, Suchita [Laboratoire de Physique Subatomique et de Cosmologie, Universite Grenoble Alpes, CNRS IN2P3, 53 Avenue des Martyrs, 38026 Grenoble (France)

    2015-07-01

    The interpretation of experimental results at the LHC is model dependent, which implies that the searches provide limited constraints on scenarios such as supersymmetry (SUSY). The Simplified Model Spectra (SMS) framework used by the ATLAS and CMS collaborations is useful to overcome this limitation. The SMS framework involves a small number of parameters (all the properties are reduced to the mass spectrum, the production cross section and the branching ratio) and hence is more generic than presenting results in terms of soft parameters. In our work, the SMS framework was used to test the Natural SUSY (NSUSY) scenario. To accomplish this task, two automated tools (SModelS and Fastlim) were used to decompose the NSUSY parameter space in terms of simplified models and to confront the theoretical predictions against the experimental results. The achievements of both tools, as well as their strengths and limitations, are presented here for the NSUSY scenario.

  19. Lightweight approach to model traceability in a CASE tool

    Science.gov (United States)

    Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita

    2017-07-01

    The term "model-driven" is by no means a new buzzword within the system development community. Nevertheless, the ever-increasing complexity of model-driven approaches keeps fueling discussions around this paradigm and pushes researchers to develop new and more effective approaches to system development. With this increasing complexity, model traceability, and model management as a whole, become indispensable activities of the model-driven system development process. The main goal of this paper is to present the conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.

  20. New approaches for modeling type Ia supernovae

    International Nuclear Information System (INIS)

    Zingale, Michael; Almgren, Ann S.; Bell, John B.; Day, Marcus S.; Rendleman, Charles A.; Woosley, Stan

    2007-01-01

    Type Ia supernovae (SNe Ia) are the largest thermonuclear explosions in the Universe. Their light output can be seen across great distances and has led to the discovery that the expansion rate of the Universe is accelerating. Despite the significance of SNe Ia, there are still a large number of uncertainties in current theoretical models. Computational modeling offers the promise to help answer the outstanding questions. However, even with today's supercomputers, such calculations are extremely challenging because of the wide range of length and timescales. In this paper, we discuss several new algorithms for simulations of SNe Ia and demonstrate some of their successes

  1. Chancroid transmission dynamics: a mathematical modeling approach.

    Science.gov (United States)

    Bhunu, C P; Mushayabasa, S

    2011-12-01

    Mathematical models have long been used to better understand disease transmission dynamics and how to effectively control them. Here, a chancroid infection model is presented and analyzed. The disease-free equilibrium is shown to be globally asymptotically stable when the reproduction number is less than unity. High levels of treatment are shown to reduce the reproduction number suggesting that treatment has the potential to control chancroid infections in any given community. This result is also supported by numerical simulations which show a decline in chancroid cases whenever the reproduction number is less than unity.
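
    The threshold behaviour summarized above (infection dies out once the reproduction number falls below unity) can be illustrated with a generic SIR-type model in which treatment adds an extra removal rate tau, so that R0 = beta / (gamma + tau). This is a sketch only; the paper's actual chancroid model structure and parameter values are not reproduced here.

        def epidemic_size(beta, gamma, tau, days=400, dt=0.1):
            """Euler integration of a generic SIR model with treatment rate tau;
            returns the fraction of the population ever infected."""
            s, i, r = 0.99, 0.01, 0.0
            for _ in range(int(days / dt)):
                new_inf = beta * s * i
                removal = (gamma + tau) * i
                s -= new_inf * dt
                i += (new_inf - removal) * dt
                r += removal * dt
            return i + r

        beta, gamma = 0.3, 0.1
        for tau in (0.05, 0.25):          # low vs. high treatment rate
            r0 = beta / (gamma + tau)
            print(f"tau={tau}: R0={r0:.2f}, "
                  f"fraction ever infected={epidemic_size(beta, gamma, tau):.2f}")

    With the low treatment rate R0 is about 2 and most of the population is eventually infected; with the high treatment rate R0 drops below one and the outbreak fizzles out, matching the stability result quoted in the abstract.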

  2. A kinetic approach to magnetospheric modeling

    International Nuclear Information System (INIS)

    Whipple, E.C. Jr.

    1979-01-01

    The earth's magnetosphere is caused by the interaction between the flowing solar wind and the earth's magnetic dipole, with the distorted magnetic field in the outer parts of the magnetosphere due to the current systems resulting from this interaction. It is surprising that even the conceptually simple problem of the collisionless interaction of a flowing plasma with a dipole magnetic field has not been solved. A kinetic approach is essential if one is to take into account the dispersion of particles with different energies and pitch angles and the fact that particles on different trajectories have different histories and may come from different sources. Solving the interaction problem involves finding the various types of possible trajectories, populating them with particles appropriately, and then treating the electric and magnetic fields self-consistently with the resulting particle densities and currents. This approach is illustrated by formulating a procedure for solving the collisionless interaction problem on open field lines in the case of a slowly flowing magnetized plasma interacting with a magnetic dipole

  3. A kinetic approach to magnetospheric modeling

    Science.gov (United States)

    Whipple, E. C., Jr.

    1979-01-01

    The earth's magnetosphere is caused by the interaction between the flowing solar wind and the earth's magnetic dipole, with the distorted magnetic field in the outer parts of the magnetosphere due to the current systems resulting from this interaction. It is surprising that even the conceptually simple problem of the collisionless interaction of a flowing plasma with a dipole magnetic field has not been solved. A kinetic approach is essential if one is to take into account the dispersion of particles with different energies and pitch angles and the fact that particles on different trajectories have different histories and may come from different sources. Solving the interaction problem involves finding the various types of possible trajectories, populating them with particles appropriately, and then treating the electric and magnetic fields self-consistently with the resulting particle densities and currents. This approach is illustrated by formulating a procedure for solving the collisionless interaction problem on open field lines in the case of a slowly flowing magnetized plasma interacting with a magnetic dipole.

  4. The role of intergenerational similarity and parenting in adolescent self-criticism: An actor-partner interdependence model.

    Science.gov (United States)

    Bleys, Dries; Soenens, Bart; Boone, Liesbet; Claes, Stephan; Vliegen, Nicole; Luyten, Patrick

    2016-06-01

    Research investigating the development of adolescent self-criticism has typically focused on the role of either parental self-criticism or parenting. This study used an actor-partner interdependence model to examine an integrated theoretical model in which achievement-oriented psychological control has an intervening role in the relation between parental and adolescent self-criticism. Additionally, the relative contribution of both parents and the moderating role of adolescent gender were examined. Participants were 284 adolescents (M = 14 years, range = 12-16 years) and their parents (M = 46 years, range = 32-63 years). Results showed that only maternal self-criticism was directly related to adolescent self-criticism. However, both parents' achievement-oriented psychological control had an intervening role in the relation between parent and adolescent self-criticism in both boys and girls. Moreover, one parent's achievement-oriented psychological control was not predicted by the self-criticism of the other parent. Copyright © 2016 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  5. A novel approach to modeling atmospheric convection

    Science.gov (United States)

    Goodman, A.

    2016-12-01

    The inadequate representation of clouds continues to be a large source of uncertainty in the projections from global climate models (GCMs). With continuous advances in computational power, however, the ability for GCMs to explicitly resolve cumulus convection will soon be realized. For this purpose, Jung and Arakawa (2008) proposed the Vector Vorticity Model (VVM), in which vorticity is the predicted variable instead of momentum. This has the advantage of eliminating the pressure gradient force within the framework of an anelastic system. However, the VVM was designed for use on a planar quadrilateral grid, making it unsuitable for implementation in global models discretized on the sphere. Here we have proposed a modification to the VVM where instead the curl of the horizontal vorticity is the primary predicted variable. This allows us to maintain the benefits of the original VVM while working within the constraints of a non-quadrilateral mesh. We found that our proposed model produced results from a warm bubble simulation that were consistent with the VVM. Further improvements that can be made to the VVM are also discussed.

  6. INDIVIDUAL BASED MODELLING APPROACH TO THERMAL ...

    Science.gov (United States)

    Diadromous fish populations in the Pacific Northwest face challenges along their migratory routes from declining habitat quality, harvest, and barriers to longitudinal connectivity. Changes in river temperature regimes are producing an additional challenge for upstream migrating adult salmon and steelhead, species that are sensitive to absolute and cumulative thermal exposure. Adult salmon populations have been shown to utilize cold water patches along migration routes when mainstem river temperatures exceed thermal optimums. We are employing an individual based model (IBM) to explore the costs and benefits of spatially-distributed cold water refugia for adult migrating salmon. Our model, developed in the HexSim platform, is built around a mechanistic behavioral decision tree that drives individual interactions with their spatially explicit simulated environment. Population-scale responses to dynamic thermal regimes, coupled with other stressors such as disease and harvest, become emergent properties of the spatial IBM. Other model outputs include arrival times, species-specific survival rates, body energetic content, and reproductive fitness levels. Here, we discuss the challenges associated with parameterizing an individual based model of salmon and steelhead in a section of the Columbia River. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec

  7. A new approach to model mixed hydrates

    Czech Academy of Sciences Publication Activity Database

    Hielscher, S.; Vinš, Václav; Jäger, A.; Hrubý, Jan; Breitkopf, C.; Span, R.

    2018-01-01

    Roč. 459, March (2018), s. 170-185 ISSN 0378-3812 R&D Projects: GA ČR(CZ) GA17-08218S Institutional support: RVO:61388998 Keywords : gas hydrate * mixture * modeling Subject RIV: BJ - Thermodynamics Impact factor: 2.473, year: 2016 https://www.sciencedirect.com/science/article/pii/S0378381217304983

  8. Energy and development : A modelling approach

    NARCIS (Netherlands)

    van Ruijven, B.J.|info:eu-repo/dai/nl/304834521

    2008-01-01

    Rapid economic growth of developing countries like India and China implies that these countries become important actors in the global energy system. Examples of this impact are the present-day oil shortages and rapidly increasing emissions of greenhouse gases. Global energy models are used to explore

  9. Modeling Approaches for Describing Microbial Population Heterogeneity

    DEFF Research Database (Denmark)

    Lencastre Fernandes, Rita

    environmental conditions. Three cases are presented and discussed in this thesis. Common to all is the use of S. cerevisiae as model organism, and the use of cell size and cell cycle position as single-cell descriptors. The first case focuses on the experimental and mathematical description of a yeast...

  10. Energy and Development. A Modelling Approach

    International Nuclear Information System (INIS)

    Van Ruijven, B.J.

    2008-01-01

    Rapid economic growth of developing countries like India and China implies that these countries become important actors in the global energy system. Examples of this impact are the present day oil shortages and rapidly increasing emissions of greenhouse gases. Global energy models are used to explore possible future developments of the global energy system and identify policies to prevent potential problems. Such estimations of future energy use in developing countries are very uncertain. Crucial factors in the future energy use of these regions are electrification, urbanisation and income distribution, issues that are generally not included in present day global energy models. Model simulations in this thesis show that current insight in developments in low-income regions lead to a wide range of expected energy use in 2030 of the residential and transport sectors. This is mainly caused by many different model calibration options that result from the limited data availability for model development and calibration. We developed a method to identify the impact of model calibration uncertainty on future projections. We developed a new model for residential energy use in India, in collaboration with the Indian Institute of Science. Experiments with this model show that the impact of electrification and income distribution is less univocal than often assumed. The use of fuelwood, with related health risks, can decrease rapidly if the income of poor groups increases. However, there is a trade off in terms of CO2 emissions because these groups gain access to electricity and the ownership of appliances increases. Another issue is the potential role of new technologies in developing countries: will they use the opportunities of leapfrogging? We explored the potential role of hydrogen, an energy carrier that might play a central role in a sustainable energy system. We found that hydrogen only plays a role before 2050 under very optimistic assumptions. Regional energy

  11. 68Ga/177Lu-labeled DOTA-TATE shows similar imaging and biodistribution in neuroendocrine tumor model.

    Science.gov (United States)

    Liu, Fei; Zhu, Hua; Yu, Jiangyuan; Han, Xuedi; Xie, Qinghua; Liu, Teli; Xia, Chuanqin; Li, Nan; Yang, Zhi

    2017-06-01

    Somatostatin receptors are overexpressed in neuroendocrine tumors, whose endogenous ligand is somatostatin. DOTA-TATE is an analogue of somatostatin, which shows high binding affinity to somatostatin receptors. We aim to evaluate 68Ga/177Lu-labeled DOTA-TATE kits in a neuroendocrine tumor model for molecular imaging and to attempt human positron emission tomography/computed tomography imaging of 68Ga-DOTA-TATE in neuroendocrine tumor patients. DOTA-TATE kits were formulated and radiolabeled with 68Ga/177Lu to give 68Ga/177Lu-DOTA-TATE (M-DOTA-TATE). In vitro and in vivo stability studies of 177Lu-DOTA-TATE were performed. Nude mice bearing human tumors were injected with 68Ga-DOTA-TATE or 177Lu-DOTA-TATE for micro-positron emission tomography and micro-single-photon emission computed tomography/computed tomography imaging separately, and clinical positron emission tomography/computed tomography images of 68Ga-DOTA-TATE were obtained at 1 h post intravenous injection from patients with neuroendocrine tumors. Micro-positron emission tomography and micro-single-photon emission computed tomography/computed tomography imaging of 68Ga-DOTA-TATE and 177Lu-DOTA-TATE both showed clear tumor uptake, which could be blocked by excess DOTA-TATE. In addition, 68Ga-DOTA-TATE positron emission tomography/computed tomography imaging in neuroendocrine tumor patients could show primary and metastatic lesions. 68Ga-DOTA-TATE and 177Lu-DOTA-TATE could accumulate in tumors in animal models, paving the way for better clinical peptide receptor radionuclide therapy for neuroendocrine tumor patients in the Asian population.

  12. MULTI-LEVEL SAMPLING APPROACH FOR CONTINOUS LOSS DETECTION USING ITERATIVE WINDOW AND STATISTICAL MODEL

    OpenAIRE

    Mohd Fo'ad Rohani; Mohd Aizaini Maarof; Ali Selamat; Houssain Kettani

    2010-01-01

    This paper proposes a Multi-Level Sampling (MLS) approach for continuous Loss of Self-Similarity (LoSS) detection using an iterative window. The method defines LoSS based on the Second-Order Self-Similarity (SOSS) statistical model. The Optimization Method (OM) is used to estimate the self-similarity parameter, since it is fast and more accurate in comparison with other estimation methods known in the literature. A probability of LoSS detection is introduced to measure continuous LoSS detection performance...
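
    To make the detection idea concrete, the sketch below estimates the Hurst parameter of one traffic window with the textbook aggregated-variance method and flags loss of self-similarity when the estimate approaches 0.5 (short-range dependence). This substitutes a simple estimator for the paper's Optimization Method, and the 0.6 threshold and synthetic traffic are illustrative assumptions only.

        import numpy as np

        def hurst_aggregated_variance(x, levels=(1, 2, 4, 8, 16, 32)):
            """Estimate H from the slope of log(Var(X^(m))) vs log(m):
            slope = 2H - 2 for self-similar traffic."""
            x = np.asarray(x, dtype=float)
            log_m, log_var = [], []
            for m in levels:
                n_blocks = len(x) // m
                agg = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
                log_m.append(np.log(m))
                log_var.append(np.log(agg.var()))
            slope, _ = np.polyfit(log_m, log_var, 1)
            return 1.0 + slope / 2.0

        def detect_loss(window, threshold=0.6):
            """Flag loss of self-similarity when H falls below a chosen threshold."""
            h = hurst_aggregated_variance(window)
            return h, h < threshold

        rng = np.random.default_rng(0)
        poisson_like = rng.poisson(100, size=4096)   # short-range dependent traffic
        h, loss = detect_loss(poisson_like)
        print(f"H estimate = {h:.2f}, LoSS detected: {loss}")

    For the independent Poisson counts generated here the estimate is close to 0.5, so LoSS is flagged; genuinely self-similar traffic would yield an H noticeably above the threshold.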

  13. Integration models: multicultural and liberal approaches confronted

    Science.gov (United States)

    Janicki, Wojciech

    2012-01-01

    European societies have been shaped by their Christian past, the upsurge of international migration, democratic rule and a liberal tradition rooted in religious tolerance. Accelerating globalization processes impose new challenges on European societies striving to protect their diversity. This struggle is especially clearly visible in the case of minorities trying to resist melting into the mainstream culture. European countries' legal systems and cultural policies respond to these efforts in many ways. Respecting identity-politics-driven group rights seems to be the most common approach, resulting in the creation of a multicultural society. However, the outcome of respecting group rights may be remarkably contradictory both to individual rights growing out of the liberal tradition, and to the reinforced concept of integration of immigrants into host societies. This paper discusses the upturn of identity politics in the context of both individual rights and the integration of European societies.

  14. Modelling thermal plume impacts - Kalpakkam approach

    International Nuclear Information System (INIS)

    Rao, T.S.; Anup Kumar, B.; Narasimhan, S.V.

    2002-01-01

    A good understanding of temperature patterns in the receiving waters is essential to know the heat dissipation from thermal plumes originating from coastal power plants. The seasonal temperature profiles of the Kalpakkam coast near the Madras Atomic Power Station (MAPS) thermal outfall site are determined and analysed. It is observed that the seasonal current reversal in the nearshore zone is one of the major mechanisms for the transport of effluents away from the point of mixing. To further refine our understanding of the mixing and dilution processes, it is necessary to numerically simulate the coastal ocean processes by parameterising the key factors concerned. In this paper, we outline the experimental approach to achieve this objective. (author)

  15. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Liao, James C. [Univ. of California, Los Angeles, CA (United States)

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.

  16. Nuclear security assessment with Markov model approach

    International Nuclear Information System (INIS)

    Suzuki, Mitsutoshi; Terao, Norichika

    2013-01-01

    A nuclear security risk assessment with a Markov model based on random events is performed to explore an evaluation methodology for physical protection in nuclear facilities. Because security incidents are initiated by malicious and intentional acts, expert judgment and Bayesian updating are used to estimate scenario and initiation likelihoods, and it is assumed that a Markov model derived from a stochastic process can be applied to the incident sequence. Both an unauthorized intrusion as a Design Basis Threat (DBT) and a stand-off attack as a beyond-DBT event are assumed for hypothetical facilities, and the performance of physical protection as well as the mitigation and minimization of consequences are investigated to develop the assessment methodology in a semi-quantitative manner. It is shown that cooperation between the facility operator and the security authority is important to respond to beyond-DBT incidents. (author)
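
    As a minimal, hypothetical illustration of treating an incident sequence as a Markov chain (the states, transition probabilities and expert-judgment updating of the paper are not reproduced here), the sketch below propagates a discrete-time chain from an "intrusion attempted" state to the absorbing outcomes "neutralized" and "sabotage completed" and reads off their long-run probabilities.

        import numpy as np

        # Hypothetical states: 0 attempt, 1 detected, 2 neutralized (absorbing),
        # 3 sabotage completed (absorbing). Transition probabilities are illustrative only.
        P = np.array([
            [0.0, 0.7, 0.0, 0.3],   # attempt -> detected, or undetected success
            [0.0, 0.0, 0.8, 0.2],   # detected -> neutralized, or success anyway
            [0.0, 0.0, 1.0, 0.0],   # neutralized stays neutralized
            [0.0, 0.0, 0.0, 1.0],   # completed stays completed
        ])

        state = np.array([1.0, 0.0, 0.0, 0.0])   # incident starts as an attempt
        for _ in range(50):                       # iterate until absorption
            state = state @ P
        print(f"P(neutralized) = {state[2]:.2f}, P(sabotage completed) = {state[3]:.2f}")

    With these made-up numbers the chain is absorbed after two steps, giving a 0.56 probability of neutralization; in the paper's semi-quantitative setting the transition probabilities would instead come from expert judgment and Bayesian updating.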

  17. An Approach for Modeling Supplier Resilience

    Science.gov (United States)

    2016-04-30

    interests include resilience modeling of supply chains, reliability engineering, and meta-heuristic optimization. [m.hosseini@ou.edu] Abstract ... be availability, or the extent to which the products produced by the supply chain are available for use (measured as a ratio of uptime to total time ... of the use of the product). Available systems are important in many industries, particularly in the Department of Defense, where weapons systems

  18. Tumour resistance to cisplatin: a modelling approach

    International Nuclear Information System (INIS)

    Marcu, L; Bezak, E; Olver, I; Doorn, T van

    2005-01-01

    Although chemotherapy has revolutionized the treatment of haematological tumours, in many common solid tumours the success has been limited. Some of the reasons for the limitations are: the timing of drug delivery, resistance to the drug, repopulation between cycles of chemotherapy and the lack of complete understanding of the pharmacokinetics and pharmacodynamics of a specific agent. Cisplatin is among the most effective cytotoxic agents used in head and neck cancer treatments. When modelling cisplatin as a single agent, the properties of cisplatin only have to be taken into account, reducing the number of assumptions that are considered in the generalized chemotherapy models. The aim of the present paper is to model the biological effect of cisplatin and to simulate the consequence of cisplatin resistance on tumour control. The 'treated' tumour is a squamous cell carcinoma of the head and neck, previously grown by computer-based Monte Carlo techniques. The model maintained the biological constitution of a tumour through the generation of stem cells, proliferating cells and non-proliferating cells. Cell kinetic parameters (mean cell cycle time, cell loss factor, thymidine labelling index) were also consistent with the literature. A sensitivity study on the contribution of various mechanisms leading to drug resistance is undertaken. To quantify the extent of drug resistance, the cisplatin resistance factor (CRF) is defined as the ratio between the number of surviving cells of the resistant population and the number of surviving cells of the sensitive population, determined after the same treatment time. It is shown that there is a supra-linear dependence of CRF on the percentage of cisplatin-DNA adducts formed, and a sigmoid-like dependence between CRF and the percentage of cells killed in resistant tumours. Drug resistance is shown to be a cumulative process which eventually can overcome tumour regression leading to treatment failure
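
    Written out as a formula, the cisplatin resistance factor defined in this abstract is the ratio of surviving cell numbers in the resistant and sensitive populations, both evaluated after the same treatment time t:

        \mathrm{CRF}(t) = \frac{N_{\mathrm{surviving}}^{\mathrm{resistant}}(t)}{N_{\mathrm{surviving}}^{\mathrm{sensitive}}(t)}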

  19. Tumour resistance to cisplatin: a modelling approach

    Energy Technology Data Exchange (ETDEWEB)

    Marcu, L [School of Chemistry and Physics, University of Adelaide, North Terrace, SA 5000 (Australia); Bezak, E [School of Chemistry and Physics, University of Adelaide, North Terrace, SA 5000 (Australia); Olver, I [Faculty of Medicine, University of Adelaide, North Terrace, SA 5000 (Australia); Doorn, T van [School of Chemistry and Physics, University of Adelaide, North Terrace, SA 5000 (Australia)

    2005-01-07

    Although chemotherapy has revolutionized the treatment of haematological tumours, in many common solid tumours the success has been limited. Some of the reasons for the limitations are: the timing of drug delivery, resistance to the drug, repopulation between cycles of chemotherapy and the lack of complete understanding of the pharmacokinetics and pharmacodynamics of a specific agent. Cisplatin is among the most effective cytotoxic agents used in head and neck cancer treatments. When modelling cisplatin as a single agent, the properties of cisplatin only have to be taken into account, reducing the number of assumptions that are considered in the generalized chemotherapy models. The aim of the present paper is to model the biological effect of cisplatin and to simulate the consequence of cisplatin resistance on tumour control. The 'treated' tumour is a squamous cell carcinoma of the head and neck, previously grown by computer-based Monte Carlo techniques. The model maintained the biological constitution of a tumour through the generation of stem cells, proliferating cells and non-proliferating cells. Cell kinetic parameters (mean cell cycle time, cell loss factor, thymidine labelling index) were also consistent with the literature. A sensitivity study on the contribution of various mechanisms leading to drug resistance is undertaken. To quantify the extent of drug resistance, the cisplatin resistance factor (CRF) is defined as the ratio between the number of surviving cells of the resistant population and the number of surviving cells of the sensitive population, determined after the same treatment time. It is shown that there is a supra-linear dependence of CRF on the percentage of cisplatin-DNA adducts formed, and a sigmoid-like dependence between CRF and the percentage of cells killed in resistant tumours. Drug resistance is shown to be a cumulative process which eventually can overcome tumour regression leading to treatment failure.

  20. ISM Approach to Model Offshore Outsourcing Risks

    Directory of Open Access Journals (Sweden)

    Sunand Kumar

    2014-07-01

    In an effort to achieve a competitive advantage via cost reductions and improved market responsiveness, organizations are increasingly employing offshore outsourcing as a major component of their supply chain strategies. However, as is evident from the literature, a number of risks, such as political risk, risk due to cultural differences, compliance and regulatory risk, opportunistic risk and organizational structural risk, adversely affect the performance of offshore outsourcing in a supply chain network. This also leads to dissatisfaction among different stakeholders. The main objective of this paper is to identify and understand the mutual interactions among the various risks which affect the performance of offshore outsourcing. To this end, the authors have identified various risks through an extensive review of the literature. From this information, an integrated model of the risks affecting offshore outsourcing is developed using interpretive structural modelling (ISM), and the structural relationships between these risks are modeled. Further, MICMAC analysis is carried out to analyze the driving power and dependence of the risks, which should help managers identify and classify important criteria and reveal the direct and indirect effects of each criterion on offshore outsourcing. Results show that political risk and risk due to cultural differences act as strong drivers.

  1. Remote sensing approach to structural modelling

    International Nuclear Information System (INIS)

    El Ghawaby, M.A.

    1989-01-01

    Remote sensing techniques are quite dependable tools for investigating geologic problems, especially those related to structural aspects. Landsat imagery provides discrimination between rock units, detection of large-scale structures such as folds and faults, as well as small-scale fabric elements such as foliation and banding. In order to fulfill the aim of a geologic application of remote sensing, some essential survey maps should be produced from the images prior to the structural interpretation: land-use, landform, drainage pattern, lithological unit and structural lineament maps. Afterwards, field verification should lead to the interpretation of a comprehensive structural model of the study area to apply to the target problem. To deduce such a model, there are two ways of analysis the interpreter may go through: the direct and the indirect methods. The direct one is needed in cases where the resources or the targets are controlled by an obvious or exposed structural element or pattern. The indirect way is necessary for areas where the target is governed by a complicated structural pattern. Some case histories of structural modelling methods applied successfully for the exploration of radioactive minerals, iron deposits and groundwater aquifers in Egypt are presented. Progress in imagery, enhancement and integration of remote sensing data with other geophysical and geochemical data allows a geologic interpretation to be carried out that is better than that achieved with either of the individual data sets. 9 refs

  2. Inspiration or deflation? Feeling similar or dissimilar to slim and plus-size models affects self-evaluation of restrained eaters.

    Science.gov (United States)

    Papies, Esther K; Nicolaije, Kim A H

    2012-01-01

    The present studies examined the effect of perceiving images of slim and plus-size models on restrained eaters' self-evaluation. While previous research has found that such images can lead to either inspiration or deflation, we argue that these inconsistencies can be explained by differences in perceived similarity with the presented model. The results of two studies (ns=52 and 99) confirmed this and revealed that restrained eaters with high (low) perceived similarity to the model showed more positive (negative) self-evaluations when they viewed a slim model, compared to a plus-size model. In addition, Study 2 showed that inducing in participants a similarities mindset led to more positive self-evaluations after viewing a slim compared to a plus-size model, but only among restrained eaters with a relatively high BMI. These results are discussed in the context of research on social comparison processes and with regard to interventions for protection against the possible detrimental effects of media images. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. A moving approach for the Vector Hysteron Model

    Energy Technology Data Exchange (ETDEWEB)

    Cardelli, E. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Faba, A., E-mail: antonio.faba@unipg.it [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Laudani, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy); Quondam Antonio, S. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Riganti Fulginei, F.; Salvini, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)

    2016-04-01

    A moving approach for the VHM (Vector Hysteron Model) is described here, to reconstruct both scalar and rotational magnetization of electrical steels with weak anisotropy, such as non-oriented grain silicon steel. The hysteron distribution is postulated to be a function of the magnetization state of the material, in order to overcome the practical limitation of the congruency property of the standard VHM approach. By using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching the saturation region, allowing a real improvement with respect to the previous approach.

  4. A choice modelling analysis on the similarity between distribution utilities' and industrial customers' price and quality preferences

    International Nuclear Information System (INIS)

    Soederberg, Magnus

    2008-01-01

    The Swedish Electricity Act states that electricity distribution must comply with both price and quality requirements. In order to maintain efficient regulation it is necessary, firstly, to define quality attributes and, secondly, to determine customers' priorities concerning price and quality attributes. If distribution utilities gain an understanding of customer preferences and of the incentives for reporting them, the regulator can save a lot of time by surveying the utilities rather than their customers. This study applies a choice modelling methodology in which utilities and industrial customers are asked to evaluate the same twelve choice situations in which price and four specific quality attributes are varied. The preferences expressed by the utilities, estimated by a random parameter logit, correspond quite well with the preferences expressed by the largest industrial customers. The preferences expressed by the utilities are reasonably homogeneous across forms of association (private limited, public and trading partnership). If the regulator acts according to the preferences expressed by the utilities, smaller industrial customers will have to pay for quality they have not asked for. (author)

  5. Similarities and differences of serotonin and its precursors in their interactions with model membranes studied by molecular dynamics simulation

    Science.gov (United States)

    Wood, Irene; Martini, M. Florencia; Pickholz, Mónica

    2013-08-01

    In this work, we report a molecular dynamics (MD) simulation study of relevant biological molecules, namely serotonin (neutral and protonated) and its precursors, tryptophan and 5-hydroxy-tryptophan, in a fully hydrated bilayer of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidyl-choline (POPC). The simulations were carried out in the fluid lamellar phase of POPC at constant pressure and temperature conditions. Two guest molecules of each type were initially placed in the water phase. We have analyzed the main localization, preferential orientation and specific interactions of the guest molecules within the bilayer. During the simulation run, the four molecules were preferentially found at the water-lipid interphase. We found that the interactions that stabilize the systems are essentially hydrogen bonds, salt bridges and cation-π interactions. None of the guest molecules has access to the hydrophobic region of the bilayer. Besides, the zwitterionic molecules have access to the water phase, while protonated serotonin is anchored in the interphase. Even taking into account that these simulations were done using a model membrane, our results suggest that the studied molecules could not cross the blood-brain barrier by diffusion. These results are in good agreement with works showing that serotonin and Trp do not cross the BBB by simple diffusion.

  6. Modeling amorphization of tetrahedral structures under local approaches

    International Nuclear Information System (INIS)

    Jesurum, C.E.; Pulim, V.; Berger, B.; Hobbs, L.W.

    1997-01-01

    Many crystalline ceramics can be topologically disordered (amorphized) by disordering radiation events involving high-energy collision cascades or (in some cases) successive single-atom displacements. The authors are interested in both the potential for disorder and the possible aperiodic structures adopted following the disordering event. The potential for disordering is related to connectivity, and among those structures of interest are tetrahedral networks (such as SiO2, SiC and Si3N4) comprising corner-shared tetrahedral units whose connectivities are easily evaluated. In order to study the response of these networks to radiation, the authors have chosen to model their assembly according to the (simple) local rules that each corner obeys in connecting to another tetrahedron; in this way they easily erect large computer models of any crystalline polymorphic form. Amorphous structures can be similarly grown by application of altered rules. They have adopted a simple model of irradiation in which all bonds in the neighborhood of a designated tetrahedron are destroyed, and they reform the bonds in this region according to a set of (possibly different) local rules appropriate to the environmental conditions. When a tetrahedron approaches the boundary of this neighborhood, it undergoes an optimization step in which a spring is inserted between two corners of compatible tetrahedra when they are within a certain distance of one another; component forces are then applied that act to minimize the distance between these corners and minimize the deviation from the rules. The resulting structure is then analyzed for the complete adjacency matrix, irreducible ring statistics, and bond angle distributions

  7. An interdisciplinary approach to modeling tritium transfer into the environment

    International Nuclear Information System (INIS)

    Galeriu, D; Melintescu, A.

    2005-01-01

    equations between soil and plants. Considering mammals, we recently showed that the simplistic models currently applied did not accurately match experimental data from rats and sheep. Specific data for many farm and wild animals are scarce. In this paper, we are advancing a different approach based on energy metabolism, which can be parameterized predominantly based on published metabolic data for mature mammals. We started with the observation that the measured dynamics of 14C and non-exchangeable organically bound tritium (OBT) were, not surprisingly, similar. We therefore introduced a metabolic definition for the 14C and OBT loss rate (assumed to be the same) from the whole body and specific organs. We assumed that this was given by the specific metabolic rate of the whole body or organ, divided by the enthalpy of combustion of a kilogram of fresh matter. Since basal metabolism data were taken from the literature, they were modified for energy expenditure above basal need. To keep the model simple, organs were grouped according to their metabolic activity or importance in the food chain. Pools considered were viscera (high metabolic rate organs except the brain), muscle, adipose tissue, blood, and other (all other tissues). We disregarded any detail on substrate utilization from the dietary intake and condensed the postprandial respiration in a single rate. We included considerations of net maintenance and growth needs. For tritium, the transfer between body water and organic compartments was modeled using knowledge of basic metabolism and published relations. We considered the potential influence of rumen digestion and bacterial protein in ruminants. As for model application, we focused on laboratory and farm animals, where some experimental data were available. The model performed well for rat muscle, viscera and adipose tissue, but due to the simplicity of model structure and assumptions, blood and urine data were only satisfactorily reproduced. Whilst for sheep fed

  8. Perinatal administration of aromatase inhibitors in rodents as animal models of human male homosexuality: similarities and differences.

    Science.gov (United States)

    Olvera-Hernández, Sandra; Fernández-Guasti, Alonso

    2015-01-01

    In this chapter we briefly review the evidence supporting the existence of biological influences on sexual orientation. We focus on basic research studies that have affected estrogen synthesis during the critical periods of brain sexual differentiation in male rat offspring with the use of aromatase inhibitors, such as 1,4,6-androstatriene-3,17-dione (ATD) and letrozole. The results after prenatal and/or postnatal treatment with ATD reveal that these animals, when adult, show female sexual responses, such as lordosis or proceptive behaviors, but retain their ability to display male sexual activity with a receptive female. Interestingly, the preference and sexual behavior of these rats vary depending upon the circadian rhythm. Recently, we have established that treatment with low doses of letrozole during the second half of pregnancy produces male rat offspring that, when adult, spend more time in the company of a sexually active male than with a receptive female in a preference test. In addition, they display female sexual behavior when forced to interact with a sexually experienced male and some typical male sexual behavior when faced with a sexually receptive female. Interestingly, these males displayed both sexual behavior patterns spontaneously, i.e., in the absence of exogenous steroid hormone treatment. Most of these features correspond with those found in human male homosexuals; however, the "bisexual" behavior shown by the letrozole-treated rats may be related to a particular human population. All these data, taken together, allow us to propose prenatal letrozole treatment as a suitable animal model to study human male homosexuality and reinforce the hypothesis that human sexual orientation is underpinned by changes in the endocrine milieu during early development.

  9. Agribusiness model approach to territorial food development

    Directory of Open Access Journals (Sweden)

    Murcia Hector Horacio

    2011-04-01

    Several research efforts have been coordinated by the academic program of Agricultural Business Management at the University De La Salle (Bogotá D.C.), aimed at the design and implementation of a sustainable agribusiness model applied to food development, with territorial projection. Rural development is considered as a process that aims to improve the current capacity and potential of the inhabitants of the sector, which refers not only to production levels and productivity of agricultural items. It takes into account the United Nations guidelines of the “Millennium Development Goals” and considers the concept of sustainable food and agriculture development, including food security and nutrition in an integrated interdisciplinary context, with a holistic and systemic dimension. The analysis is specified by a model with an emphasis on sustainable agribusiness production chains related to agricultural food items in a specific region. This model was correlated with farm (technical objectives), family (social purposes) and community (collective orientations) projects. Within this dimension, food development concepts and methodologies of Participatory Action Research (PAR) are considered. Finally, it addresses the need to link the results to low-income communities, within the concepts of the “new rurality”.

  10. Engineering approach to modeling of piled systems

    International Nuclear Information System (INIS)

    Coombs, R.F.; Silva, M.A.G. da

    1980-01-01

    Available methods of analysis of piled systems subjected to dynamic excitation invade areas of mathematics usually beyond the reach of a practising engineer. A simple technique that avoids that conflict is proposed, at least for preliminary studies, and its application, compared with other methods, is shown to be satisfactory. A corrective factor for parameters currently used to represent transmitting boundaries is derived for a finite strip that models an infinite layer. The influence of internal damping on the dynamic stiffness of the layer and on radiation damping is analysed. (Author) [pt

  11. Jackiw-Pi model: A superfield approach

    Science.gov (United States)

    Gupta, Saurabh

    2014-12-01

    We derive the off-shell nilpotent and absolutely anticommuting Becchi-Rouet-Stora-Tyutin (BRST) as well as anti-BRST transformations s_{(a)b} corresponding to the Yang-Mills gauge transformations of the 3D Jackiw-Pi model by exploiting the "augmented" superfield formalism. We also show that the Curci-Ferrari restriction, which is a hallmark of any non-Abelian 1-form gauge theory, emerges naturally within this formalism and plays an instrumental role in providing the proof of absolute anticommutativity of s_{(a)b}.

  12. Applied Regression Modeling A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a

  13. From scores to face templates: a model-based approach.

    Science.gov (United States)

    Mohanty, Pranab; Sarkar, Sudeep; Kasturi, Rangachar

    2007-12-01

    Regeneration of templates from match scores has security and privacy implications related to any biometric authentication system. We propose a novel paradigm to reconstruct face templates from match scores using a linear approach. It proceeds by first modeling the behavior of the given face recognition algorithm by an affine transformation. The goal of the modeling is to approximate the distances computed by a face recognition algorithm between two faces by distances between points, representing these faces, in an affine space. Given this space, templates from an independent image set (break-in) are matched only once with the enrolled template of the targeted subject and match scores are recorded. These scores are then used to embed the targeted subject in the approximating affine (non-orthogonal) space. Given the coordinates of the targeted subject in the affine space, the original template of the targeted subject is reconstructed using the inverse of the affine transformation. We demonstrate our ideas using three, fundamentally different, face recognition algorithms: Principal Component Analysis (PCA) with Mahalanobis cosine distance measure, Bayesian intra-extrapersonal classifier (BIC), and a feature-based commercial algorithm. To demonstrate the independence of the break-in set with the gallery set, we select face templates from two different databases: Face Recognition Grand Challenge (FRGC) and Facial Recognition Technology (FERET) Database (FERET). With an operational point set at 1 percent False Acceptance Rate (FAR) and 99 percent True Acceptance Rate (TAR) for 1,196 enrollments (FERET gallery), we show that at most 600 attempts (score computations) are required to achieve a 73 percent chance of breaking in as a randomly chosen target subject for the commercial face recognition system. With similar operational set up, we achieve a 72 percent and 100 percent chance of breaking in for the Bayesian and PCA based face recognition systems, respectively. With
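
    The central embedding step sketched in this abstract, placing the targeted subject in an approximating space using only its match scores against the break-in set, reduces under a Euclidean simplification to a multilateration problem. The sketch below uses synthetic data and plain Euclidean distances rather than the paper's affine modelling or any of the named face recognizers; it recovers an unknown template's coordinates from its distances to known break-in embeddings with a linearized least-squares solve.

        import numpy as np

        def locate_from_distances(anchors, dists):
            """Recover coordinates y from distances to known anchor points by
            subtracting pairs of squared-distance equations (linear least squares)."""
            anchors = np.asarray(anchors, dtype=float)
            d2 = np.asarray(dists, dtype=float) ** 2
            x0, d0 = anchors[0], d2[0]
            # ||y - x_i||^2 - ||y - x_0||^2 = d_i^2 - d_0^2  =>  linear in y
            A = 2.0 * (x0 - anchors[1:])
            b = d2[1:] - d0 + (x0 @ x0) - np.einsum("ij,ij->i", anchors[1:], anchors[1:])
            y, *_ = np.linalg.lstsq(A, b, rcond=None)
            return y

        rng = np.random.default_rng(3)
        break_in = rng.normal(size=(20, 5))                  # embedded break-in templates
        target = rng.normal(size=5)                          # unknown target template
        scores = np.linalg.norm(break_in - target, axis=1)   # observed match distances
        print(np.allclose(locate_from_distances(break_in, scores), target, atol=1e-6))

    In the attack described above the distances come from a recognizer's match scores, and a final step maps the recovered coordinates back through the inverse of the affine transformation to obtain a face template.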

  14. Implicit moral evaluations: A multinomial modeling approach.

    Science.gov (United States)

    Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael

    2017-01-01

    Implicit moral evaluations-i.e., immediate, unintentional assessments of the wrongness of actions or persons-play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure-the Moral Categorization Task-and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Modeling Saturn's Inner Plasmasphere: Cassini's Closest Approach

    Science.gov (United States)

    Moore, L.; Mendillo, M.

    2005-05-01

    Ion densities from the three-dimensional Saturn-Thermosphere-Ionosphere-Model (STIM, Moore et al., 2004) are extended above the plasma exobase using the formalism of Pierrard and Lemaire (1996, 1998), which evaluates the balance of gravitational, centrifugal and electric forces on the plasma. The parameter space of low-energy ionospheric contributions to Saturn's plasmasphere is explored by comparing results that span the observed extremes of plasma temperature, 650 K to 1700 K, and a range of velocity distributions, Lorentzian (or Kappa) to Maxwellian. Calculations are made for plasma densities along the path of the Cassini spacecraft's orbital insertion on 1 July 2004. These calculations neglect any ring or satellite sources of plasma, which are most likely minor contributors at 1.3 Saturn radii. Modeled densities will be compared with Cassini measurements as they become available. Moore, L.E., M. Mendillo, I.C.F. Mueller-Wodarg, and D.L. Murr, Icarus, 172, 503-520, 2004. Pierrard, V. and J. Lemaire, J. Geophys. Res., 101, 7923-7934, 1996. Pierrard, V. and J. Lemaire, J. Geophys. Res., 103, 4117, 1998.

  16. Keyring models: An approach to steerability

    Science.gov (United States)

    Miller, Carl A.; Colbeck, Roger; Shi, Yaoyun

    2018-02-01

    If a measurement is made on one half of a bipartite system, then, conditioned on the outcome, the other half has a new reduced state. If these reduced states defy classical explanation—that is, if shared randomness cannot produce these reduced states for all possible measurements—the bipartite state is said to be steerable. Determining which states are steerable is a challenging problem even for low dimensions. In the case of two-qubit systems, a criterion is known for T-states (that is, those with maximally mixed marginals) under projective measurements. In the current work, we introduce the concept of keyring models—a special class of local hidden state models. When the measurements made correspond to real projectors, these allow us to study steerability beyond T-states. Using keyring models, we completely solve the steering problem for real projective measurements when the state arises from mixing a pure two-qubit state with uniform noise. We also give a partial solution in the case when the uniform noise is replaced by independent depolarizing channels.

  17. Mathematical Modeling in Mathematics Education: Basic Concepts and Approaches

    Science.gov (United States)

    Erbas, Ayhan Kürsat; Kertil, Mahmut; Çetinkaya, Bülent; Çakiroglu, Erdinç; Alacaci, Cengiz; Bas, Sinem

    2014-01-01

    Mathematical modeling and its role in mathematics education have been receiving increasing attention in Turkey, as in many other countries. The growing body of literature on this topic reveals a variety of approaches to mathematical modeling and related concepts, along with differing perspectives on the use of mathematical modeling in teaching and…

  18. A BEHAVIORAL-APPROACH TO LINEAR EXACT MODELING

    NARCIS (Netherlands)

    ANTOULAS, AC; WILLEMS, JC

    1993-01-01

    The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both

  19. A modular approach to numerical human body modeling

    NARCIS (Netherlands)

    Forbes, P.A.; Griotto, G.; Rooij, L. van

    2007-01-01

    The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body

  20. A Bayesian approach for quantification of model uncertainty

    International Nuclear Information System (INIS)

    Park, Inseok; Amarchinta, Hemanth K.; Grandhi, Ramana V.

    2010-01-01

    In most engineering problems, more than one model can be created to represent an engineering system's behavior. Uncertainty is inevitably involved in selecting the best model from among the models that are possible. Uncertainty in model selection cannot be ignored, especially when the differences between the predictions of competing models are significant. In this research, a methodology is proposed to quantify model uncertainty using measured differences between experimental data and model outcomes under a Bayesian statistical framework. The adjustment factor approach is used to propagate model uncertainty into prediction of a system response. A nonlinear vibration system is used to demonstrate the processes for implementing the adjustment factor approach. Finally, the methodology is applied on the engineering benefits of a laser peening process, and a confidence band for residual stresses is established to indicate the reliability of model prediction.
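
    As a rough illustration of the adjustment factor idea summarized above, the sketch below combines the predictions of several competing models using their posterior model probabilities. The numerical predictions, the probabilities and the additive form of the adjustment are illustrative assumptions, not values or choices taken from the paper.

```python
import numpy as np

# Hypothetical predictions of a system response (e.g. a residual stress value)
# from three competing models, together with posterior model probabilities
# obtained from a Bayesian comparison with experimental data (placeholders).
predictions = np.array([412.0, 398.0, 430.0])
posterior_prob = np.array([0.55, 0.30, 0.15])   # must sum to 1

best = np.argmax(posterior_prob)                 # index of the "best" model

# Additive adjustment factor: shift the best model's prediction by the
# probability-weighted deviations of all competing models.
adjustment = np.sum(posterior_prob * (predictions - predictions[best]))
adjusted_prediction = predictions[best] + adjustment

# Spread that propagates model uncertainty into the predicted response.
variance = np.sum(posterior_prob * (predictions - adjusted_prediction) ** 2)
print(f"adjusted prediction: {adjusted_prediction:.1f} +/- {np.sqrt(variance):.1f}")
```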

  1. A Networks Approach to Modeling Enzymatic Reactions.

    Science.gov (United States)

    Imhof, P

    2016-01-01

    Modeling enzymatic reactions is a demanding task due to the complexity of the system, the many degrees of freedom involved and the complex, chemical, and conformational transitions associated with the reaction. Consequently, enzymatic reactions are not determined by precisely one reaction pathway. Hence, it is beneficial to obtain a comprehensive picture of possible reaction paths and competing mechanisms. By combining individually generated intermediate states and chemical transition steps a network of such pathways can be constructed. Transition networks are a discretized representation of a potential energy landscape consisting of a multitude of reaction pathways connecting the end states of the reaction. The graph structure of the network allows an easy identification of the energetically most favorable pathways as well as a number of alternative routes. © 2016 Elsevier Inc. All rights reserved.

  2. Carbonate rock depositional models: A microfacies approach

    Energy Technology Data Exchange (ETDEWEB)

    Carozzi, A.V.

    1988-01-01

    Carbonate rocks contain more than 50% by weight carbonate minerals such as calcite, dolomite, and siderite. Understanding how these rocks form can lead to more efficient methods of petroleum exploration. Microfacies analysis techniques can be used as a method of predicting models of sedimentation for carbonate rocks. Microfacies in carbonate rocks can be seen clearly only in thin sections under a microscope. Thin-section analysis of carbonate rocks is a tool that can be used to understand depositional environments, diagenetic evolution of carbonate rocks, and the formation of porosity and permeability in carbonate rocks. Microfacies analysis techniques are applied to understanding the origin and formation of carbonate ramps, carbonate platforms, and carbonate slopes and basins. This book will be of interest to students and professionals concerned with the disciplines of sedimentary petrology, sedimentology, petroleum geology, and paleontology.

  3. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches to their development and validation. A qualitative review of the aims, methods and main outcomes of nineteen published articles that developed risk prediction models in numerous fields was carried out. The paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From this review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach; however, only limited published literature discusses which approach is more accurate for risk prediction model development.
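
    The comparison described above can be prototyped in a few lines. The sketch below contrasts a logistic regression (a statistical approach) with a small multilayer perceptron (an ANN approach) on synthetic data, using cross-validated AUC as the criterion; the data set, network size and validation metric are assumptions for illustration only, not choices from the reviewed studies.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a risk data set: 10 predictors, one binary outcome.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           random_state=0)

models = {
    "logistic regression (statistical)":
        make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "multilayer perceptron (ANN)":
        make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0)),
}

# Cross-validated discrimination (AUC) as one possible validation criterion.
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```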

  4. A dual model approach to ground water recovery trench design

    International Nuclear Information System (INIS)

    Clodfelter, C.L.; Crouch, M.S.

    1992-01-01

    The design of trenches for contaminated ground water recovery must consider several variables. This paper presents a dual-model approach for effectively recovering contaminated ground water migrating toward a trench by advection. The approach involves an analytical model to determine the vertical influence of the trench and a numerical flow model to determine the capture zone within the trench and the surrounding aquifer. The analytical model is utilized by varying trench dimensions and head values to design a trench which meets the remediation criteria. The numerical flow model is utilized to select the type of backfill and location of sumps within the trench. The dual-model approach can be used to design a recovery trench which effectively captures advective migration of contaminants in the vertical and horizontal planes

  5. Estimating serial correlation and self-similarity in financial time series-A diversification approach with applications to high frequency data

    Science.gov (United States)

    Gerlich, Nikolas; Rostek, Stefan

    2015-09-01

    We derive a heuristic method to estimate the degree of self-similarity and serial correlation in financial time series. In particular, we advocate the use of a tailor-made selection of different estimation techniques that are used in various fields of time series analysis but have so far not consistently found their way into the finance literature. Following the idea of portfolio diversification, we show that considerable improvements with respect to robustness and unbiasedness can be achieved by using a basket of estimation methods. With this methodological toolbox at hand, we investigate real market data to show that noticeable deviations from the assumptions of constant self-similarity and absence of serial correlation occur during certain periods. On the one hand, this may shed new light on seemingly ambiguous scientific findings concerning serial correlation of financial time series. On the other hand, a proven time-changing degree of self-similarity may help to explain high-volatility clusters of stock price indices.
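
    A minimal sketch of the "basket of estimators" idea: two classical self-similarity estimators (aggregated variance and rescaled range) are applied to a series and their results averaged. The estimator choices, block sizes and the synthetic input series are assumptions; the paper's actual basket and tuning are not reproduced here.

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes):
    """Slope of log Var(block means) vs log block size equals 2H - 2."""
    x = np.asarray(x, float)
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var()))
    return 1.0 + np.polyfit(log_m, log_v, 1)[0] / 2.0

def hurst_rescaled_range(x, window_sizes):
    """Slope of log(R/S) vs log window size estimates H directly."""
    x = np.asarray(x, float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())
            if w.std() > 0:
                rs.append((dev.max() - dev.min()) / w.std())
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs)))
    return np.polyfit(log_n, log_rs, 1)[0]

# "Diversified" estimate: average over a basket of estimators (here only two).
returns = np.random.default_rng(0).standard_normal(4096)   # placeholder log-returns
sizes = [8, 16, 32, 64, 128, 256]
basket = [hurst_aggregated_variance(returns, sizes),
          hurst_rescaled_range(returns, sizes)]
print("basket estimate of H:", round(float(np.mean(basket)), 3))  # near 0.5 for i.i.d. noise
```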

  6. Virtuous organization: A structural equation modeling approach

    Directory of Open Access Journals (Sweden)

    Majid Zamahani

    2013-02-01

    Full Text Available For years, the idea of virtue was unfavorable among researchers and virtues were traditionally considered as culture-specific, relativistic and they were supposed to be associated with social conservatism, religious or moral dogmatism, and scientific irrelevance. Virtue and virtuousness have been recently considered seriously among organizational researchers. The proposed study of this paper examines the relationships between leadership, organizational culture, human resource, structure and processes, care for community and virtuous organization. Structural equation modeling is employed to investigate the effects of each variable on other components. The data used in this study consists of questionnaire responses from employees in Payam e Noor University in Yazd province. A total of 250 questionnaires were sent out and a total of 211 valid responses were received. Our results have revealed that all the five variables have positive and significant impacts on virtuous organization. Among the five variables, organizational culture has the most direct impact (0.80) and human resource has the most total impact (0.844) on virtuous organization.

  7. A systemic approach for modeling soil functions

    Science.gov (United States)

    Vogel, Hans-Jörg; Bartke, Stephan; Daedlow, Katrin; Helming, Katharina; Kögel-Knabner, Ingrid; Lang, Birgit; Rabot, Eva; Russell, David; Stößel, Bastian; Weller, Ulrich; Wiesmeier, Martin; Wollschläger, Ute

    2018-03-01

    The central importance of soil for the functioning of terrestrial systems is increasingly recognized. Critically relevant for water quality, climate control, nutrient cycling and biodiversity, soil provides more functions than just the basis for agricultural production. Nowadays, soil is increasingly under pressure as a limited resource for the production of food, energy and raw materials. This has led to an increasing demand for concepts assessing soil functions so that they can be adequately considered in decision-making aimed at sustainable soil management. The various soil science disciplines have progressively developed highly sophisticated methods to explore the multitude of physical, chemical and biological processes in soil. It is not obvious, however, how the steadily improving insight into soil processes may contribute to the evaluation of soil functions. Here, we present a new systemic modeling framework that allows for a consistent coupling of reductionist yet observable indicators for soil functions with detailed process understanding. It is based on the mechanistic relationships between soil functional attributes, each explained by a network of interacting processes as derived from scientific evidence. The non-linear character of these interactions produces stability and resilience of soil with respect to functional characteristics. We anticipate that this new conceptual framework will integrate the various soil science disciplines and help identify important future research questions at the interface between disciplines. It allows the overwhelming complexity of soil systems to be adequately coped with and paves the way for steadily improving our capability to assess soil functions based on scientific understanding.

  8. Modeling of phase equilibria with CPA using the homomorph approach

    DEFF Research Database (Denmark)

    Breil, Martin Peter; Tsivintzelis, Ioannis; Kontogeorgis, Georgios

    2011-01-01

    For association models, like CPA and SAFT, a classical approach is often used for estimating pure-compound and mixture parameters. According to this approach, the pure-compound parameters are estimated from vapor pressure and liquid density data. Then, the binary interaction parameters, kij, are ...

  9. Modular Modelling and Simulation Approach - Applied to Refrigeration Systems

    DEFF Research Database (Denmark)

    Sørensen, Kresten Kjær; Stoustrup, Jakob

    2008-01-01

    This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...

  10. A Constructive Neural-Network Approach to Modeling Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  11. The Intersystem Model of Psychotherapy: An Integrated Systems Treatment Approach

    Science.gov (United States)

    Weeks, Gerald R.; Cross, Chad L.

    2004-01-01

    This article introduces the intersystem model of psychotherapy and discusses its utility as a truly integrative and comprehensive approach. The foundation of this conceptually complex approach comes from dialectic metatheory; hence, its derivation requires an understanding of both foundational and integrational constructs. The article provides a…

  12. Bystander Approaches: Empowering Students to Model Ethical Sexual Behavior

    Science.gov (United States)

    Lynch, Annette; Fleming, Wm. Michael

    2005-01-01

    Sexual violence on college campuses is well documented. Prevention education has emerged as an alternative to victim-- and perpetrator--oriented approaches used in the past. One sexual violence prevention education approach focuses on educating and empowering the bystander to become a point of ethical intervention. In this model, bystanders to…

  13. Modelling road accidents: An approach using structural time series

    Science.gov (United States)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
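
    A brief sketch of how such a structural time series specification can be selected by AIC, here using the UnobservedComponents class from statsmodels on a synthetic monthly series. The candidate specifications and the data are placeholders, not the Malaysian accident series used in the paper.

```python
import numpy as np
import statsmodels.api as sm

# Placeholder monthly series standing in for the accident counts used in the paper.
rng = np.random.default_rng(1)
y = 1200 + 50 * np.sin(2 * np.pi * np.arange(144) / 12) + rng.normal(0, 30, 144)

# Candidate structural specifications; the preferred one is chosen by AIC,
# mirroring the stepwise selection described in the abstract.
specs = {
    "local level": dict(level="local level"),
    "local level + seasonal": dict(level="local level", seasonal=12),
    "local linear trend + seasonal": dict(level="local linear trend", seasonal=12),
}
for name, spec in specs.items():
    res = sm.tsa.UnobservedComponents(y, **spec).fit(disp=False)
    print(f"{name}: AIC = {res.aic:.1f}")
```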

  14. Numerical approaches to expansion process modeling

    Directory of Open Access Journals (Sweden)

    G. V. Alekseev

    2017-01-01

    Full Text Available Forage production is currently undergoing a period of intensive renovation and introduction of the most advanced technologies and equipment. Methods such as barley toasting, grain extrusion, steaming and flattening of grain, boiling-bed explosion, infrared treatment of cereals and legumes followed by flattening, and one- or two-stage granulation of purified whole grain without humidification in matrix presses, followed by grinding of the granules, are used increasingly often. These methods require special apparatuses, machines and auxiliary equipment designed on the basis of different kinds of mathematical models. In roasting, simulation of the heat fields arising in the working chamber provides conditions under which a portion of the starch decomposes to monosaccharides, which makes the grain sweetish; however, owing to protein denaturation, the digestibility of the protein and the availability of amino acids decrease somewhat. Grain is roasted mainly for young animals, to teach them to eat feed at an early age, to stimulate the secretory activity of digestion and to promote development of the masticatory muscles. In addition, the high temperature is detrimental to bacterial contamination and various types of fungi, which largely prevents possible diseases of the gastrointestinal tract. This method has found wide application directly on farms. It is also applied to legumes used in animal feeding: peas, soy, lupine and lentils. These feeds are first ground and then cooked for one hour or steamed for 30–40 minutes in the feed mill. Such processing inactivates the anti-nutrients that reduce the effectiveness of their use. After processing, legumes are used as protein supplements in an amount of 25–30% of the total nutritional value of the diet. However, only grain of good quality should be cooked or steamed; poor-quality grain that has been stored for a long time and damaged by pathogenic microflora is subject to

  15. Modelling and Generating Ajax Applications : A Model-Driven Approach

    NARCIS (Netherlands)

    Gharavi, V.; Mesbah, A.; Van Deursen, A.

    2008-01-01

    Preprint of paper published in: IWWOST 2008 - 7th International Workshop on Web-Oriented Software Technologies, 14-15 July 2008 AJAX is a promising and rapidly evolving approach for building highly interactive web applications. In AJAX, user interface components and the event-based interaction

  16. Understanding Gulf War Illness: An Integrative Modeling Approach

    Science.gov (United States)

    2017-10-01

    using a novel mathematical model. The computational biology approach will enable the consortium to quickly identify targets of dysfunction, to develop computer/mathematical paradigms for the evaluation of treatment strategies, and to develop pilot clinical trials on the basis of animal studies with the goal of testing chemical treatments. The immune and autonomic biomarkers will be tested using a computational modeling approach allowing for a

  17. A Structural Modeling Approach to a Multilevel Random Coefficients Model.

    Science.gov (United States)

    Rovine, Michael J.; Molenaar, Peter C. M.

    2000-01-01

    Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)
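
    The paper estimates the random coefficients model through covariance structure modeling; as a point of reference only, the sketch below fits the same kind of random-intercept and random-slope model with a standard mixed-effects routine on simulated data. The data-generating values are arbitrary assumptions, and the mixed-model fit is a substitute for, not a reproduction of, the SEM estimation described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated two-level data: 8 observations nested in each of 50 groups,
# generated with a random intercept and a random slope.
rng = np.random.default_rng(2)
groups = np.repeat(np.arange(50), 8)
x = rng.normal(size=groups.size)
b0 = rng.normal(0.0, 1.0, 50)[groups]     # group-specific intercept deviations
b1 = rng.normal(0.0, 0.3, 50)[groups]     # group-specific slope deviations
y = 2.0 + b0 + (1.0 + b1) * x + rng.normal(0.0, 0.5, groups.size)
df = pd.DataFrame({"y": y, "x": x, "g": groups})

# Random coefficients model: fixed effects for intercept and slope,
# random effects for both, grouped by "g".
result = smf.mixedlm("y ~ x", df, groups="g", re_formula="~x").fit()
print(result.summary())
```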

  18. Data Analysis A Model Comparison Approach, Second Edition

    CERN Document Server

    Judd, Charles M; Ryan, Carey S

    2008-01-01

    This completely rewritten classic text features many new examples, insights and topics including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques. T

  19. Fold-recognition and comparative modeling of human α2,3-sialyltransferases reveal their sequence and structural similarities to CstII from Campylobacter jejuni

    Directory of Open Access Journals (Sweden)

    Balaji Petety V

    2006-04-01

    Full Text Available Abstract Background The 3-D structure of none of the eukaryotic sialyltransferases (SiaTs) has been determined so far. Sequence alignment algorithms such as BLAST and PSI-BLAST could not detect a homolog of these enzymes from the protein databank. SiaTs, thus, belong to the hard/medium target category in the CASP experiments. The objective of the current work is to model the 3-D structures of human SiaTs which transfer the sialic acid in α2,3-linkage viz., ST3Gal I, II, III, IV, V, and VI, using fold-recognition and comparative modeling methods. The pair-wise sequence similarity among these six enzymes ranges from 41 to 63%. Results Unlike the sequence similarity servers, fold-recognition servers identified CstII, an α2,3/8 dual-activity SiaT from Campylobacter jejuni, as the homolog of all the six ST3Gals; the level of sequence similarity between CstII and ST3Gals is only 15–20% and the similarity is restricted to well-characterized motif regions of ST3Gals. Deriving template-target sequence alignments for the entire ST3Gal sequence was not straightforward: the fold-recognition servers could not find a template for the region preceding the L-motif and that between the L- and S-motifs. Multiple structural templates were identified to model these regions and template identification-modeling-evaluation had to be performed iteratively to choose the most appropriate templates. The modeled structures have acceptable stereochemical properties and are also able to provide qualitative rationalizations for some of the site-directed mutagenesis results reported in literature. Apart from the predicted models, an unexpected but valuable finding from this study is the sequence and structural relatedness of family GT42 and family GT29 SiaTs. Conclusion The modeled 3-D structures can be used for docking and other modeling studies and for the rational identification of residues to be mutated to impart desired properties such as altered stability, substrate

  20. A novel approach to modeling and diagnosing the cardiovascular system

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States); Allen, P.A. [Life Link, Richland, WA (United States)

    1995-07-01

    A novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.

  1. Synthesis of industrial applications of local approach to fracture models

    International Nuclear Information System (INIS)

    Eripret, C.

    1993-03-01

    This report gathers different applications of local approach to fracture models to various industrial configurations, such as nuclear pressure vessel steel, cast duplex stainless steels, or primary circuit welds such as bimetallic welds. As soon as models are developed on the basis of microstructural observations, damage mechanisms analyses, and the fracture process, the local approach to fracture proves able to solve problems where classical fracture mechanics concepts fail. Therefore, the local approach appears to be a powerful tool, which complements the standard fracture criteria used in the nuclear industry by exhibiting where and why those classical concepts become invalid. (author). 1 tab., 18 figs., 25 refs

  2. Mathematical models for therapeutic approaches to control HIV disease transmission

    CERN Document Server

    Roy, Priti Kumar

    2015-01-01

    The book discusses different therapeutic approaches based on different mathematical models to control the HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate the deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to optimal control strategy with perfect drug adherence and also tries to seek viewpoints of the same issue from different angles with various mathematical models to computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models on HIV/AIDS. It also guides research scientists, working in the periphery of mathematical modeling, and helps them to explore a hypothetical method by examining its consequences in the form of a mathematical modelling and making some scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...

  3. A model-driven approach to information security compliance

    Science.gov (United States)

    Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena

    2017-06-01

    The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be holistically approached, combining assets that support corporate systems, in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems, conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain level model (computation independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding in the model mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.

  4. A Model-Driven Approach for Telecommunications Network Services Definition

    Science.gov (United States)

    Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.

    The present-day telecommunications market imposes a short concept-to-market time on service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool, with support for collaborative work and for checking properties on models. We started by defining a prototype of the Meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific for service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstractions Layers (NALs). In the future, we will investigate approaches to ensure the support for collaborative work and for checking properties on models.

  5. An approach for activity-based DEVS model specification

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2016-01-01

    Creation of DEVS models has been advanced through Model Driven Architecture and its frameworks. The overarching role of the frameworks has been to help develop model specifications in a disciplined fashion. Frameworks can provide intermediary layers between the higher level mathematical models...... and their corresponding software specifications from both structural and behavioral aspects. Unlike structural modeling, developing models to specify behavior of systems is known to be harder and more complex, particularly when operations with non-trivial control schemes are required. In this paper, we propose specifying...... activity-based behavior modeling of parallel DEVS atomic models. We consider UML activities and actions as fundamental units of behavior modeling, especially in the presence of recent advances in the UML 2.5 specifications. We describe in detail how to approach activity modeling with a set of elemental...

  6. Modelling diversity in building occupant behaviour: a novel statistical approach

    DEFF Research Database (Denmark)

    Haldi, Frédéric; Calì, Davide; Andersen, Rune Korsholm

    2016-01-01

    We propose an advanced modelling framework to predict the scope and effects of behavioural diversity regarding building occupant actions on window openings, shading devices and lighting. We develop a statistical approach based on generalised linear mixed models to account for the longitudinal nat...

  7. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study the biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
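
    A compact sketch of the two families of methods mentioned above: local sensitivity via central finite differences around nominal parameter values, and a simple global measure via standardized regression coefficients on Monte Carlo samples. The toy model and parameter ranges are assumptions chosen only to make the example self-contained.

```python
import numpy as np

def model(p):
    """Toy stand-in for a systems-biology model output (e.g. a steady-state response)."""
    k1, k2, k3 = p
    return k1 * k2 / (k3 + k2)

nominal = np.array([1.0, 2.0, 0.5])

# Local sensitivity: central finite differences around the nominal parameters,
# scaled to relative sensitivities so the coefficients are comparable.
eps = 1e-4
local = []
for i in range(nominal.size):
    up, down = nominal.copy(), nominal.copy()
    up[i] *= 1 + eps
    down[i] *= 1 - eps
    dydp = (model(up) - model(down)) / (2 * eps * nominal[i])
    local.append(dydp * nominal[i] / model(nominal))
print("local (relative) sensitivities:", np.round(local, 3))

# Global sensitivity: sample large parameter variations and use standardized
# regression coefficients as a simple variance-based importance measure.
rng = np.random.default_rng(3)
samples = nominal * rng.uniform(0.5, 2.0, size=(2000, nominal.size))
outputs = np.array([model(p) for p in samples])
Z = (samples - samples.mean(axis=0)) / samples.std(axis=0)
src, *_ = np.linalg.lstsq(Z, (outputs - outputs.mean()) / outputs.std(), rcond=None)
print("standardized regression coefficients:", np.round(src, 3))
```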

  8. A qualitative evaluation approach for energy system modelling frameworks

    DEFF Research Database (Denmark)

    Wiese, Frauke; Hilpert, Simon; Kaldemeyer, Cord

    2018-01-01

    properties define how useful it is in regard to the existing challenges. For energy system models, evaluation methods exist, but we argue that many decisions upon properties are rather made on the model generator or framework level. Thus, this paper presents a qualitative approach to evaluate frameworks...

  9. Modeling Alaska boreal forests with a controlled trend surface approach

    Science.gov (United States)

    Mo Zhou; Jingjing Liang

    2012-01-01

    An approach of Controlled Trend Surface was proposed to simultaneously take into consideration large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, which addressed large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...

  10. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of

  11. Towards modeling future energy infrastructures - the ELECTRA system engineering approach

    DEFF Research Database (Denmark)

    Uslar, Mathias; Heussen, Kai

    2016-01-01

    of the IEC 62559 use case template as well as needed changes to cope particularly with the aspects of controller conflicts and Greenfield technology modeling. From the original envisioned use of the standards, we show a possible transfer on how to properly deal with a Greenfield approach when modeling....

  12. A Model-Driven Approach to e-Course Management

    Science.gov (United States)

    Savic, Goran; Segedinac, Milan; Milenkovic, Dušica; Hrin, Tamara; Segedinac, Mirjana

    2018-01-01

    This paper presents research on using a model-driven approach to the development and management of electronic courses. We propose a course management system which stores a course model represented as distinct machine-readable components containing domain knowledge of different course aspects. Based on this formally defined platform-independent…

  13. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the ''philosophy'' behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  14. A study of multidimensional modeling approaches for data warehouse

    Science.gov (United States)

    Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani

    2016-08-01

    A data warehouse system is used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Thus, various approaches have been proposed to overcome the difficulty. This study surveys and compares the approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is a state-of-the-art account of multidimensional modeling design.

  15. Gray-box modelling approach for description of storage tunnel

    DEFF Research Database (Denmark)

    Harremoës, Poul; Carstensen, Jacob

    1999-01-01

    The dynamics of a storage tunnel is examined using a model based on on-line measured data and a combination of simple deterministic and black-box stochastic elements. This approach, called gray-box modeling, is a new promising methodology for giving an on-line state description of sewer systems...... of the water in the overflow structures. The capacity of a pump draining the storage tunnel is estimated for two different rain events, revealing that the pump was malfunctioning during the first rain event. The proposed modeling approach can be used in automated online surveillance and control and implemented...

  16. Meta-analysis a structural equation modeling approach

    CERN Document Server

    Cheung, Mike W-L

    2015-01-01

    Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo

  17. Learning the Task Management Space of an Aircraft Approach Model

    Science.gov (United States)

    Krall, Joseph; Menzies, Tim; Davies, Misty

    2014-01-01

    Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE) to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss future work necessary to enable uncertainty quantification.

  18. A novel approach of modeling continuous dark hydrogen fermentation.

    Science.gov (United States)

    Alexandropoulou, Maria; Antonopoulou, Georgia; Lyberatos, Gerasimos

    2018-02-01

    In this study a novel modeling approach for describing fermentative hydrogen production in a continuous stirred tank reactor (CSTR) was developed, using the Aquasim modeling platform. This model accounts for the key metabolic reactions taking place in a fermentative hydrogen producing reactor, using fixed stoichiometry but different reaction rates. Biomass yields are determined based on bioenergetics. The model is capable of describing very well the variation in the distribution of metabolic products for a wide range of hydraulic retention times (HRT). The modeling approach is demonstrated using the experimental data obtained from a CSTR, fed with food industry waste (FIW), operating at different HRTs. The kinetic parameters were estimated through fitting to the experimental results. Hydrogen and total biogas production rates were predicted very well by the model, validating the basic assumptions regarding the implicated stoichiometric biochemical reactions and their kinetic rates. Copyright © 2017 Elsevier Ltd. All rights reserved.
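
    The paper's Aquasim model with fixed stoichiometry for several metabolic reactions is not reproduced here; the sketch below only illustrates the generic structure of such a continuous-fermentation description, i.e. CSTR mass balances with Monod kinetics integrated over time at a given HRT. All parameter values are illustrative assumptions, not the estimates fitted to the food industry waste data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Chemostat-style mass balances for substrate, biomass and a gaseous product;
# all parameter values are illustrative, not those calibrated in the paper.
mu_max, Ks, Y_xs, Y_hs = 0.3, 2.0, 0.1, 0.05   # 1/h, g/L, gX/gS, product per gS
HRT = 24.0                                      # hydraulic retention time, h
D = 1.0 / HRT                                   # dilution rate, 1/h
S_in = 30.0                                     # feed substrate concentration, g/L

def cstr(t, state):
    S, X, H = state
    mu = mu_max * S / (Ks + S)                  # Monod growth kinetics
    dS = D * (S_in - S) - mu * X / Y_xs         # substrate balance
    dX = -D * X + mu * X                        # biomass balance
    dH = mu * X / Y_xs * Y_hs                   # cumulative product formation
    return [dS, dX, dH]

sol = solve_ivp(cstr, (0.0, 500.0), [S_in, 0.1, 0.0])
S, X, H = sol.y[:, -1]
print(f"approximate steady state at HRT = {HRT} h: S = {S:.2f} g/L, X = {X:.2f} g/L")
```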

  19. An integrated modeling approach to age invariant face recognition

    Science.gov (United States)

    Alvi, Fahad Bashir; Pears, Russel

    2015-03-01

    This research study proposes a novel method for face recognition based on anthropometric features that makes use of an integrated approach comprising a global and a personalized model. The system is aimed at situations where lighting, illumination, and pose variations cause problems in face recognition. A personalized model covers individual aging patterns while a global model captures general aging patterns in the database. We introduce a de-aging factor that de-ages each individual in the database test and training sets. We used the k nearest neighbor approach for building the personalized and global models, and regression analysis was applied to build the models. During the test phase, we resort to voting on different features. We used the FG-Net database for checking the results of our technique and achieved a 65 percent rank-1 identification rate.

  20. EVALUATION OF ASSEMBLY LINE BALANCING METHODS USING AN ANALYTICAL HIERARCHY PROCESS (AHP) AND TECHNIQUE FOR ORDER PREFERENCES BY SIMILARITY TO IDEAL SOLUTION (TOPSIS) BASED APPROACH

    Directory of Open Access Journals (Sweden)

    Pallavi Sharma

    2013-12-01

    Full Text Available Assembly lines are special flow-line production systems which are of great importance in the industrial production of high-quantity standardized commodities. In this article, the assembly line balancing problem is formulated as a multi-objective (multi-criteria) problem in which four easily quantifiable objectives (criteria) are defined: line efficiency, balance delay, smoothness index, and line time. The values of these objectives are calculated by five different heuristics. The focus of this paper is on prioritizing assembly line balancing (ALB) solution methods (heuristics) and selecting the best of them. For this purpose, a benchmark assembly line balancing problem is solved by five different heuristics and the values of the objectives (performance measures) of the line are determined. Finally, the prioritization of the heuristics is carried out using an AHP-TOPSIS based approach, illustrated by a worked example.
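
    A small sketch of the TOPSIS ranking step applied to a decision matrix of line-balancing heuristics evaluated on the four criteria named above. The matrix entries and the AHP-derived weights are made-up placeholders, not the values of the benchmark problem in the article.

```python
import numpy as np

# Decision matrix: rows = line-balancing heuristics, columns = criteria in the
# order (line efficiency, balance delay, smoothness index, line time).
# The numbers and weights below are illustrative placeholders.
D = np.array([
    [91.2,  8.8, 4.5, 550.0],
    [88.4, 11.6, 6.1, 540.0],
    [93.0,  7.0, 3.9, 565.0],
    [90.1,  9.9, 5.2, 545.0],
    [89.5, 10.5, 5.8, 538.0],
])
weights = np.array([0.40, 0.25, 0.20, 0.15])      # e.g. from an AHP pairwise comparison
benefit = np.array([True, False, False, False])   # efficiency is maximised, the rest minimised

# 1. Vector-normalise and weight the decision matrix.
V = weights * D / np.linalg.norm(D, axis=0)
# 2. Ideal and anti-ideal solutions per criterion.
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
# 3. Closeness coefficient: distance to the anti-ideal over total distance.
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)

for rank, idx in enumerate(np.argsort(-closeness), start=1):
    print(f"rank {rank}: heuristic {idx + 1} (closeness {closeness[idx]:.3f})")
```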

  1. Spatial pattern evaluation of a calibrated national hydrological model - a remote-sensing-based diagnostic approach

    Science.gov (United States)

    Mendiguren, Gorka; Koch, Julian; Stisen, Simon

    2017-11-01

    Distributed hydrological models are traditionally evaluated against discharge stations, emphasizing the temporal and neglecting the spatial component of a model. The present study widens the traditional paradigm by highlighting spatial patterns of evapotranspiration (ET), a key variable at the land-atmosphere interface, obtained from two different approaches at the national scale of Denmark. The first approach is based on a national water resources model (DK-model), using the MIKE-SHE model code, and the second approach utilizes a two-source energy balance model (TSEB) driven mainly by satellite remote sensing data. Ideally, the hydrological model simulation and remote-sensing-based approach should present similar spatial patterns and driving mechanisms of ET. However, the spatial comparison showed that the differences are significant and indicate insufficient spatial pattern performance of the hydrological model. The differences in spatial patterns can partly be explained by the fact that the hydrological model is configured to run in six domains that are calibrated independently from each other, as it is often the case for large-scale multi-basin calibrations. Furthermore, the model incorporates predefined temporal dynamics of leaf area index (LAI), root depth (RD) and crop coefficient (Kc) for each land cover type. This zonal approach of model parameterization ignores the spatiotemporal complexity of the natural system. To overcome this limitation, this study features a modified version of the DK-model in which LAI, RD and Kc are empirically derived using remote sensing data and detailed soil property maps in order to generate a higher degree of spatiotemporal variability and spatial consistency between the six domains. The effects of these changes are analyzed by using empirical orthogonal function (EOF) analysis to evaluate spatial patterns. The EOF analysis shows that including remote-sensing-derived LAI, RD and Kc in the distributed hydrological model adds
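
    A minimal sketch of the EOF analysis used for the spatial-pattern evaluation: the space-time anomaly matrix is decomposed with an SVD, giving spatial patterns (EOFs), principal-component time series and explained variance fractions. The random input array only stands in for gridded ET fields; it is not the Danish model output.

```python
import numpy as np

# field: time series of a spatial variable (e.g. monthly ET maps) flattened to
# shape (n_time, n_gridcells); random numbers stand in for the model output.
rng = np.random.default_rng(5)
n_time, n_cells = 120, 2000
field = rng.normal(size=(n_time, n_cells))

# EOF analysis = PCA of the space-time anomaly matrix via SVD.
anomalies = field - field.mean(axis=0)        # remove the temporal mean per grid cell
U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)

eofs = Vt                                      # spatial patterns (EOFs)
pcs = U * s                                    # principal component time series
explained = s**2 / np.sum(s**2)                # variance fraction per EOF

print("variance explained by the three leading EOFs:", np.round(explained[:3], 3))
```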

  2. Benchmarking novel approaches for modelling species range dynamics.

    Science.gov (United States)

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches

  3. On a model-based approach to radiation protection

    International Nuclear Information System (INIS)

    Waligorski, M.P.R.

    2002-01-01

    There is a preoccupation with linearity and absorbed dose as the basic quantifiers of radiation hazard. An alternative is the fluence approach, whereby radiation hazard may be evaluated, at least in principle, via an appropriate action cross section. In order to compare these approaches, it may be useful to discuss them as quantitative descriptors of survival and transformation-like endpoints in cell cultures in vitro - a system thought to be relevant to modelling radiation hazard. If absorbed dose is used to quantify these biological endpoints, then non-linear dose-effect relations have to be described, and, e.g. after doses of densely ionising radiation, dose-correction factors as high as 20 are required. In the fluence approach only exponential effect-fluence relationships can be readily described. Neither approach alone exhausts the scope of experimentally observed dependencies of effect on dose or fluence. Two-component models, incorporating a suitable mixture of the two approaches, are required. An example of such a model is the cellular track structure theory developed by Katz over thirty years ago. The practical consequences of modelling radiation hazard using this mixed two-component approach are discussed. (author)

  4. A generalized approach for historical mock-up acquisition and data modelling: Towards historically enriched 3D city models

    Science.gov (United States)

    Hervy, B.; Billen, R.; Laroche, F.; Carré, C.; Servières, M.; Van Ruymbeke, M.; Tourre, V.; Delfosse, V.; Kerouanton, J.-L.

    2012-10-01

    Museums are filled with hidden secrets. One of those secrets lies behind historical mock-ups, whose significance goes far beyond a simple representation of a city. We face the challenge of designing, storing and showing knowledge related to these mock-ups in order to explain their historical value. Over the last few years, several mock-up digitalisation projects have been realised. Two of them, Nantes 1900 and Virtual Leodium, propose innovative approaches that present a lot of similarities. This paper presents a framework to go one step further by analysing their data modelling processes and extracting what could be a generalized approach to build a numerical mock-up and the associated knowledge database. Geometry modelling and knowledge modelling influence each other and are conducted in parallel. Our generalized approach gives a global overview of what a data modelling process can be. Our next goal is obviously to apply this global approach to other historical mock-ups, but we also consider applying it to other 3D objects that need to embed semantic data, moving towards historically enriched 3D city models.

  5. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    Science.gov (United States)

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.

  6. Modeling gene expression measurement error: a quasi-likelihood approach

    Directory of Open Access Journals (Sweden)

    Strimmer Korbinian

    2003-03-01

    Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also

  7. Using Patient Health Questionnaire-9 item parameters of a common metric resulted in similar depression scores compared to independent item response theory model reestimation.

    Science.gov (United States)

    Liegl, Gregor; Wahl, Inka; Berghöfer, Anne; Nolte, Sandra; Pieh, Christoph; Rose, Matthias; Fischer, Felix

    2016-03-01

    To investigate the validity of a common depression metric in independent samples. We applied a common metrics approach based on item-response theory for measuring depression to four German-speaking samples that completed the Patient Health Questionnaire (PHQ-9). We compared the PHQ item parameters reported for this common metric to reestimated item parameters derived from fitting a generalized partial credit model solely to the PHQ-9 items. We calibrated the new model on the same scale as the common metric using two approaches (estimation with shifted prior and Stocking-Lord linking). By fitting a mixed-effects model and using Bland-Altman plots, we investigated the agreement between latent depression scores resulting from the different estimation models. We found different item parameters across samples and estimation methods. Although differences in latent depression scores between different estimation methods were statistically significant, these were clinically irrelevant. Our findings provide evidence that it is possible to estimate latent depression scores by using the item parameters from a common metric instead of reestimating and linking a model. The use of common metric parameters is simple, for example, using a Web application (http://www.common-metrics.org) and offers a long-term perspective to improve the comparability of patient-reported outcome measures. Copyright © 2016 Elsevier Inc. All rights reserved.
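
    The scoring step described above can be illustrated with a small sketch: expected a posteriori (EAP) estimation of the latent score under a generalized partial credit model whose item parameters are fixed in advance, as with a common metric. The discrimination and threshold values below are invented for illustration and are not the published PHQ-9 parameters.

        # EAP scoring under a generalized partial credit model with fixed item parameters
        import numpy as np

        disc = np.array([1.2, 0.9, 1.5])                    # a_i, hypothetical
        thresholds = [np.array([-1.0, 0.2, 1.1]),           # b_ik per item (3 steps = 4 categories)
                      np.array([-0.5, 0.5, 1.5]),
                      np.array([-1.2, 0.0, 0.8])]

        def gpcm_probs(theta, a, b):
            # cumulative sums of a*(theta - b_k); category 0 contributes 0
            steps = np.concatenate(([0.0], np.cumsum(a * (theta - b))))
            p = np.exp(steps - steps.max())
            return p / p.sum()

        def eap_score(responses, nodes=np.linspace(-4, 4, 81)):
            prior = np.exp(-0.5 * nodes ** 2)               # standard normal prior
            like = np.ones_like(nodes)
            for x, a, b in zip(responses, disc, thresholds):
                like *= np.array([gpcm_probs(t, a, b)[x] for t in nodes])
            post = prior * like
            return float(np.sum(nodes * post) / np.sum(post))

        print(round(eap_score([2, 1, 3]), 3))               # latent score for one respondent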

  8. A review of function modeling: Approaches and applications

    OpenAIRE

    Erden, M.S.; Komoto, H.; Van Beek, T.J.; D'Amelio, V.; Echavarria, E.; Tomiyama, T.

    2008-01-01

    This work is aimed at establishing a common frame and understanding of function modeling (FM) for our ongoing research activities. A comparative review of the literature is performed to grasp the various FM approaches with their commonalities and differences. The relations of FM with the research fields of artificial intelligence, design theory, and maintenance are discussed. In this discussion the goals are to highlight the features of various classical approaches in relation to FM, to delin...

  9. Top-down approach to unified supergravity models

    International Nuclear Information System (INIS)

    Hempfling, R.

    1994-03-01

    We introduce a new approach for studying unified supergravity models. In this approach all the parameters of the grand unified theory (GUT) are fixed by imposing the corresponding number of low energy observables. This determines the remaining particle spectrum whose dependence on the low energy observables can now be investigated. We also include some SUSY threshold corrections that have previously been neglected. In particular the SUSY threshold corrections to the fermion masses can have a significant impact on the Yukawa coupling unification. (orig.)

  10. Intelligent Transportation and Evacuation Planning A Modeling-Based Approach

    CERN Document Server

    Naser, Arab

    2012-01-01

    Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricane Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...

  11. An object-oriented approach to energy-economic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wise, M.A.; Fox, J.A.; Sands, R.D.

    1993-12-01

    In this paper, the authors discuss the experiences in creating an object-oriented economic model of the U.S. energy and agriculture markets. After a discussion of some central concepts, they provide an overview of the model, focusing on the methodology of designing an object-oriented class hierarchy specification based on standard microeconomic production functions. The evolution of the model from the class definition stage to programming it in C++, a standard object-oriented programming language, will be detailed. The authors then discuss the main differences between writing the object-oriented program versus a procedure-oriented program of the same model. Finally, they conclude with a discussion of the advantages and limitations of the object-oriented approach based on the experience in building energy-economic models with procedure-oriented approaches and languages.
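
    As an illustration of the kind of class hierarchy described (a sketch in Python rather than the authors' C++ code), sectors can share a common production-function interface so that new functional forms plug in without changing the market-level code.

        # Illustrative object-oriented hierarchy built around a production-function interface
        from dataclasses import dataclass

        class ProductionFunction:
            def output(self, inputs: dict[str, float]) -> float:
                raise NotImplementedError

        @dataclass
        class CobbDouglas(ProductionFunction):
            scale: float
            exponents: dict[str, float]            # e.g. {"capital": 0.3, "labor": 0.7}
            def output(self, inputs):
                y = self.scale
                for name, alpha in self.exponents.items():
                    y *= inputs[name] ** alpha
                return y

        @dataclass
        class Sector:
            name: str
            technology: ProductionFunction
            def supply(self, inputs):
                return self.technology.output(inputs)

        energy = Sector("energy", CobbDouglas(1.5, {"capital": 0.4, "labor": 0.6}))
        print(round(energy.supply({"capital": 100.0, "labor": 50.0}), 2))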

  12. Multi-model approach to characterize human handwriting motion.

    Science.gov (United States)

    Chihi, I; Abdelkrim, A; Benrejeb, M

    2016-02-01

    This paper deals with characterization and modelling of human handwriting motion from two forearm muscle activity signals, called electromyography signals (EMG). In this work, an experimental approach was used to record the coordinates of a pen tip moving on the (x, y) plane and EMG signals during the handwriting act. The main purpose is to design a new mathematical model which characterizes this biological process. Based on a multi-model approach, this system was originally developed to generate letters and geometric forms written by different writers. A Recursive Least Squares algorithm is used to estimate the parameters of each sub-model of the multi-model basis. Simulations show good agreement between predicted results and the recorded data.
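
    A minimal recursive least squares (RLS) sketch of the kind used to identify each sub-model's parameters; the regressor here is a random stand-in for the lagged EMG features, and the model order is arbitrary.

        # Recursive least squares parameter estimation (toy regressors stand in for EMG features)
        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.99):
            # theta: parameters, P: covariance, phi: regressor, y: measured pen coordinate
            phi = phi.reshape(-1, 1)
            k = P @ phi / (lam + phi.T @ P @ phi)              # gain vector
            theta = theta + (k * (y - phi.T @ theta)).ravel()
            P = (P - k @ phi.T @ P) / lam
            return theta, P

        rng = np.random.default_rng(2)
        true_theta = np.array([0.8, -0.3, 0.5])
        theta, P = np.zeros(3), np.eye(3) * 1000.0
        for _ in range(500):
            phi = rng.normal(size=3)                           # stand-in for lagged EMG features
            y = phi @ true_theta + 0.01 * rng.normal()
            theta, P = rls_update(theta, P, phi, y)
        print(np.round(theta, 3))                              # should approach true_theta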

  13. Wave Resource Characterization Using an Unstructured Grid Modeling Approach

    Directory of Open Access Journals (Sweden)

    Wei-Cheng Wu

    2018-03-01

    Full Text Available This paper presents a modeling study conducted on the central Oregon coast for wave resource characterization, using the unstructured grid Simulating WAve Nearshore (SWAN) model coupled with a nested grid WAVEWATCH III® (WWIII) model. The flexibility of models with various spatial resolutions and the effects of open boundary conditions simulated by a nested grid WWIII model with different physics packages were evaluated. The model results demonstrate the advantage of the unstructured grid-modeling approach for flexible model resolution and good model skills in simulating the six wave resource parameters recommended by the International Electrotechnical Commission in comparison to the observed data in Year 2009 at National Data Buoy Center Buoy 46050. Notably, spectral analysis indicates that the ST4 physics package improves upon the ST2 physics package’s ability to predict wave power density for large waves, which is important for wave resource assessment, load calculation of devices, and risk management. In addition, bivariate distributions show that the simulated sea state of maximum occurrence with the ST4 physics package matched the observed data better than with the ST2 physics package. This study demonstrated that the unstructured grid wave modeling approach, driven by regional nested grid WWIII outputs along with the ST4 physics package, can efficiently provide accurate wave hindcasts to support wave resource characterization. Our study also suggests that wind effects need to be considered if the dimension of the model domain is greater than approximately 100 km, or O(10²) km.
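
    For reference, one of the IEC-recommended resource parameters, the omnidirectional wave power density, can be computed from the significant wave height Hm0 and the energy period Te with the standard deep-water estimate J = ρg²Hm0²Te/(64π). The snippet below is a generic sketch, not code from the study.

        # Deep-water wave power density in kW per metre of wave crest
        import math

        def wave_power_density(hm0_m: float, te_s: float,
                               rho: float = 1025.0, g: float = 9.81) -> float:
            """Return J = rho * g^2 * Hm0^2 * Te / (64 * pi), converted to kW/m."""
            j_w_per_m = rho * g ** 2 * hm0_m ** 2 * te_s / (64.0 * math.pi)
            return j_w_per_m / 1000.0

        print(round(wave_power_density(2.5, 10.0), 1))   # roughly 31 kW/m for Hm0 = 2.5 m, Te = 10 s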

  14. Both food restriction and high-fat diet during gestation induce low birth weight and altered physical activity in adult rat offspring: the "Similarities in the Inequalities" model.

    Directory of Open Access Journals (Sweden)

    Fábio da Silva Cunha

    Full Text Available We have previously described a theoretical model in humans, called "Similarities in the Inequalities", in which extremely unequal social backgrounds coexist in a complex scenario promoting similar health outcomes in adulthood. Based on the potential applicability of the "similarities in the inequalities" phenomenon, and to explore it further, this study used a rat model to investigate the effect of different nutritional backgrounds during gestation on the willingness of offspring to engage in physical activity in adulthood. Sprague-Dawley rats were time mated and randomly allocated to one of three dietary groups: Control (Adlib), receiving standard laboratory chow ad libitum; 50% food restricted (FR), receiving 50% of the ad libitum-fed dam's habitual intake; or high-fat diet (HF), receiving a diet containing 23% fat. The diets were provided from day 10 of pregnancy until weaning. Within 24 hours of birth, pups were cross-fostered to other dams, forming the following groups: Adlib_Adlib, FR_Adlib, and HF_Adlib. Maternal chow consumption and weight gain, and offspring birth weight, growth, physical activity (one week of free exercise in running wheels), abdominal adiposity and biochemical data were evaluated. Western blot was performed to assess D2 receptors in the dorsal striatum. The "similarities in the inequalities" effect was observed on birth weight (both FR and HF groups were smaller than the Adlib group at birth) and physical activity (both FR_Adlib and HF_Adlib groups were different from the Adlib_Adlib group, with less active males and more active females). Our findings contribute to the view that health inequalities in fetal life may program the health outcomes manifested in offspring adult life (such as altered physical activity and metabolic parameters), probably through different biological mechanisms.

  15. A comprehensive dynamic modeling approach for giant magnetostrictive material actuators

    International Nuclear Information System (INIS)

    Gu, Guo-Ying; Zhu, Li-Min; Li, Zhi; Su, Chun-Yi

    2013-01-01

    In this paper, a comprehensive modeling approach for a giant magnetostrictive material actuator (GMMA) is proposed based on the description of nonlinear electromagnetic behavior, the magnetostrictive effect and frequency response of the mechanical dynamics. It maps the relationships between current and magnetic flux at the electromagnetic part to force and displacement at the mechanical part in a lumped parameter form. Towards this modeling approach, the nonlinear hysteresis effect of the GMMA appearing only in the electrical part is separated from the linear dynamic plant in the mechanical part. Thus, a two-module dynamic model is developed to completely characterize the hysteresis nonlinearity and the dynamic behaviors of the GMMA. The first module is a static hysteresis model to describe the hysteresis nonlinearity, and the cascaded second module is a linear dynamic plant to represent the dynamic behavior. To validate the proposed dynamic model, an experimental platform is established. Then, the linear dynamic part and the nonlinear hysteresis part of the proposed model are identified in sequence. For the linear part, an approach based on axiomatic design theory is adopted. For the nonlinear part, a Prandtl–Ishlinskii model is introduced to describe the hysteresis nonlinearity and a constrained quadratic optimization method is utilized to identify its coefficients. Finally, experimental tests are conducted to demonstrate the effectiveness of the proposed dynamic model and the corresponding identification method. (paper)
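
    The hysteresis module can be illustrated with a small sketch of a Prandtl–Ishlinskii operator, i.e. a weighted superposition of play (backlash) operators; the thresholds, weights and input signal below are illustrative placeholders rather than identified GMMA coefficients.

        # Prandtl–Ishlinskii hysteresis as a weighted sum of play operators
        import numpy as np

        def play_operator(u, r, w0=0.0):
            """Play (backlash) operator with threshold r applied to an input sequence u."""
            w, out = w0, []
            for ui in u:
                w = max(ui - r, min(ui + r, w))
                out.append(w)
            return np.array(out)

        def prandtl_ishlinskii(u, thresholds, weights):
            return sum(wi * play_operator(u, ri) for ri, wi in zip(thresholds, weights))

        t = np.linspace(0, 4 * np.pi, 400)
        u = np.sin(t) * np.linspace(0.2, 1.0, t.size)          # growing oscillation as a toy input
        y = prandtl_ishlinskii(u, thresholds=[0.0, 0.1, 0.2, 0.3], weights=[0.5, 0.3, 0.15, 0.05])
        print(y[:5].round(3))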

  16. A website evaluation model by integration of previous evaluation models using a quantitative approach

    Directory of Open Access Journals (Sweden)

    Ali Moeini

    2015-01-01

    Full Text Available Given the growth of e-commerce, websites play an essential role in business success. Therefore, many authors have offered website evaluation models since 1995. However, the multiplicity and diversity of these evaluation models make it difficult to integrate them into a single comprehensive model. In this paper a quantitative method is used to integrate previous models into a comprehensive model that is compatible with them. In this approach the researcher's judgment plays no role in the integration of models, and the new model takes its validity from 93 previous models and a systematic quantitative approach.

  17. Smeared crack modelling approach for corrosion-induced concrete damage

    DEFF Research Database (Denmark)

    Thybo, Anna Emilie Anusha; Michel, Alexander; Stang, Henrik

    2017-01-01

    In this paper a smeared crack modelling approach is used to simulate corrosion-induced damage in reinforced concrete. The presented modelling approach utilizes a thermal analogy to mimic the expansive nature of solid corrosion products, while taking into account the penetration of corrosion...... products into the surrounding concrete, non-uniform precipitation of corrosion products, and creep. To demonstrate the applicability of the presented modelling approach, numerical predictions in terms of corrosion-induced deformations as well as formation and propagation of micro- and macrocracks were......-induced damage phenomena in reinforced concrete. Moreover, good agreements were also found between experimental and numerical data for corrosion-induced deformations along the circumference of the reinforcement....

  18. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    . Their developments, however, are largely due to experiment based trial and error approaches and while they do not require validation, they can be time consuming and resource intensive. Also, one may ask, can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply...... a model-based synthesis method to systematically generate and evaluate alternatives in the first stage and an experiment-model based validation in the second stage. In this way, the search for alternatives is done very quickly, reliably and systematically over a wide range, while resources are preserved...... for focused validation of only the promising candidates in the second-stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based...

  19. An algebraic approach to modeling in software engineering

    International Nuclear Information System (INIS)

    Loegel, C.J.; Ravishankar, C.V.

    1993-09-01

    Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe ''computer science'' objects like abstract data types, but in practice software errors are often caused because ''real-world'' objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form

  20. Towards a 3d Spatial Urban Energy Modelling Approach

    Science.gov (United States)

    Bahu, J.-M.; Koch, A.; Kremers, E.; Murshed, S. M.

    2013-09-01

    Today's needs to reduce the environmental impact of energy use impose dramatic changes for energy infrastructure and existing demand patterns (e.g. buildings) corresponding to their specific context. In addition, future energy systems are expected to integrate a considerable share of fluctuating power sources and equally a high share of distributed generation of electricity. Energy system models capable of describing such future systems and allowing the simulation of the impact of these developments thus require a spatial representation in order to reflect the local context and the boundary conditions. This paper describes two recent research approaches developed at EIFER in the fields of (a) geo-localised simulation of heat energy demand in cities based on 3D morphological data and (b) spatially explicit Agent-Based Models (ABM) for the simulation of smart grids. 3D city models were used to assess solar potential and heat energy demand of residential buildings which enable cities to target the building refurbishment potentials. Distributed energy systems require innovative modelling techniques where individual components are represented and can interact. With this approach, several smart grid demonstrators were simulated, where heterogeneous models are spatially represented. Coupling 3D geodata with energy system ABMs holds different advantages for both approaches. On one hand, energy system models can be enhanced with high resolution data from 3D city models and their semantic relations. Furthermore, they allow for spatial analysis and visualisation of the results, with emphasis on spatial and structural correlations among the different layers (e.g. infrastructure, buildings, administrative zones) to provide an integrated approach. On the other hand, 3D models can benefit from more detailed system description of energy infrastructure, representing dynamic phenomena and high resolution models for energy use at component level. The proposed modelling strategies

  1. A novel approach for runoff modelling in ungauged catchments by Catchment Morphing

    Science.gov (United States)

    Zhang, J.; Han, D.

    2017-12-01

    Runoff prediction in ungauged catchments has been one of the major challenges in the past decades. However, due to the tremendous heterogeneity of hydrological catchments, obstacles exist in deducing model parameters for ungauged catchments from gauged ones. We propose a novel approach to predict ungauged runoff with Catchment Morphing (CM) using a fully distributed model. CM is defined as changing the catchment characteristics (area and slope here) from the baseline model built with a gauged catchment to model the ungauged ones. The advantages of CM are: (a) less demand on the similarity between the baseline catchment and the ungauged catchment, (b) less demand on available data, and (c) potential applicability in varied catchments. A case study on seven catchments in the UK has been used to demonstrate the proposed scheme. To comprehensively examine the CM approach, distributed rainfall inputs are utilised in the model, and fractal landscapes are used to morph the land surface from the baseline model to the target model. The preliminary results demonstrate the feasibility of the approach, which is promising in runoff simulation for ungauged catchments. Clearly, more work beyond this pilot study is needed to explore and develop this new approach further to maturity by the hydrological community.

  2. Contextual Factors for Finding Similar Experts

    DEFF Research Database (Denmark)

    Hofmann, Katja; Balog, Krisztian; Bogers, Toine

    2010-01-01

    -seeking models, are rarely taken into account. In this article, we extend content-based expert-finding approaches with contextual factors that have been found to influence human expert finding. We focus on a task of science communicators in a knowledge-intensive environment, the task of finding similar experts......, given an example expert. Our approach combines expertise-seeking and retrieval research. First, we conduct a user study to identify contextual factors that may play a role in the studied task and environment. Then, we design expert retrieval models to capture these factors. We combine these with content......-based retrieval models and evaluate them in a retrieval experiment. Our main finding is that while content-based features are the most important, human participants also take contextual factors into account, such as media experience and organizational structure. We develop two principled ways of modeling...

  3. Modelling of ductile and cleavage fracture by local approach

    International Nuclear Information System (INIS)

    Samal, M.K.; Dutta, B.K.; Kushwaha, H.S.

    2000-08-01

    This report describes the modelling of ductile and cleavage fracture processes by local approach. It is now well known that the conventional fracture mechanics method based on single parameter criteria is not adequate to model the fracture processes. It is because of the existence of effect of size and geometry of flaw, loading type and rate on the fracture resistance behaviour of any structure. Hence, it is questionable to use same fracture resistance curves as determined from standard tests in the analysis of real life components because of existence of all the above effects. So, there is need to have a method in which the parameters used for the analysis will be true material properties, i.e. independent of geometry and size. One of the solutions to the above problem is the use of local approaches. These approaches have been extensively studied and applied to different materials (including SA33 Gr.6) in this report. Each method has been studied and reported in a separate section. This report has been divided into five sections. Section-I gives a brief review of the fundamentals of fracture process. Section-II deals with modelling of ductile fracture by locally uncoupled type of models. In this section, the critical cavity growth parameters of the different models have been determined for the primary heat transport (PHT) piping material of Indian pressurised heavy water reactor (PHWR). A comparative study has been done among different models. The dependency of the critical parameters on stress triaxiality factor has also been studied. It is observed that Rice and Tracey's model is the most suitable one. But, its parameters are not fully independent of triaxiality factor. For this purpose, a modification to Rice and Tracey's model is suggested in Section-III. Section-IV deals with modelling of ductile fracture process by locally coupled type of models. Section-V deals with the modelling of cleavage fracture process by Beremin's model, which is based on Weibull's
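
    For the locally uncoupled ductile models discussed in Section-II, the Rice and Tracey cavity growth law integrates d(ln R) = 0.283·exp(1.5·σm/σeq)·dεp along the loading history and predicts failure when R/R0 reaches a critical value identified from tests. The sketch below only illustrates that bookkeeping, with an assumed strain history, triaxiality and critical ratio.

        # Rice–Tracey void growth integration along a loading history
        import numpy as np

        def cavity_growth_ratio(triaxiality, eps_p):
            """Integrate void growth; triaxiality and eps_p are arrays of equal length."""
            deps = np.diff(np.asarray(eps_p))
            dln_r = 0.283 * np.exp(1.5 * np.asarray(triaxiality)[1:]) * deps
            return float(np.exp(dln_r.sum()))            # R / R0

        eps_p = np.linspace(0.0, 0.6, 61)                # equivalent plastic strain history
        tri = np.full_like(eps_p, 1.2)                   # stress triaxiality sigma_m / sigma_eq
        r_ratio = cavity_growth_ratio(tri, eps_p)
        R_CRIT = 1.5                                     # assumed critical growth ratio
        print(round(r_ratio, 3), "fails" if r_ratio >= R_CRIT else "intact")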

  4. Atomistic approach for modeling metal-semiconductor interfaces

    DEFF Research Database (Denmark)

    Stradi, Daniele; Martinez, Umberto; Blom, Anders

    2016-01-01

    realistic metal-semiconductor interfaces and allows for a direct comparison between theory and experiments via the I–V curve. In particular, it will be demonstrated how doping — and bias — modifies the Schottky barrier, and how finite size models (the slab approach) are unable to describe these interfaces......We present a general framework for simulating interfaces using an atomistic approach based on density functional theory and non-equilibrium Green's functions. The method includes all the relevant ingredients, such as doping and an accurate value of the semiconductor band gap, required to model...

  5. Risk evaluation of uranium mining: A geochemical inverse modelling approach

    Science.gov (United States)

    Rillard, J.; Zuddas, P.; Scislewski, A.

    2011-12-01

    reactive mineral surface area. The formation of coatings on dissolving mineral surfaces significantly reduces the amount of surface available to react with fluids. Our results show that negatively charged ion complexes, responsible for U transport, decreases when alkalinity and rock buffer capacity is similarly lower. Carbonate ion pairs however, may increase U mobility when radionuclide concentration is high and rock buffer capacity is low. The present work helps to orient future monitoring of this site in Brazil as well as of other sites where uranium is linked to igneous rock formations, without the presence of sulphides. Monitoring SO4 migration (in acidic leaching uranium sites) seems to be an efficient and simple way to track different hazards, especially in tropical conditions, where the succession of dry and wet periods increases the weathering action of the residual H2SO4. Nevertheless, models of risk evaluation should take into account reactive surface areas and neogenic minerals since they determine the U ion complex formation, which in turn, controls uranium mobility in natural systems. Keywords: uranium mining, reactive mineral surface area, uranium complexes, inverse modelling approach, risk evaluation

  6. Systems and context modeling approach to requirements analysis

    Science.gov (United States)

    Ahuja, Amrit; Muralikrishna, G.; Patwari, Puneet; Subhrojyoti, C.; Swaminathan, N.; Vin, Harrick

    2014-08-01

    Ensuring completeness and correctness of the requirements for a complex system such as the SKA is challenging. Current system engineering practice includes developing a stakeholder needs definition, a concept of operations, and defining system requirements in terms of use cases and requirements statements. We present a method that enhances this current practice into a collection of system models with mutual consistency relationships. These include stakeholder goals, needs definition and system-of-interest models, together with a context model that participates in the consistency relationships among these models. We illustrate this approach by using it to analyze the SKA system requirements.

  7. An approach to multiscale modelling with graph grammars.

    Science.gov (United States)

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-09-01

    Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.

  8. Modelling individual differences in the form of Pavlovian conditioned approach responses: a dual learning systems approach with factored representations.

    Directory of Open Access Journals (Sweden)

    Florian Lesaint

    2014-02-01

    Full Text Available Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and associated dopamine activity, have questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself - a lever - more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in
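
    A toy sketch in the spirit of this account (not the authors' implementation): action values mix a Model-Based term with a Model-Free term attached to stimulus features, and the mixing weight omega stands in for the individual difference; all numerical values are invented for illustration.

        # Mixing Model-Based values with factored (feature-based) Model-Free values
        import numpy as np

        BETA = 3.0                                                   # softmax inverse temperature
        q_mb = {"approach_lever": 0.4, "approach_magazine": 0.6}     # assumed Model-Based values
        v_mf_feature = {"lever": 0.9, "magazine": 0.3}               # assumed factored Model-Free values
        feature_of = {"approach_lever": "lever", "approach_magazine": "magazine"}

        def choice_probabilities(omega):
            actions = list(q_mb)
            q = np.array([omega * v_mf_feature[feature_of[a]] + (1 - omega) * q_mb[a]
                          for a in actions])
            p = np.exp(BETA * q)
            return dict(zip(actions, np.round(p / p.sum(), 3)))

        print("omega=0.9 (sign-tracker-like):", choice_probabilities(0.9))
        print("omega=0.1 (goal-tracker-like):", choice_probabilities(0.1))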

  9. Modelling Individual Differences in the Form of Pavlovian Conditioned Approach Responses: A Dual Learning Systems Approach with Factored Representations

    Science.gov (United States)

    Lesaint, Florian; Sigaud, Olivier; Flagel, Shelly B.; Robinson, Terry E.; Khamassi, Mehdi

    2014-01-01

    Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and associated dopamine activity, have questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself – a lever – more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in

  10. Modelling individual differences in the form of Pavlovian conditioned approach responses: a dual learning systems approach with factored representations.

    Science.gov (United States)

    Lesaint, Florian; Sigaud, Olivier; Flagel, Shelly B; Robinson, Terry E; Khamassi, Mehdi

    2014-02-01

    Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and associated dopamine activity, have questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself - a lever - more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in computational

  11. Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling

    OpenAIRE

    Duong, Chi Nhan; Luu, Khoa; Quach, Kha Gia; Bui, Tien D.

    2016-01-01

    The "interpretation through synthesis" approach to analyze face images, particularly Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAM models have ability to represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the synthesized faces of AAM are highly depended on the training sets and inherently on the genera...

  12. Technical note: Comparison of methane ebullition modelling approaches used in terrestrial wetland models

    Science.gov (United States)

    Peltola, Olli; Raivonen, Maarit; Li, Xuefei; Vesala, Timo

    2018-02-01

    Emission via bubbling, i.e. ebullition, is one of the main methane (CH4) emission pathways from wetlands to the atmosphere. Direct measurement of gas bubble formation, growth and release in the peat-water matrix is challenging and in consequence these processes are relatively unknown and are coarsely represented in current wetland CH4 emission models. In this study we aimed to evaluate three ebullition modelling approaches and their effect on model performance. This was achieved by implementing the three approaches in one process-based CH4 emission model. All the approaches were based on some kind of threshold: either on CH4 pore water concentration (ECT), pressure (EPT) or free-phase gas volume (EBG) threshold. The model was run using 4 years of data from a boreal sedge fen and the results were compared with eddy covariance measurements of CH4 fluxes.Modelled annual CH4 emissions were largely unaffected by the different ebullition modelling approaches; however, temporal variability in CH4 emissions varied an order of magnitude between the approaches. Hence the ebullition modelling approach drives the temporal variability in modelled CH4 emissions and therefore significantly impacts, for instance, high-frequency (daily scale) model comparison and calibration against measurements. The modelling approach based on the most recent knowledge of the ebullition process (volume threshold, EBG) agreed the best with the measured fluxes (R2 = 0.63) and hence produced the most reasonable results, although there was a scale mismatch between the measurements (ecosystem scale with heterogeneous ebullition locations) and model results (single horizontally homogeneous peat column). The approach should be favoured over the two other more widely used ebullition modelling approaches and researchers are encouraged to implement it into their CH4 emission models.
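
    The three rules can be caricatured in a few lines: bubbles are released once the pore-water CH4 concentration (ECT), pressure (EPT) or free-phase gas volume (EBG) exceeds a threshold. The threshold values and units below are placeholders, not the calibrated values used in the study.

        # Threshold-based ebullition release for a single peat layer (illustrative thresholds)
        def ebullition_release(ch4_conc, pressure, gas_volume_frac,
                               approach="EBG",
                               conc_threshold=500e-6,     # mol L-1, assumed
                               press_threshold=1.15e5,    # Pa, assumed
                               volume_threshold=0.10):    # fraction of pore space, assumed
            """Return the excess over the chosen threshold (same units as that quantity)."""
            if approach == "ECT":
                return max(ch4_conc - conc_threshold, 0.0)
            if approach == "EPT":
                return max(pressure - press_threshold, 0.0)
            if approach == "EBG":
                return max(gas_volume_frac - volume_threshold, 0.0)
            raise ValueError("approach must be ECT, EPT or EBG")

        print(round(ebullition_release(ch4_conc=650e-6, pressure=1.1e5, gas_volume_frac=0.14), 3))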

  13. Environmental Radiation Effects on Mammals A Dynamical Modeling Approach

    CERN Document Server

    Smirnova, Olga A

    2010-01-01

    This text is devoted to the theoretical studies of radiation effects on mammals. It uses the framework of developed deterministic mathematical models to investigate the effects of both acute and chronic irradiation in a wide range of doses and dose rates on vital body systems including hematopoiesis, small intestine and humoral immunity, as well as on the development of autoimmune diseases. Thus, these models can contribute to the development of the system and quantitative approaches in radiation biology and ecology. This text is also of practical use. Its modeling studies of the dynamics of granulocytopoiesis and thrombocytopoiesis in humans testify to the efficiency of employment of the developed models in the investigation and prediction of radiation effects on these hematopoietic lines. These models, as well as the properly identified models of other vital body systems, could provide a better understanding of the radiation risks to health. The modeling predictions will enable the implementation of more ef...

  14. A new approach to Naturalness in SUSY models

    CERN Document Server

    Ghilencea, D M

    2013-01-01

    We review recent results that provide a new approach to the old problem of naturalness in supersymmetric models, without relying on subjective definitions for the fine-tuning associated with fixing the EW scale (to its measured value) in the presence of quantum corrections. The approach can address in a model-independent way many questions related to this problem. The results show that naturalness and its measure (fine-tuning) are an intrinsic part of the likelihood to fit the data that includes the EW scale. One important consequence is that the additional constraint of fixing the EW scale, usually not imposed in the data fits of the models, impacts on their overall likelihood to fit the data (or χ²/ndf, ndf: number of degrees of freedom). This has negative implications for the viability of currently popular supersymmetric extensions of the Standard Model.

  15. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
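
    As a worked example of the quantities the book builds on, Akaike's Information Criterion is AIC = -2 ln L + 2k, with the small-sample correction AICc = AIC + 2k(k+1)/(n-k-1); the two candidate fits below are made up purely to show the model-ranking mechanics.

        # AIC/AICc comparison of two hypothetical candidate models fitted to the same data
        def aic(log_likelihood: float, k: int) -> float:
            return -2.0 * log_likelihood + 2.0 * k

        def aicc(log_likelihood: float, k: int, n: int) -> float:
            return aic(log_likelihood, k) + 2.0 * k * (k + 1) / (n - k - 1)

        candidates = {"model_A": (-61.2, 3), "model_B": (-59.8, 5)}    # (ln L, k), made-up fits
        scores = {name: round(aicc(ll, k, n=40), 2) for name, (ll, k) in candidates.items()}
        best = min(scores, key=scores.get)
        delta = {name: round(s - scores[best], 2) for name, s in scores.items()}
        print(scores, delta, "->", best)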

  16. Merits of a Scenario Approach in Dredge Plume Modelling

    DEFF Research Database (Denmark)

    Pedersen, Claus; Chu, Amy Ling Chu; Hjelmager Jensen, Jacob

    2011-01-01

    Dredge plume modelling is a key tool for quantification of potential impacts to inform the EIA process. There are, however, significant uncertainties associated with the modelling at the EIA stage when both dredging methodology and schedule are likely to be a guess at best as the dredging...... contractor would rarely have been appointed. Simulation of a few variations of an assumed full dredge period programme will generally not provide a good representation of the overall environmental risks associated with the programme. An alternative dredge plume modelling strategy that attempts to encapsulate...... uncertainties associated with preliminary dredging programmes by using a scenario-based modelling approach is presented. The approach establishes a set of representative and conservative scenarios for key factors controlling the spill and plume dispersion and simulates all combinations of e.g. dredge, climatic...
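
    The combinatorial core of such a scenario approach is simple to sketch: enumerate the levels of the key factors controlling spill and plume dispersion and run one simulation per combination. The factor levels and the run_plume_model stand-in below are assumptions for illustration, not the study's actual scenario set.

        # Enumerate conservative scenario combinations and run one plume simulation per combination
        from itertools import product

        dredge_methods = ["TSHD", "backhoe"]            # assumed equipment options
        spill_rates = ["low", "high"]                   # assumed spill-rate envelopes
        seasons = ["NE_monsoon", "SW_monsoon"]
        tides = ["spring", "neap"]

        def run_plume_model(scenario):                  # placeholder for the actual model run
            return {"scenario": scenario, "exceedance_area_km2": None}

        results = [run_plume_model(s) for s in product(dredge_methods, spill_rates, seasons, tides)]
        print(len(results), "scenario runs")            # 2 * 2 * 2 * 2 = 16 combinations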

  17. Fast business process similarity search

    NARCIS (Netherlands)

    Yan, Z.; Dijkman, R.M.; Grefen, P.W.P.J.

    2012-01-01

    Nowadays, it is common for organizations to maintain collections of hundreds or even thousands of business processes. Techniques exist to search through such a collection, for business process models that are similar to a given query model. However, those techniques compare the query model to each

  18. Regularization of quantum gravity in the matrix model approach

    International Nuclear Information System (INIS)

    Ueda, Haruhiko

    1991-02-01

    We study the divergence problem of the partition function in the matrix model approach to two-dimensional quantum gravity. We propose a new model V(φ) = (1/2)Trφ² + (g₄/N)Trφ⁴ + (g′/N⁴)[Tr(φ⁴)]² and show that in the sphere case it has no divergence problem and the critical exponent is that of pure gravity. (author)

  19. PASSENGER TRAFFIC MOVEMENT MODELLING BY THE CELLULAR-AUTOMAT APPROACH

    Directory of Open Access Journals (Sweden)

    T. Mikhaylovskaya

    2009-01-01

    Full Text Available The mathematical model of passenger traffic movement developed on the basis of the cellular-automaton approach is considered. The program realization of the cellular-automaton model of pedestrian stream movement in pedestrian subways in the presence of obstacles and at subway structure narrowing is presented. The optimum distances between the obstacles and the angle of subway structure narrowing that provide safe pedestrian stream movement and avoid the occurrence of traffic congestion are determined.

  20. New Similarity Functions

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Kwasnicka, Halina

    2016-01-01

    spaces, in addition to their similarity in the vector space. Prioritized Weighted Feature Distance (PWFD) works similarly to WFD, but provides the ability to give priorities to desirable features. The accuracy of the proposed functions is compared with other similarity functions on several data sets.... Our results show that the proposed functions work better than other methods proposed in the literature....

  1. The Generalised Ecosystem Modelling Approach in Radiological Assessment

    International Nuclear Information System (INIS)

    Klos, Richard

    2008-03-01

    An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI through its modelling team CLIMB commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment
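
    A generic sketch of a donor-controlled compartment model of the kind GEMA is built from, with compartments linked by transfer rates and a constant release term; the three compartments and all rate values are purely illustrative, not GEMA parameters.

        # Linear compartment model dC/dt = K C + S for radionuclide inventories in the landscape
        import numpy as np

        labels = ["soil", "stream", "lake"]
        K = np.array([[-0.02,  0.00,  0.00],    # losses from soil (all routed to stream)
                      [ 0.02, -0.50,  0.00],    # soil -> stream, losses from stream
                      [ 0.00,  0.50, -0.01]])   # stream -> lake, losses from lake
        S = np.array([1.0, 0.0, 0.0])           # constant release to soil (Bq per year, assumed)

        def simulate(years=200, dt=0.01):
            C = np.zeros(3)
            for _ in range(int(years / dt)):
                C = C + dt * (K @ C + S)        # explicit Euler step
            return C

        for name, c in zip(labels, simulate()):
            print(f"{name:6s} {c:10.2f}")       # inventories after 200 simulated years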

  2. The Generalised Ecosystem Modelling Approach in Radiological Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Klos, Richard

    2008-03-15

    An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI through its modelling team CLIMB commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment

  3. Assessing the polycyclic aromatic hydrocarbon (PAH) pollution of urban stormwater runoff: a dynamic modeling approach.

    Science.gov (United States)

    Zheng, Yi; Lin, Zhongrong; Li, Hao; Ge, Yan; Zhang, Wei; Ye, Youbin; Wang, Xuejun

    2014-05-15

    Urban stormwater runoff delivers a significant amount of polycyclic aromatic hydrocarbons (PAHs), mostly of atmospheric origin, to receiving water bodies. The PAH pollution of urban stormwater runoff poses serious risk to aquatic life and human health, but has been overlooked by environmental modeling and management. This study proposed a dynamic modeling approach for assessing the PAH pollution and its associated environmental risk. A variable time-step model was developed to simulate the continuous cycles of pollutant buildup and washoff. To reflect the complex interaction among different environmental media (i.e. atmosphere, dust and stormwater), the dependence of the pollution level on antecedent weather conditions was investigated and embodied in the model. Long-term simulations of the model can be efficiently performed, and probabilistic features of the pollution level and its risk can be easily determined. The applicability of this approach and its value to environmental management was demonstrated by a case study in Beijing, China. The results showed that Beijing's PAH pollution of road runoff is relatively severe, and its associated risk exhibits notable seasonal variation. The current sweeping practice is effective in mitigating the pollution, but the effectiveness is both weather-dependent and compound-dependent. The proposed modeling approach can help identify critical timing and major pollutants for monitoring, assessing and controlling efforts to be focused on. The approach is extendable to other urban areas, as well as to other contaminants with similar fate and transport as PAHs. Copyright © 2014 Elsevier B.V. All rights reserved.
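
    The buildup/washoff cycle at the heart of such models can be sketched as an asymptotic buildup over the antecedent dry period followed by first-order washoff with rainfall; the coefficients below are illustrative, not the calibrated Beijing values.

        # Pollutant buildup during dry weather and washoff during a storm event
        import math

        def buildup(days_dry, b_max=2.0, k_b=0.4):
            """Asymptotic exponential buildup (mass per unit area) after a dry period."""
            return b_max * (1.0 - math.exp(-k_b * days_dry))

        def washoff(initial_mass, rain_mm, k_w=0.18):
            """Mass removed by a storm of given rainfall depth (first-order washoff)."""
            return initial_mass * (1.0 - math.exp(-k_w * rain_mm))

        mass = buildup(days_dry=5)                 # antecedent dry period drives the load
        removed = washoff(mass, rain_mm=12.0)
        print(round(mass, 3), round(removed, 3))   # load available vs. load delivered to runoff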

  4. Reduced modeling of signal transduction – a modular approach

    Directory of Open Access Journals (Sweden)

    Ederer Michael

    2007-09-01

    Full Text Available Abstract Background Combinatorial complexity is a challenging problem in detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and a lot of progress has been made within the last few years. A software tool (BioNetGen) was developed which allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations. Results We introduce a new reduction technique, which allows building modularized and highly reduced models. Compared to existing approaches further reduction of signal transduction networks is possible. The method also provides a new modularization criterion, which allows to dissect the model into smaller modules that are called layers and can be modeled independently. Hallmarks of the approach are conservation relations within each layer and connection of layers by signal flows instead of mass flows. The reduced model can be formulated directly without previous generation of detailed model equations. It can be understood and interpreted intuitively, as model variables are macroscopic quantities that are converted by rates following simple kinetics. The proposed technique is applicable without using complex mathematical tools and even without detailed knowledge of the mathematical background. However, we provide a detailed mathematical analysis to show performance and limitations of the method. For physiologically relevant parameter domains the transient as well as the stationary errors caused by the reduction are negligible. Conclusion The new layer based reduced modeling method allows building modularized and strongly reduced models of signal transduction networks. Reduced model equations can be directly formulated and are intuitively interpretable. Additionally, the method provides very good

  5. Driving clinical study efficiency by using a productivity breakdown model: comparative evaluation of a global clinical study and a similar Japanese study.

    Science.gov (United States)

    Takahashi, K; Sengoku, S; Kimura, H

    2011-02-01

    A fundamental management imperative of pharmaceutical companies is to contain surging costs of developing and launching drugs globally. Clinical studies are a research and development (R&D) cost driver. The objective of this study was to develop a productivity breakdown model, or a key performance indicator (KPI) tree, for an entire clinical study and to use it to compare a global clinical study with a similar Japanese study. We, thereby, hope to identify means of improving study productivity. We developed the new clinical study productivity breakdown model, covering operational aspects and cost factors. Elements for improving clinical study productivity were assessed from a management viewpoint by comparing empirical tracking data from a global clinical study with a Japanese study with similar protocols. The following unique and material differences, beyond simple international difference in cost of living, that could affect the efficiency of future clinical trials were identified: (i) more frequent site visits in the Japanese study, (ii) head counts at the Japanese study sites more than double those of the global study and (iii) a shorter enrollment time window of about a third that of the global study at the Japanese study sites. We identified major differences in the performance of the two studies. These findings demonstrate the potential of the KPI tree for improving clinical study productivity. Trade-offs, such as those between reduction in head count at study sites and expansion of the enrollment time window, must be considered carefully. © 2010 Blackwell Publishing Ltd.

  6. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  7. Modeling and Forecasting Mortality With Economic Growth: A Multipopulation Approach.

    Science.gov (United States)

    Boonen, Tim J; Li, Hong

    2017-10-01

    Research on mortality modeling of multiple populations focuses mainly on extrapolating past mortality trends and summarizing these trends by one or more common latent factors. This article proposes a multipopulation stochastic mortality model that uses the explanatory power of economic growth. In particular, we extend the Li and Lee model (Li and Lee 2005) by including economic growth, represented by the real gross domestic product (GDP) per capita, to capture the common mortality trend for a group of populations with similar socioeconomic conditions. We find that our proposed model provides a better in-sample fit and an out-of-sample forecast performance. Moreover, it generates lower (higher) forecasted period life expectancy for countries with high (low) GDP per capita than the Li and Lee model.
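
    A rough sketch of the estimation idea (not the authors' code): extract a Li-Lee-style common period index K_t from pooled, age-centred log mortality by singular value decomposition and relate it to log real GDP per capita by linear regression. The data here are simulated stand-ins, and the sign of the SVD factor is arbitrary.

        # Common period index from pooled log mortality, regressed on log GDP per capita
        import numpy as np

        rng = np.random.default_rng(4)
        ages, years = 20, 40
        log_gdp = np.linspace(9.5, 10.6, years) + 0.02 * rng.normal(size=years)
        true_K = 5.0 - 1.8 * (log_gdp - log_gdp.mean())      # mortality index falls as GDP grows
        age_pattern = -4.0 + 0.08 * np.arange(ages)
        log_m = age_pattern[:, None] + 0.05 * true_K[None, :] + 0.01 * rng.normal(size=(ages, years))

        a_x = log_m.mean(axis=1, keepdims=True)              # average age pattern
        U, s, Vt = np.linalg.svd(log_m - a_x, full_matrices=False)
        B_x, K_t = U[:, 0] * s[0], Vt[0]                     # common age response and period index

        slope, intercept = np.polyfit(log_gdp, K_t, 1)
        print(f"K_t ≈ {intercept:.2f} + {slope:.2f} * log(GDP per capita)")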

  8. A nonlinear complementarity approach for the national energy modeling system

    International Nuclear Information System (INIS)

    Gabriel, S.A.; Kydes, A.S.

    1995-01-01

    The National Energy Modeling System (NEMS) is a large-scale mathematical model that computes equilibrium fuel prices and quantities in the U.S. energy sector. At present, to generate these equilibrium values, NEMS sequentially solves a collection of linear programs and nonlinear equations. The NEMS solution procedure then incorporates the solutions of these linear programs and nonlinear equations in a nonlinear Gauss-Seidel approach. The authors describe how the current version of NEMS can be formulated as a particular nonlinear complementarity problem (NCP), thereby possibly avoiding current convergence problems. In addition, they show that the NCP format is equally valid for a more general form of NEMS. They also describe several promising approaches for solving the NCP form of NEMS based on recent Newton-type methods for general NCPs. These approaches share the feature of needing to solve their direction-finding subproblems only approximately. Hence, they can effectively exploit the sparsity inherent in the NEMS NCP.
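    For readers unfamiliar with the format, a nonlinear complementarity problem asks for x ≥ 0 with F(x) ≥ 0 and x_i F_i(x) = 0 for every i. The sketch below solves a tiny made-up NCP (not NEMS) through the Fischer-Burmeister reformulation with a semismooth Newton iteration and a finite-difference Jacobian; the map F is invented for the example.

```python
# Minimal sketch of a nonlinear complementarity problem (NCP), not the NEMS model:
# find x >= 0 with F(x) >= 0 and x_i * F_i(x) = 0 for all i.
import numpy as np


def F(x: np.ndarray) -> np.ndarray:
    """A small, made-up excess-supply map standing in for an energy market."""
    return np.array([
        2.0 * x[0] + 0.5 * x[1] - 1.0,
        0.5 * x[0] + 3.0 * x[1] - 2.0,
    ])


def fischer_burmeister(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """phi(a, b) = 0  <=>  a >= 0, b >= 0, a * b = 0."""
    return np.sqrt(a * a + b * b) - a - b


def residual(x: np.ndarray) -> np.ndarray:
    return fischer_burmeister(x, F(x))


# Semismooth Newton iteration on the Fischer-Burmeister residual,
# with a forward finite-difference Jacobian.
x = np.ones(2)
for _ in range(50):
    r = residual(x)
    if np.linalg.norm(r) < 1e-10:
        break
    J = np.empty((2, 2))
    h = 1e-7
    for j in range(2):
        e = np.zeros(2)
        e[j] = h
        J[:, j] = (residual(x + e) - r) / h
    x = x - np.linalg.solve(J, r)

print("x* =", x, " F(x*) =", F(x))
```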

  9. A Behavioral Decision Making Modeling Approach Towards Hedging Services

    NARCIS (Netherlands)

    Pennings, J.M.E.; Candel, M.J.J.M.; Egelkraut, T.M.

    2003-01-01

    This paper takes a behavioral approach toward the market for hedging services. A behavioral decision-making model is developed that provides insight into how and why owner-managers decide the way they do regarding hedging services. Insight into those choice processes reveals information needed by

  10. Export of microplastics from land to sea. A modelling approach

    NARCIS (Netherlands)

    Siegfried, Max; Koelmans, A.A.; Besseling, E.; Kroeze, C.

    2017-01-01

    Quantifying the transport of plastic debris from river to sea is crucial for assessing the risks of plastic debris to human health and the environment. We present a global modelling approach to analyse the composition and quantity of point-source microplastic fluxes from European rivers to the sea.

  11. The Bipolar Approach: A Model for Interdisciplinary Art History Courses.

    Science.gov (United States)

    Calabrese, John A.

    1993-01-01

    Describes a college level art history course based on the opposing concepts of Classicism and Romanticism. Contends that all creative work, such as film or architecture, can be categorized according to this bipolar model. Includes suggestions for objects to study and recommends this approach for art education at all education levels. (CFR)

  12. Teaching Modeling with Partial Differential Equations: Several Successful Approaches

    Science.gov (United States)

    Myers, Joseph; Trubatch, David; Winkel, Brian

    2008-01-01

    We discuss the introduction and teaching of partial differential equations (heat and wave equations) via modeling physical phenomena, using a new approach that encompasses constructing difference equations and implementing these in a spreadsheet, numerically solving the partial differential equations using the numerical differential equation…
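    A minimal sketch of the difference-equation approach described above, applied to the 1-D heat equation; the grid, boundary and initial conditions are chosen for illustration, and Python stands in for the spreadsheet implementation mentioned in the abstract.

```python
# Minimal sketch of the difference-equation approach for the 1-D heat equation
# u_t = alpha * u_xx; grid, boundary and initial conditions are hypothetical.
import numpy as np

alpha, L, T = 1.0, 1.0, 0.1          # diffusivity, rod length, final time
nx, nt = 21, 500
dx, dt = L / (nx - 1), T / nt
r = alpha * dt / dx**2               # must satisfy r <= 0.5 for this explicit scheme
assert r <= 0.5, "explicit scheme unstable for this step size"

x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)                # initial temperature profile

for _ in range(nt):
    # explicit update: u_i^{n+1} = u_i^n + r * (u_{i+1}^n - 2 u_i^n + u_{i-1}^n)
    u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0               # fixed-temperature (Dirichlet) ends

print("temperature at mid-point after t=0.1:", round(u[nx // 2], 4))
# exact solution for comparison: exp(-pi^2 * alpha * t) * sin(pi * x)
print("exact value:", round(np.exp(-np.pi**2 * alpha * T) * np.sin(np.pi * 0.5), 4))
```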

  13. A review of function modeling : Approaches and applications

    NARCIS (Netherlands)

    Erden, M.S.; Komoto, H.; Van Beek, T.J.; D'Amelio, V.; Echavarria, E.; Tomiyama, T.

    2008-01-01

    This work is aimed at establishing a common frame and understanding of function modeling (FM) for our ongoing research activities. A comparative review of the literature is performed to grasp the various FM approaches with their commonalities and differences. The relations of FM with the research

  14. A novel Monte Carlo approach to hybrid local volatility models

    NARCIS (Netherlands)

    A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility models [Risk, 1994, 7, 18–20; Int. J. Theor. Appl. Finance, 1998, 1, 61–110]. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.
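    The abstract is truncated, so the sketch below only illustrates the generic Monte Carlo simulation framework for a plain local volatility model via an Euler-Maruyama scheme; it is not the hybrid stochastic local volatility method of the cited paper, and the local volatility function and all parameters are invented.

```python
# Generic Euler-Maruyama Monte Carlo for a (pure) local volatility model
# dS = r S dt + sigma_loc(t, S) S dW.  This only illustrates the simulation
# framework; the local volatility surface below is invented.
import numpy as np

rng = np.random.default_rng(42)


def sigma_loc(t: float, s: np.ndarray) -> np.ndarray:
    """Hypothetical local volatility: higher vol for low spot levels (time-independent here)."""
    return 0.2 + 0.1 * np.exp(-s / 100.0)


s0, r, T = 100.0, 0.01, 1.0
n_paths, n_steps = 100_000, 200
dt = T / n_steps

s = np.full(n_paths, s0)
for i in range(n_steps):
    t = i * dt
    dw = rng.standard_normal(n_paths) * np.sqrt(dt)
    s = s + r * s * dt + sigma_loc(t, s) * s * dw

strike = 100.0
call_price = np.exp(-r * T) * np.mean(np.maximum(s - strike, 0.0))
print(f"Monte Carlo call price (K={strike}): {call_price:.3f}")
```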

  15. Model-independent approach for dark matter phenomenology

    Indian Academy of Sciences (India)

    We have studied the phenomenology of dark matter at the ILC and cosmic positron experiments based on a model-independent approach. We have found a strong correlation between dark matter signatures at the ILC and those in the indirect detection experiments of dark matter. Once the dark matter is discovered in the ...

  16. Model-independent approach for dark matter phenomenology ...

    Indian Academy of Sciences (India)

    We have studied the phenomenology of dark matter at the ILC and cosmic positron experiments based on a model-independent approach. We have found a strong correlation between dark matter signatures at the ILC and those in the indirect detection experiments of dark matter. Once the dark matter is discovered ...

  17. The variational approach to the Glashow-Weinberg-Salam model

    International Nuclear Information System (INIS)

    Manka, R.; Sladkowski, J.

    1987-01-01

    The variational approach to the Glashow-Weinberg-Salam model, based on canonical quantization, is presented. It is shown that taking the Becchi-Rouet-Stora symmetry into consideration leads to the correct temperature-dependent effective potential. This generalization of the Weinberg-Coleman potential leads to a first-order phase transition.
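    A schematic high-temperature form that is commonly used to illustrate how a temperature-dependent effective potential can drive a first-order transition (it is not taken from the cited paper) is

    $$V_{\mathrm{eff}}(\phi, T) \;\simeq\; D\,(T^2 - T_0^2)\,\phi^2 \;-\; E\,T\,\phi^3 \;+\; \tfrac{\lambda_T}{4}\,\phi^4,$$

    where the cubic term, generated by thermal loops, creates a barrier between the symmetric and broken minima and thereby makes the transition first order.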

  18. Methodological Approach for Modeling of Multienzyme in-pot Processes

    DEFF Research Database (Denmark)

    Andrade Santacoloma, Paloma de Gracia; Roman Martinez, Alicia; Sin, Gürkan

    2011-01-01

    This paper presents a methodological approach for modeling multi-enzyme in-pot processes. The methodology is exemplified stepwise through the bi-enzymatic production of N-acetyl-D-neuraminic acid (Neu5Ac) from N-acetyl-D-glucosamine (GlcNAc). In this case study, sensitivity analysis is also used ...
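    As a hedged illustration of what a multi-enzyme in-pot model with a sensitivity analysis can look like, the sketch below uses a made-up irreversible two-step Michaelis-Menten cascade and one-at-a-time finite-difference sensitivities; the actual Neu5Ac process model and its parameters in the paper are different.

```python
# Minimal sketch of a two-enzyme in-pot cascade A -> B -> C with made-up
# Michaelis-Menten kinetics, plus a one-at-a-time parameter sensitivity.
# The actual Neu5Ac process model and its parameters are not reproduced here.
import numpy as np
from scipy.integrate import solve_ivp


def cascade(t, y, vmax1, km1, vmax2, km2):
    a, b, c = y
    r1 = vmax1 * a / (km1 + a)       # enzyme 1: A -> B
    r2 = vmax2 * b / (km2 + b)       # enzyme 2: B -> C
    return [-r1, r1 - r2, r2]


def final_product(params, t_end=10.0, y0=(100.0, 0.0, 0.0)):
    sol = solve_ivp(cascade, (0.0, t_end), y0, args=tuple(params), rtol=1e-8)
    return sol.y[2, -1]              # concentration of C at t_end


base = np.array([20.0, 15.0, 10.0, 25.0])    # vmax1, km1, vmax2, km2 (hypothetical)
names = ["vmax1", "km1", "vmax2", "km2"]
c_base = final_product(base)

# Local relative sensitivities d(C)/d(p) * p/C via forward finite differences.
for i, name in enumerate(names):
    p = base.copy()
    p[i] *= 1.01
    sens = (final_product(p) - c_base) / (0.01 * base[i]) * base[i] / c_base
    print(f"relative sensitivity of final product to {name}: {sens:+.3f}")
```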

  19. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform-dependent values. Connectors...
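    A minimal sketch of the repository idea described above: component qualities are looked up per platform and aggregated for a candidate design. Component names, platforms and numbers are hypothetical, and simple summation stands in for whatever aggregation the paper actually uses.

```python
# Minimal sketch of repository-based quality estimation: component qualities
# (execution time, power, area) are looked up per platform and aggregated for a
# candidate design.  Component names, platforms and numbers are hypothetical.
from typing import Dict, List

# repository[platform][component] -> quality attributes
repository: Dict[str, Dict[str, Dict[str, float]]] = {
    "fpga": {
        "fir_filter": {"time_us": 5.0, "power_mw": 40.0, "area_luts": 1200.0},
        "fft":        {"time_us": 12.0, "power_mw": 85.0, "area_luts": 3100.0},
    },
    "dsp": {
        "fir_filter": {"time_us": 9.0, "power_mw": 25.0, "area_luts": 0.0},
        "fft":        {"time_us": 20.0, "power_mw": 30.0, "area_luts": 0.0},
    },
}


def estimate(design: List[str], platform: str) -> Dict[str, float]:
    """Aggregate qualities of a candidate design by simple summation."""
    totals = {"time_us": 0.0, "power_mw": 0.0, "area_luts": 0.0}
    for component in design:
        for key, value in repository[platform][component].items():
            totals[key] += value
    return totals


candidate = ["fir_filter", "fft"]
for platform in repository:
    print(platform, estimate(candidate, platform))
```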

  20. EXTENDED MODEL OF COMPETITIVENESS THROUGH APPLICATION OF NEW APPROACH DIRECTIVES

    Directory of Open Access Journals (Sweden)

    Slavko Arsovski

    2009-03-01

    Full Text Available The basic subject of this work is a model of the impact of the New Approach directives on product quality and safety and on the competitiveness of our companies. The work rests on hypotheses based on experts' experience, since the supporting infrastructure for applying the New Approach directives has not been examined until now: it is not known which Serbian products or industries are covered by New Approach directives and the CE mark, and the effects of using the CE mark are not known. This work should indicate existing reserves in quality and product safety, the level of possible improvement in competitiveness, and the increase in profit attainable by fulfilling the requirements of the New Approach directives.