WorldWideScience

Sample records for model similar improvements

  1. Continuous Improvement and Collaborative Improvement: Similarities and Differences

    DEFF Research Database (Denmark)

    Middel, Rick; Boer, Harry; Fisscher, Olaf

    2006-01-01

    …the similarities and differences between key components of continuous and collaborative improvement, by assessing what is specific to continuous improvement, what is specific to collaborative improvement, and where the two areas of application meet and overlap. The main conclusions are that there are many more similarities than differences between continuous and collaborative improvement. The main differences relate to the role of hierarchy/market, trust, power and commitment to collaboration, all of which are related to differences between the settings in which continuous and collaborative improvement unfold.

  2. Improved personalized recommendation based on a similarity network

    Science.gov (United States)

    Wang, Ximeng; Liu, Yun; Xiong, Fei

    2016-08-01

    A recommender system helps individual users find preferred items rapidly and has attracted extensive attention in recent years. Many successful recommendation algorithms are designed on bipartite networks, such as network-based inference or heat conduction. However, most of these algorithms allocate resources uniformly. That is not reasonable, because uniform allocation reflects neither users' choice preferences nor the influence between users, which leads to non-personalized recommendation results. We propose a personalized recommendation approach that combines a similarity function with the bipartite network to generate a similarity network that improves the resource-allocation process. Our model introduces user influence into the recommender system, and we argue that user influence makes the resource-allocation process more reasonable. We use four different metrics to evaluate our algorithms on three benchmark data sets. Experimental results show that the improved recommendation on a similarity network obtains better accuracy and diversity than several competing approaches.
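
    The similarity-weighted resource allocation described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' exact algorithm: the way the user-similarity matrix enters the spreading step is an assumption.

```python
import numpy as np

def nbi_scores(A, user_sim=None):
    """Network-based inference (resource allocation) on a user-item
    bipartite network. A[u, i] = 1 if user u has collected item i.
    If user_sim is given, it reweights the user-to-item spreading step,
    a rough stand-in for the paper's similarity network (assumed form)."""
    A = np.asarray(A, dtype=float)
    ku = np.maximum(A.sum(axis=1, keepdims=True), 1)  # user degrees
    ki = np.maximum(A.sum(axis=0, keepdims=True), 1)  # item degrees
    spread = A / ku                      # each user splits its unit resource
    if user_sim is not None:             # bias spreading toward similar users
        spread = np.asarray(user_sim, dtype=float) @ spread
    W = (A / ki).T @ spread              # item-to-item transfer weights
    return A @ W.T                       # resource each user sends to each item
```

    Ranking each user's unseen items by these scores yields the recommendation list; plain uniform allocation is recovered when `user_sim` is omitted.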

  3. A COMPARISON OF SEMANTIC SIMILARITY MODELS IN EVALUATING CONCEPT SIMILARITY

    Directory of Open Access Journals (Sweden)

    Q. X. Xu

    2012-08-01

    Full Text Available Semantic similarities are important in concept definition, recognition, categorization, interpretation, and integration. Many semantic similarity models have been established to evaluate the semantic similarities of objects and/or concepts. To find out the suitability and performance of different models in evaluating concept similarities, we compare four main types of models in this paper: the geometric model, the feature model, the network model, and the transformational model. The fundamental principles and main characteristics of these models are first introduced and compared. Land use and land cover concepts of NLCD92 are employed as examples in the case study. The results demonstrate that correlations between these models are very high, possibly because all of these models are designed to simulate the similarity judgement of the human mind.

  4. Self-similar cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Chao, W Z [Cambridge Univ. (UK). Dept. of Applied Mathematics and Theoretical Physics

    1981-07-01

    The kinematics and dynamics of self-similar cosmological models are discussed. The degrees of freedom of the solutions of Einstein's equations for different types of models are listed. The relation between kinematic quantities and the classifications of the self-similarity group is examined. All dust models with local rotational symmetry have been found.

  5. Improved collaborative filtering recommendation algorithm of similarity measure

    Science.gov (United States)

    Zhang, Baofu; Yuan, Baoping

    2017-05-01

    The collaborative filtering recommendation algorithm is one of the most widely used recommendation algorithms in personalized recommender systems. The key is to find the nearest-neighbor set of the active user by using a similarity measure. However, traditional similarity measures mainly focus on the similarity over users' common rating items, but ignore the relationship between the common rating items and all items a user rates. And because the rating matrix is very sparse, the traditional collaborative filtering recommendation algorithm is not highly efficient. In order to obtain better accuracy, based on the consideration of common preferences between users, differences in rating scale, and the scores of common items, this paper presents an improved similarity measure method, and based on this method, a collaborative filtering recommendation algorithm based on similarity improvement is proposed. Experimental results show that the algorithm can effectively improve the quality of recommendation, thus alleviating the impact of data sparseness.
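
    A minimal sketch of the kind of measure the abstract describes: a Pearson correlation over co-rated items, damped by a Jaccard factor that relates the common items to everything either user rated. The exact formula of Zhang and Yuan is not reproduced here; this particular combination is an assumption.

```python
import math

def improved_similarity(ratings_a, ratings_b):
    """Similarity between two users given as {item_id: rating} dicts.
    Pearson over co-rated items handles rating-scale differences;
    the Jaccard factor penalizes users with little overlap (assumed
    combination, illustrating the abstract's idea)."""
    common = set(ratings_a) & set(ratings_b)
    if not common:
        return 0.0
    ma = sum(ratings_a.values()) / len(ratings_a)   # user means
    mb = sum(ratings_b.values()) / len(ratings_b)
    num = sum((ratings_a[i] - ma) * (ratings_b[i] - mb) for i in common)
    da = math.sqrt(sum((ratings_a[i] - ma) ** 2 for i in common))
    db = math.sqrt(sum((ratings_b[i] - mb) ** 2 for i in common))
    pearson = num / (da * db) if da and db else 0.0
    jaccard = len(common) / len(set(ratings_a) | set(ratings_b))
    return pearson * jaccard
```

    The nearest-neighbor set of an active user is then the top-k users under this measure; the Jaccard factor is what ties the common items back to all items each user rated.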

  6. Inter Genre Similarity Modelling For Automatic Music Genre Classification

    OpenAIRE

    Bagci, Ulas; Erzin, Engin

    2009-01-01

    Music genre classification is an essential tool for music information retrieval systems and has found critical applications in various media platforms. Two important problems in automatic music genre classification are feature extraction and classifier design. This paper investigates inter-genre similarity modelling (IGS) to improve the performance of automatic music genre classification. Inter-genre similarity information is extracted over the mis-classified feature population…

  7. Notions of similarity for systems biology models.

    Science.gov (United States)

    Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knüpfer, Christian; Liebermeister, Wolfram; Waltemath, Dagmar

    2018-01-01

    Systems biology models are rapidly increasing in complexity, size and numbers. When building large models, researchers rely on software tools for the retrieval, comparison, combination and merging of models, as well as for version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of 'similarity' may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here we survey existing methods for the comparison of models, introduce quantitative measures for model similarity, and discuss potential applications of combined similarity measures. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on a combination of different model aspects. The six aspects that we define as potentially relevant for similarity are underlying encoding, references to biological entities, quantitative behaviour, qualitative behaviour, mathematical equations and parameters, and network structure. We argue that future similarity measures will benefit from combining these model aspects in flexible, problem-specific ways to mimic users' intuition about model similarity, and to support complex model searches in databases. © The Author 2016. Published by Oxford University Press.
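
    The simplest concrete instance of the combination the authors argue for is a weighted mean over per-aspect similarity scores. The aspect names and weights below are illustrative placeholders, not values from the paper:

```python
def combined_model_similarity(aspect_sims, weights):
    """Combine per-aspect similarities (e.g. encoding, biological
    annotations, behaviour, equations, network structure) into one score.
    aspect_sims and weights are dicts keyed by aspect name; weights are
    user- and problem-specific, as the survey argues."""
    total_w = sum(weights.values())
    return sum(aspect_sims[a] * w for a, w in weights.items()) / total_w
```

    Different retrieval tasks would simply plug in different weight profiles, e.g. emphasizing network structure for model merging but annotations for database search.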

  8. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  9. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar; Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knuepfer, Christian; Liebermeister, Wolfram

    2016-01-01

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  10. Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.

    Science.gov (United States)

    Ye, Jun

    2015-03-01

    In pattern recognition and medical diagnosis, similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposed improved cosine similarity measures of SNSs based on cosine function, including single valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Then, weighted cosine similarity measures of SNSs were introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures was proposed to solve medical diagnosis problems with simplified neutrosophic information. We compared the improved cosine similarity measures of SNSs with existing cosine similarity measures of SNSs by numerical examples to demonstrate their effectiveness and rationality in overcoming some shortcomings of the existing measures in some cases. In the medical diagnosis method, a proper diagnosis can be found through the cosine similarity measures between the symptoms and the considered diseases, both represented by SNSs. The method based on the improved cosine similarity measures was then applied to two medical diagnosis problems to show the applications and effectiveness of the proposed method. Both numerical examples demonstrated that the improved cosine similarity measures of SNSs based on the cosine function can overcome the shortcomings of the existing cosine similarity measures between two vectors in some cases. In the two medical diagnosis problems, the diagnoses using various similarity measures of SNSs yielded identical diagnosis results and demonstrated the effectiveness and rationality of the diagnosis method proposed in this paper.
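
    For single-valued neutrosophic sets, an improved cosine measure of the kind described averages a cosine of the largest component-wise difference of the truth, indeterminacy, and falsity degrees. The formula below follows the form reported in the literature on Ye's measures; treat the details as an assumption rather than a verbatim transcription:

```python
import math

def improved_cosine_similarity(A, B):
    """Improved cosine similarity between two single-valued neutrosophic
    sets A and B, each a list of (truth, indeterminacy, falsity) triples
    in [0, 1]. Identical sets score 1; maximally different elements
    contribute cos(pi/2) = 0."""
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb) in zip(A, B):
        d = max(abs(ta - tb), abs(ia - ib), abs(fa - fb))
        total += math.cos(math.pi * d / 2)
    return total / len(A)
```

    In the diagnosis setting, a patient's symptom profile and each candidate disease are encoded as SNSs, and the disease with the highest similarity score is selected.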

  11. On self-similar Tolman models

    International Nuclear Information System (INIS)

    Maharaj, S.D.

    1988-01-01

    The self-similar spherically symmetric solutions of the Einstein field equations for the case of dust are identified. These form a subclass of the Tolman models. These self-similar models contain the solution recently presented by Chi [J. Math. Phys. 28, 1539 (1987)], thereby refuting the claim of having found a new solution to the Einstein field equations.

  12. Face and body recognition show similar improvement during childhood.

    Science.gov (United States)

    Bank, Samantha; Rhodes, Gillian; Read, Ainsley; Jeffery, Linda

    2015-09-01

    Adults are proficient in extracting identity cues from faces. This proficiency develops slowly during childhood, with performance not reaching adult levels until adolescence. Bodies are similar to faces in that they convey identity cues and rely on specialized perceptual mechanisms. However, it is currently unclear whether body recognition mirrors the slow development of face recognition during childhood. Recent evidence suggests that body recognition develops faster than face recognition. Here we measured body and face recognition in 6- and 10-year-old children and adults to determine whether these two skills show different amounts of improvement during childhood. We found no evidence that they do. Face and body recognition showed similar improvement with age, and children, like adults, were better at recognizing faces than bodies. These results suggest that the mechanisms of face and body memory mature at a similar rate or that improvement of more general cognitive and perceptual skills underlies improvement of both face and body recognition. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Lagrangian-similarity diffusion-deposition model

    International Nuclear Information System (INIS)

    Horst, T.W.

    1979-01-01

    A Lagrangian-similarity diffusion model has been incorporated into the surface-depletion deposition model. This model predicts vertical concentration profiles far downwind of the source that agree with those of a one-dimensional gradient-transfer model

  14. Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses

    OpenAIRE

    Jun Ye

    2014-01-01

    In pattern recognition and medical diagnosis, similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposed improved cosine similarity measures of SNSs based on cosine function, including single valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Then, weighted cosine similarity measures of SNSs were introduced…

  15. The predictive model on the user reaction time using the information similarity

    International Nuclear Information System (INIS)

    Lee, Sung Jin; Heo, Gyun Young; Chang, Soon Heung

    2005-01-01

    Human performance is frequently degraded because people forget. Memory is one of the brain processes that are important when trying to understand how people process information. Although a large number of studies have been made on human performance, little is known about the similarity effect on human performance. The purpose of this paper is to propose and validate a quantitative, predictive model of human response time in the user interface based on the concept of similarity. However, it is not easy to explain human performance with similarity or information amount alone. We are confronted by two difficulties: making a quantitative model of human response time based on similarity, and validating the proposed model by experimental work. We built the quantitative model on Hick's law and the law of practice. In addition, we validated the model under various experimental conditions by measuring participants' response times in a computer-based display environment. Experimental results reveal that human performance is improved by similarity in the user interface. We think the proposed model is useful for the user interface design and evaluation phases.
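
    A toy version of such a model can be written down directly from the two laws the abstract names. The coefficients and the way similarity enters are illustrative assumptions; the paper's actual model is not reproduced here:

```python
import math

def predicted_response_time(n_alternatives, trial, similarity,
                            a=0.2, b=0.15, c=0.4):
    """Hypothetical response-time model combining Hick's law
    (RT grows with log2 of the number of alternatives) and the power
    law of practice (RT falls with repeated trials), scaled down as
    interface similarity (in [0, 1]) rises."""
    hick = b * math.log2(n_alternatives + 1)          # choice complexity
    practice = c * trial ** -0.5                       # speed-up with practice
    return a + (hick + practice) * (1.0 - similarity)  # similarity helps
```

    The sketch reproduces the qualitative findings: more alternatives slow users down, practice and interface similarity speed them up.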

  16. A Novel Hybrid Similarity Calculation Model

    Directory of Open Access Journals (Sweden)

    Xiaoping Fan

    2017-01-01

    Full Text Available This paper addresses the problems of similarity calculation in traditional nearest-neighbor collaborative filtering recommendation algorithms, especially their failure to describe dynamic user preference. Proceeding from the perspective of solving the problem of user interest drift, a new hybrid similarity calculation model is proposed in this paper. The model consists of two parts: on the one hand, it uses function fitting to describe users' rating behaviors and rating preferences; on the other hand, it employs the Random Forest algorithm to take user attribute features into account. The paper then combines the two parts to build a new hybrid similarity calculation model for user recommendation. Experimental results show that, for data sets of different sizes, the model's prediction precision is higher than that of traditional recommendation algorithms.
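
    The two-part idea can be sketched as follows. To keep the sketch self-contained, a plain cosine over attribute vectors stands in for the paper's Random Forest component, and a linear fit of ratings over time stands in for its function fitting; both substitutions are assumptions:

```python
import numpy as np

def hybrid_similarity(times_a, ratings_a, times_b, ratings_b,
                      attrs_a, attrs_b, alpha=0.5):
    """Hybrid user similarity: (1) fit each user's ratings over time with
    a line to capture drifting preference and compare the fitted slopes;
    (2) compare static attribute vectors. alpha blends the two parts."""
    slope_a = np.polyfit(times_a, ratings_a, 1)[0]
    slope_b = np.polyfit(times_b, ratings_b, 1)[0]
    trend_sim = 1.0 / (1.0 + abs(slope_a - slope_b))   # closer slopes -> 1
    va, vb = np.asarray(attrs_a, float), np.asarray(attrs_b, float)
    attr_sim = float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
    return alpha * trend_sim + (1 - alpha) * attr_sim
```

    Users whose preferences drift in opposite directions thus score lower even when their static attributes match, which is the point of modeling interest drift.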

  17. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    Energy Technology Data Exchange (ETDEWEB)

    Ovacik, Meric A. [Chemical and Biochemical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States); Androulakis, Ioannis P., E-mail: yannis@rci.rutgers.edu [Chemical and Biochemical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States); Biomedical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States)

    2013-09-15

    Pathway-based information has become an important source of information for both establishing evolutionary relationships and understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison in order to inform evolutionary relationships and to extrapolate species differences used in a number of different applications including drug and toxicity testing. Cross-species comparison of metabolic pathways is complex as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for sequence similarity in addition to reaction alignment method. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing glycolysis and citrate cycle pathways conservation. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism as well as a more complete study involving 36 pathways common in all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway specific cross-species similarities and differences based on NCBI taxonomy.
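
    One simple way to fold enzyme sequence similarity into reaction alignment, consistent with the idea above though not necessarily the authors' exact scoring, is to let each shared reaction contribute its enzymes' sequence identity instead of a flat count:

```python
def pathway_similarity(reactions_a, reactions_b, enzyme_identity):
    """Cross-species pathway similarity: reactions present in both species
    contribute the sequence identity (in [0, 1], e.g. from a BLAST
    alignment) of the enzymes catalyzing them, normalized by the union of
    reactions. The scoring scheme is an illustrative assumption."""
    if not (reactions_a or reactions_b):
        return 0.0
    shared = set(reactions_a) & set(reactions_b)
    score = sum(enzyme_identity.get(r, 0.0) for r in shared)
    return score / len(set(reactions_a) | set(reactions_b))
```

    With flat identities of 1.0 this reduces to plain reaction alignment; graded identities let two species sharing the same reactions but with divergent enzymes score lower, which is what improves the phylogenetic signal.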

  18. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    International Nuclear Information System (INIS)

    Ovacik, Meric A.; Androulakis, Ioannis P.

    2013-01-01

    Pathway-based information has become an important source of information both for establishing evolutionary relationships and for understanding the mode of action of a chemical or pharmaceutical across species. Cross-species comparison of pathways can address two broad questions: comparison in order to inform evolutionary relationships, and extrapolation of species differences for a number of different applications, including drug and toxicity testing. Cross-species comparison of metabolic pathways is complex, as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for enzyme sequence similarity in addition to reaction alignment. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing the conservation of the glycolysis and citrate cycle pathways. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism, as well as a more complete study involving 36 pathways common to all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway-specific cross-species similarities and differences based on NCBI taxonomy.

  19. A little similarity goes a long way: the effects of peripheral but self-revealing similarities on improving and sustaining interracial relationships.

    Science.gov (United States)

    West, Tessa V; Magee, Joe C; Gordon, Sarah H; Gullett, Lindy

    2014-07-01

    Integrating theory on close relationships and intergroup relations, we construct a manipulation of similarity that we demonstrate can improve interracial interactions across different settings. We find that manipulating perceptions of similarity on self-revealing attributes that are peripheral to the interaction improves interactions in cross-race dyads and racially diverse task groups. In a getting-acquainted context, we demonstrate that the belief that one's different-race partner is similar to oneself on self-revealing, peripheral attributes leads to less anticipatory anxiety than the belief that one's partner is similar on peripheral, non-self-revealing attributes. In another dyadic context, we explore the range of benefits that perceptions of peripheral, self-revealing similarity can bring to different-race interaction partners and find (a) less anxiety during interaction, (b) greater interest in sustained contact with one's partner, and (c) stronger accuracy in perceptions of one's partner's relationship intentions. By contrast, participants in same-race interactions were largely unaffected by these manipulations of perceived similarity. Our final experiment shows that among small task groups composed of racially diverse individuals, those whose members perceive peripheral, self-revealing similarity perform better than those who perceive dissimilarity. Implications for using this approach to improve interracial interactions across different goal-driven contexts are discussed.

  20. Collaborative Filtering Recommendation Based on Trust Model with Fused Similar Factor

    Directory of Open Access Journals (Sweden)

    Ye Li

    2017-01-01

    Full Text Available Recommender systems are beneficial to e-commerce sites, providing customers with product information and recommendations; recommender systems are currently widely used in many fields. In an era of information explosion, the key challenge for a recommender system is to obtain valid information from the tremendous amount available and produce high-quality recommendations. However, when facing a large amount of information, the traditional collaborative filtering algorithm usually suffers from a high degree of sparseness, which ultimately leads to low-accuracy recommendations. To tackle this issue, we propose a novel algorithm named Collaborative Filtering Recommendation Based on Trust Model with Fused Similar Factor, which is based on the trust model and is combined with user similarity. The novel algorithm takes into account the degree of interest overlap between two users and outperforms the recommendation based on the trust model on the criteria of Precision, Recall, Diversity and Coverage. Additionally, the proposed model can effectively improve the efficiency of the collaborative filtering algorithm and achieve high performance.

  1. Similar words analysis based on POS-CBOW language model

    Directory of Open Access Journals (Sweden)

    Dongru RUAN

    2015-10-01

    Full Text Available Similar words analysis is one of the important aspects in the field of natural language processing, and it has important research and application value in text classification, machine translation and information recommendation. Focusing on the features of Sina Weibo's short texts, this paper presents a language model named POS-CBOW, a continuous bag-of-words language model with a filtering layer and a part-of-speech tagging layer. The proposed approach can adjust the word vectors' similarity according to the cosine similarity and the word vectors' part-of-speech metrics. It can also filter the similar-words set based on the statistical analysis model. The experimental results show that the similar words analysis algorithm based on the proposed POS-CBOW language model is better than that based on the traditional CBOW language model.
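
    The core adjustment the abstract describes, cosine similarity of word vectors modulated by part-of-speech agreement, can be sketched as below. The multiplicative penalty for mismatched tags is an illustrative assumption, not the paper's exact formulation:

```python
import math

def pos_weighted_similarity(vec_a, pos_a, vec_b, pos_b, pos_penalty=0.5):
    """Cosine similarity of two word vectors, down-weighted when the
    words' part-of-speech tags differ."""
    dot = sum(x * y for x, y in zip(vec_a, vec_b))
    na = math.sqrt(sum(x * x for x in vec_a))
    nb = math.sqrt(sum(y * y for y in vec_b))
    cos = dot / (na * nb) if na and nb else 0.0
    return cos if pos_a == pos_b else cos * pos_penalty
```

    Ranking a vocabulary by this score for a query word, then filtering low scorers, mirrors the filtering layer described above.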

  2. Similarity and Modeling in Science and Engineering

    CERN Document Server

    Kuneš, Josef

    2012-01-01

    The present text sets itself in relief to other titles on the subject in that it addresses the means and methodologies versus a narrow specific-task oriented approach. Concepts and their developments which evolved to meet the changing needs of applications are addressed. This approach provides the reader with a general tool-box to apply to their specific needs. Two important tools are presented: dimensional analysis and the similarity analysis methods. The fundamental point of view, enabling one to sort all models, is that of information flux between a model and an original expressed by the similarity and abstraction. Each chapter includes original examples and applications. In this respect, the models can be divided into several groups. The following models are dealt with separately by chapter; mathematical and physical models, physical analogues, deterministic, stochastic, and cybernetic computer models. The mathematical models are divided into asymptotic and phenomenological models. The phenomenological m...

  3. Patient Similarity in Prediction Models Based on Health Data: A Scoping Review

    Science.gov (United States)

    Sharafoddini, Anis; Dubin, Joel A

    2017-01-01

    …data, wavelet transform and term frequency-inverse document frequency methods were employed to extract predictors. Selecting predictors with potential to highlight special cases and defining new patient similarity metrics were among the gaps identified in the existing literature that provide starting points for future work. Patient status prediction models based on patient similarity and health data offer exciting potential for personalizing and ultimately improving health care, leading to better patient outcomes. PMID:28258046

  4. Self-Similar Symmetry Model and Cosmic Microwave Background

    Directory of Open Access Journals (Sweden)

    Tomohide Sonoda

    2016-05-01

    Full Text Available In this paper, we present the self-similar symmetry (SSS) model that describes the hierarchical structure of the universe. The model is based on the concept of self-similarity, which explains the symmetry of the cosmic microwave background (CMB). The approximate length and time scales of the six hierarchies of the universe (grand unification, electroweak unification, the atom, the pulsar, the solar system, and the galactic system) are derived from the SSS model. In addition, the model implies that the electron mass and gravitational constant could vary with the CMB radiation temperature.

  5. Model-observer similarity, error modeling and social learning in rhesus macaques.

    Directory of Open Access Journals (Sweden)

    Elisabetta Monfardini

    Full Text Available Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended, on the contrary, to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, like preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.

  6. Face Recognition Performance Improvement using a Similarity Score of Feature Vectors based on Probabilistic Histograms

    Directory of Open Access Journals (Sweden)

    SRIKOTE, G.

    2016-08-01

    Full Text Available This paper proposes an improved face recognition algorithm to identify face mismatch pairs in cases of incorrect decisions. The primary feature of this method is to deploy the similarity score with respect to Gaussian components between two previously unseen faces. Unlike conventional vector distance measurement, our algorithm also considers the plot of the summation of the similarity index versus face feature vector distance. A mixture-of-Gaussians model of labeled faces is also widely applicable to different biometric system parameters. Comparative evaluations show that the efficiency of the proposed algorithm is superior to that of the conventional algorithm, by an average accuracy of up to 1.15% and 16.87% when compared with 3x3 Multi-Region Histogram (MRH) direct-bag-of-features and Principal Component Analysis (PCA)-based face recognition systems, respectively. The experimental results show that the similarity score is more discriminative for face recognition than the feature distance. Experimental results on the Labeled Faces in the Wild (LFW) data set demonstrate that our algorithm is suitable for real-world probe-to-gallery identification in face recognition systems. Moreover, the proposed method can also be applied to other recognition systems, further improving their recognition scores.

  7. Similarity and accuracy of mental models formed during nursing handovers: A concept mapping approach.

    Science.gov (United States)

    Drach-Zahavy, Anat; Broyer, Chaya; Dagan, Efrat

    2017-09-01

    Shared mental models are crucial for constructing mutual understanding of the patient's condition during a clinical handover. Yet scant research, if any, has empirically explored the mental models of the parties involved in a clinical handover. This study aimed to examine the similarities among mental models of incoming and outgoing nurses, and to test their accuracy by comparing them with mental models of expert nurses. A cross-sectional study exploring nurses' mental models via the concept mapping technique across 40 clinical handovers. Data were collected via concept mapping of the incoming, outgoing, and expert nurses' mental models (a total of 120 concept maps). Similarity and accuracy indexes for concepts and associations were calculated to compare the different maps. About one fifth of the concepts emerged in both outgoing and incoming nurses' concept maps (concept similarity=23%±10.6). Concept accuracy indexes were 35%±18.8 for incoming and 62%±19.6 for outgoing nurses' maps. Although incoming nurses absorbed fewer concepts and associations (23% and 12%, respectively), they partially closed the gap (35% and 22%, respectively) relative to expert nurses' maps. The correlations between concept similarities and incoming as well as outgoing nurses' concept accuracy were significant (r=0.43, p<0.01; r=0.68, p<0.01, respectively). Finally, in 90% of the maps, outgoing nurses added information concerning the processes enacted during the shift, beyond the expert nurses' gold standard. Two seemingly contradictory processes in the handover were identified: "information loss", captured by the low similarity indexes among the mental models of incoming and outgoing nurses; and "information restoration", based on the accuracy indexes among the mental models of the incoming nurses. Based on mental model theory, we propose possible explanations for these processes and derive implications for how to improve a clinical handover.

  8. Vere-Jones' self-similar branching model

    International Nuclear Information System (INIS)

    Saichev, A.; Sornette, D.

    2005-01-01

    Motivated by its potential application to earthquake statistics as well as by its intrinsic interest in the theory of branching processes, we study the exactly self-similar branching process introduced recently by Vere-Jones. This model extends the ETAS class of conditional self-excited branching point-processes of triggered seismicity by removing the problematic need for a minimum (as well as maximum) earthquake size. To make the theory convergent without the need for the usual ultraviolet and infrared cutoffs, the distribution of magnitudes m' of the first-generation daughters of a mother of magnitude m has two branches: m' < m with exponent β−d and m' > m with exponent β+d, where β and d are two positive parameters. We investigate the condition and nature of the subcritical, critical, and supercritical regimes in this model and in an extended version interpolating smoothly between several models. We predict that the distribution of magnitudes of events triggered by a mother of magnitude m over all generations also has two branches: m' < m with exponent β−h and m' > m with exponent β+h, with h=d√(1-s), where s is the fraction of triggered events. This corresponds to a renormalization of the exponent d into h by the hierarchy of successive generations of triggered events. For a significant part of the parameter space, the distribution of magnitudes over a full catalog summed over an average steady flow of spontaneous sources (immigrants) reproduces the distribution of the spontaneous sources with a single branch and is blind to the exponents β, d of the distribution of triggered events. Since the distribution of earthquake magnitudes is usually obtained with catalogs including many sequences, we conclude that the two branches of the distribution of aftershocks are not directly observable and the model is compatible with real seismic catalogs. In summary, the exactly self-similar Vere-Jones model provides an attractive new approach to modeling triggered seismicity, which alleviates delicate questions on the role of

  9. Similarity-based search of model organism, disease and drug effect phenotypes

    KAUST Repository

    Hoehndorf, Robert

    2015-02-19

    Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions, druggable therapeutic targets, and determination of pathogenicity. Results: We have developed PhenomeNET 2, a system that enables similarity-based searches over a large repository of phenotypes in real-time. It can be used to identify strains of model organisms that are phenotypically similar to human patients, diseases that are phenotypically similar to model organism phenotypes, or drug effect profiles that are similar to the phenotypes observed in a patient or model organism. PhenomeNET 2 is available at http://aber-owl.net/phenomenet. Conclusions: Phenotype-similarity searches can provide a powerful tool for the discovery and investigation of molecular mechanisms underlying an observed phenotypic manifestation. PhenomeNET 2 facilitates user-defined similarity searches and allows researchers to analyze their data within a large repository of human, mouse and rat phenotypes.

  10. Perception of similarity: a model for social network dynamics

    International Nuclear Information System (INIS)

    Javarone, Marco Alberto; Armano, Giuliano

    2013-01-01

    Some properties of social networks (e.g., the mixing patterns and the community structure) appear deeply influenced by the individual perception of people. In this work we map behaviors by considering the similarity and popularity of people, also assuming that each person has his/her own perception and interpretation of similarity. Although investigated in different ways (depending on the specific scientific framework), from a computational perspective similarity is typically calculated as a distance measure. In accordance with this view, to represent social network dynamics we developed an agent-based model on top of a hyperbolic space on which individual distance measures are calculated. Simulations, performed in accordance with the proposed model, generate small-world networks that exhibit a community structure. We deem this model to be valuable for analyzing the relevant properties of real social networks. (paper)
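The distance computation underlying such a model can be sketched in a few lines. Below is a minimal illustration of a polar-coordinate distance on the hyperbolic plane, the kind of metric that can serve as an individual similarity measure; the coordinates, the interpretation of the radial coordinate as popularity, and the function name are illustrative assumptions, not taken from the paper.

```python
import math

def hyperbolic_distance(r1, t1, r2, t2):
    """Distance between two points (r, theta) on the hyperbolic plane:
    cosh(d) = cosh(r1)cosh(r2) - sinh(r1)sinh(r2)cos(dtheta)."""
    dtheta = math.pi - abs(math.pi - abs(t1 - t2) % (2 * math.pi))
    arg = (math.cosh(r1) * math.cosh(r2)
           - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
    return math.acosh(max(arg, 1.0))  # guard against float round-off below 1

# Nodes at similar angles (similar attributes) are close; popularity is
# commonly encoded by the radial coordinate in such embeddings.
d_similar = hyperbolic_distance(2.0, 0.1, 2.0, 0.2)
d_dissimilar = hyperbolic_distance(2.0, 0.1, 2.0, math.pi)
```

Agents that perceive each other as similar thus end up at small hyperbolic distance, which is what lets the simulated networks develop community structure.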

  11. Self-similar two-particle separation model

    DEFF Research Database (Denmark)

    Lüthi, Beat; Berg, Jacob; Ott, Søren

    2007-01-01

    We present a new stochastic model for relative two-particle separation in turbulence. Inspired by material line stretching, we suggest that a similar process also occurs beyond the viscous range, with time scaling according to the longitudinal second-order structure function S2(r), e.g., in the inertial range as ε^(−1/3) r^(2/3). Particle separation is modeled as a Gaussian process without invoking information on Eulerian acceleration statistics or on the precise shapes of Eulerian velocity distribution functions. The time scale is a function of S2(r) and thus of the Lagrangian evolving separation. The model predictions agree with numerical and experimental results for various initial particle separations. We present model results for fixed-time and fixed-scale statistics. We find that for the Richardson-Obukhov law, i.e., ⟨r²⟩ = g ε t³, to hold and to also be observed in experiments, high Reynolds...

  12. Continuous Improvement and Collaborative Improvement: Similarities and Differences

    NARCIS (Netherlands)

    Middel, H.G.A.; Boer, Harm; Fisscher, O.A.M.

    2006-01-01

    A substantial body of theoretical and practical knowledge has been developed on continuous improvement. However, there is still a considerable lack of empirically grounded contributions and theories on collaborative improvement, that is, continuous improvement in an inter-organizational setting.

  13. Simulation and similarity using models to understand the world

    CERN Document Server

    Weisberg, Michael

    2013-01-01

    In the 1950s, John Reber convinced many Californians that the best way to solve the state's water shortage problem was to dam up the San Francisco Bay. Against massive political pressure, Reber's opponents persuaded lawmakers that doing so would lead to disaster. They did this not by empirical measurement alone, but also through the construction of a model. Simulation and Similarity explains why this was a good strategy while simultaneously providing an account of modeling and idealization in modern scientific practice. Michael Weisberg focuses on concrete, mathematical, and computational models in his consideration of the nature of models, the practice of modeling, and nature of the relationship between models and real-world phenomena. In addition to a careful analysis of physical, computational, and mathematical models, Simulation and Similarity offers a novel account of the model/world relationship. Breaking with the dominant tradition, which favors the analysis of this relation through logical notions suc...

  14. A study on the quantitative model of human response time using the amount and the similarity of information

    International Nuclear Information System (INIS)

    Lee, Sung Jin

    2006-02-01

    The mental capacity to retain or recall information, i.e., memory, is related to human performance during information processing. Although a large number of studies have been carried out on human performance, little is known about the similarity effect. The purpose of this study was to propose and validate a quantitative, predictive model of human response time in the user interface based on the concepts of information amount, similarity, and degree of practice. Human performance is difficult to explain by similarity or information amount alone. Two challenges had to be addressed: constructing a quantitative model of human response time, and validating the proposed model experimentally. A quantitative model based on Hick's law, the law of practice, and similarity theory was developed. The model was validated under various experimental conditions by measuring participants' response times in a computer-based display environment. Human performance improved with the degree of similarity and practice in the user interface. We also found an age-related effect: performance degraded with increasing age. The proposed model may be useful for training operators who will handle such interfaces and for predicting human performance under changes in system design.
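The two ingredients named in the abstract, Hick's law and the law of practice, can be combined in a simple sketch. The functional form and every constant below are illustrative assumptions; the study's fitted model and its similarity term are not reproduced here.

```python
import math

def response_time(n_alternatives, practice_trials, a=0.2, b=0.15, c=0.4, alpha=0.3):
    """Illustrative response-time sketch.

    Hick's law:            RT grows as a + b * log2(n + 1) with the
                           number n of equally likely alternatives.
    Power law of practice: the time shrinks toward an asymptote as the
                           number of practice trials grows, ~ trials^-alpha.
    All constants (a, b, c, alpha) are made up for illustration.
    """
    hick = a + b * math.log2(n_alternatives + 1)
    practice = 1.0 + c * practice_trials ** (-alpha)
    return hick * practice

# More alternatives -> longer response time; more practice -> shorter.
rt_novice = response_time(8, practice_trials=1)
rt_expert = response_time(8, practice_trials=100)
```

Such a form reproduces the two qualitative effects the study reports: response time rises with information amount and falls with practice.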

  15. Modeling Timbre Similarity of Short Music Clips.

    Science.gov (United States)

    Siedenburg, Kai; Müllensiefen, Daniel

    2017-01-01

    There is evidence from a number of recent studies that most listeners are able to extract information related to song identity, emotion, or genre from music excerpts with durations in the range of tenths of seconds. Because of these very short durations, timbre as a multifaceted auditory attribute appears a plausible candidate for the type of features that listeners make use of when processing short music excerpts. However, the importance of timbre in listening tasks that involve short excerpts has not yet been demonstrated empirically. Hence, the goal of this study was to develop a method for exploring to what degree similarity judgments of short music clips can be modeled with low-level acoustic features related to timbre. We utilized the similarity data from two large samples of participants: Sample I was obtained via an online survey, used 16 clips of 400 ms length, and contained responses of 137,339 participants. Sample II was collected in a lab environment, used 16 clips of 800 ms length, and contained responses from 648 participants. Our model used two sets of audio features, which included commonly used timbre descriptors and the well-known Mel-frequency cepstral coefficients as well as their temporal derivatives. In order to predict pairwise similarities, the resulting distances between clips in terms of their audio features were used as predictor variables with partial least-squares regression. We found that a sparse selection of three to seven features from both descriptor sets, mainly encoding the coarse shape of the spectrum as well as spectrotemporal variability, best predicted similarities across the two sets of sounds. Notably, the inclusion of non-acoustic predictors of musical genre and record release date allowed much better generalization performance and explained up to 50% of shared variance (R²) between observations and model predictions. Overall, the results of this study empirically demonstrate that both acoustic features related

  16. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    Science.gov (United States)

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. This unit describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
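The workflow described, storing sequences and similarity-search results relationally and carving out focused library subsets, can be sketched with SQLite. The schema below is a simplified assumption for illustration, not the actual seqdb_demo schema from the unit.

```python
import sqlite3

# Minimal sketch of a seqdb_demo-style layout (a simplified assumption):
# one table of protein sequences, one of stored similarity-search hits.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, seq TEXT);
CREATE TABLE hit (query TEXT, subject TEXT, evalue REAL,
                  FOREIGN KEY (subject) REFERENCES protein (acc));
""")
con.executemany("INSERT INTO protein VALUES (?, ?, ?)",
                [("P1", "E. coli", "MKV"), ("P2", "E. coli", "MAT"),
                 ("P3", "H. sapiens", "MSD")])
con.executemany("INSERT INTO hit VALUES (?, ?, ?)",
                [("Q1", "P1", 1e-30), ("Q1", "P3", 0.5)])

# A library subset restricted to one taxon: a smaller, more focused
# search space of the kind that can sharpen similarity-search statistics.
subset = [acc for (acc,) in con.execute(
    "SELECT acc FROM protein WHERE taxon = ? ORDER BY acc", ("E. coli",))]

# Stored search results reused for later analysis: keep significant hits.
significant = [s for (s,) in con.execute(
    "SELECT subject FROM hit WHERE evalue < 1e-5")]
```

The same pattern scales to genome-wide result sets: once hits live in a table, subsetting, joining against annotations, and re-filtering by E-value are single queries rather than custom scripts.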

  17. Does model performance improve with complexity? A case study with three hydrological models

    Science.gov (United States)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time also simple conceptual models have advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in case of runoff but not for soil moisture. Furthermore the most sophisticated PREVAH model shows an added value compared to the HBV model only in case of soil moisture. Focusing on extreme events we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance in lower altitudes as opposed to (pre-) alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).

  18. APPLICABILITY OF SIMILARITY CONDITIONS TO ANALOGUE MODELLING OF TECTONIC STRUCTURES

    Directory of Open Access Journals (Sweden)

    Mikhail A. Goncharov

    2010-01-01

    Full Text Available The publication is aimed at comparing concepts of V.V. Belousov and M.V. Gzovsky, outstanding researchers who established the fundamentals of tectonophysics in Russia, specifically similarity conditions in application to tectonophysical modeling. Quotations from their publications illustrate differences in their views. In this respect, we can reckon V.V. Belousov as a «realist» as he supported «the liberal point of view» [Methods of modelling…, 1988, p. 21–22], whereas M.V. Gzovsky can be regarded as an «idealist» as he believed that similarity conditions should be mandatorily applied to ensure correctness of physical modeling of tectonic deformations and structures [Gzovsky, 1975, pp. 88 and 94]. Objectives of the present publication are (1) to be another reminder about the desirability of compliance with similarity conditions in experimental tectonics; (2) to point out difficulties in ensuring such compliance; (3) to give examples which bring out the fact that similarity conditions are often met per se, i.e. automatically observed; and (4) to show that modeling can be simplified in some cases without compromising quantitative estimations of parameters of structure formation. (1) Physical modelling of tectonic deformations and structures should be conducted, if possible, in compliance with conditions of geometric and physical similarity between experimental models and corresponding natural objects. In any case, a researcher should have a clear vision of the conditions applicable to each particular experiment. (2) Application of similarity conditions is often challenging due to unavoidable difficulties caused by the following: (a) imperfection of experimental equipment and technologies (Fig. 1 to 3); (b) uncertainties in estimating parameters of formation of natural structures, including the main ones: structure size (Fig. 4), time of formation (Fig. 5), and deformation properties of the medium wherein such structures are formed, including, first of all, viscosity (Fig. 6

  19. An optimization model for improving highway safety

    Directory of Open Access Journals (Sweden)

    Promothes Saha

    2016-12-01

    Full Text Available This paper developed a traffic safety management system (TSMS) for improving safety on county paved roads in Wyoming. A TSMS is a strategic and systematic process to improve the safety of a roadway network. When funding is limited, it is important to identify the best combination of safety improvement projects to provide the most benefit to society in terms of crash reduction. The factors included in the proposed optimization model are the annual safety budget, roadway inventory, roadway functional classification, historical crashes, safety improvement countermeasures, the costs and crash reduction factors (CRFs) associated with safety improvement countermeasures, and average daily traffic (ADT). This paper demonstrated how the proposed model can identify the best combination of safety improvement projects to maximize the safety benefits in terms of reducing overall crash frequency. Although the proposed methodology was implemented on the county paved road network of Wyoming, it could be easily modified for potential implementation on the Wyoming state highway system. Other states can also benefit by implementing a similar program within their jurisdictions.
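The core selection problem, picking the combination of projects that maximizes crash reduction under a budget, is a knapsack-style optimization and can be sketched directly. The benefit formula (historical crashes times CRF), the project data, and the exhaustive search below are illustrative assumptions; the paper's actual model and ADT weighting are not reproduced.

```python
from itertools import combinations

def best_project_mix(projects, budget):
    """Exhaustively search project combinations, keeping the feasible one
    (total cost within budget) with the largest expected crash reduction,
    here taken as crashes * CRF per project.  Fine for small candidate
    sets; a real TSMS would use integer programming at scale."""
    best, best_benefit = (), 0.0
    for k in range(1, len(projects) + 1):
        for combo in combinations(projects, k):
            cost = sum(p["cost"] for p in combo)
            benefit = sum(p["crashes"] * p["crf"] for p in combo)
            if cost <= budget and benefit > best_benefit:
                best, best_benefit = combo, benefit
    return [p["name"] for p in best], best_benefit

# Hypothetical candidate projects (costs in $1000s, CRF = crash
# reduction factor of the countermeasure).
projects = [
    {"name": "rumble strips",     "cost": 40, "crashes": 12, "crf": 0.25},
    {"name": "curve signing",     "cost": 25, "crashes": 10, "crf": 0.30},
    {"name": "shoulder widening", "cost": 90, "crashes": 20, "crf": 0.35},
]
chosen, reduction = best_project_mix(projects, budget=100)
```

With this toy data the single expensive project beats the two cheap ones combined, which is exactly the kind of non-obvious trade-off the optimization is meant to surface.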

  20. Efficient Adoption and Assessment of Multiple Process Improvement Reference Models

    Directory of Open Access Journals (Sweden)

    Simona Jeners

    2013-06-01

    Full Text Available A variety of reference models such as CMMI, COBIT or ITIL support IT organizations in improving their processes. These process improvement reference models (IRMs) cover different domains such as IT development, IT services or IT governance, but also share some similarities. As there are organizations that address multiple domains and need to coordinate their processes in their improvement efforts, we present MoSaIC, an approach to support organizations in efficiently adopting and conforming to multiple IRMs. Our solution realizes a semantic integration of IRMs based on common meta-models. The resulting IRM integration model enables organizations to efficiently implement and assess multiple IRMs and to benefit from synergy effects.

  1. Numerical study of similarity in prototype and model pumped turbines

    International Nuclear Information System (INIS)

    Li, Z J; Wang, Z W; Bi, H L

    2014-01-01

    A similarity study of prototype and model pumped turbines is performed by numerical simulation, and a partial discharge case is analysed in detail. It is found that in the RSI (rotor-stator interaction) region, where the flow is convectively accelerated with minor flow separation, a high level of similarity in flow patterns and pressure fluctuation appears, with the relative pressure fluctuation amplitude of the model turbine slightly higher than that of the prototype turbine. As for the condition in the runner, where the flow is convectively accelerated with severe separation, similarity fades substantially due to the different topology of flow separation and vortex formation brought about by the distinctive Reynolds numbers of the two turbines. In the draft tube, where the flow is diffusively decelerated, similarity becomes debilitated owing to different vortex rope formation impacted by the Reynolds number. It is noted that the pressure fluctuation amplitude and characteristic frequency of the model turbine are larger than those of the prototype turbine. The differences in pressure fluctuation characteristics are discussed theoretically through the dimensionless Navier-Stokes equation. The above conclusions are all based on simulation without regard to penstock response and resonance.

  2. [Similarity system theory to evaluate similarity of chromatographic fingerprints of traditional Chinese medicine].

    Science.gov (United States)

    Liu, Yongsuo; Meng, Qinghua; Jiang, Shumin; Hu, Yuzhu

    2005-03-01

    The similarity evaluation of fingerprints is one of the most important problems in the quality control of traditional Chinese medicine (TCM). Similarity measures used to evaluate the similarity of the common peaks in the chromatogram of TCM are discussed. Comparative studies were carried out among the correlation coefficient, the cosine of the angle, and an improved extent similarity method using simulated and experimental data. The correlation coefficient and cosine of the angle are not sensitive to differences between data sets, even after normalization. Following similarity system theory, an improved extent similarity method was proposed. The improved extent similarity is more sensitive to the differences between data sets than the correlation coefficient and cosine of the angle, and, unlike log-transformation, it does not require changing the character of the data sets. The improved extent similarity can be used to evaluate the similarity of chromatographic fingerprints of TCM.
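The abstract's central observation, that the correlation coefficient and cosine of the angle are largely insensitive to certain differences between fingerprints, is easy to demonstrate on simulated peak areas. The improved extent similarity itself is not reproduced here, since its formula is not given in the abstract; the vectors below are made-up illustrations.

```python
import math

def cosine(x, y):
    """Cosine of the angle between two fingerprint vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

def correlation(x, y):
    """Pearson correlation: cosine of the mean-centered vectors."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return cosine([a - mx for a in x], [b - my for b in y])

# Simulated peak areas of the common peaks in three chromatograms.
fp_a = [10.0, 25.0, 5.0, 40.0]
fp_b = [a * 3.0 for a in fp_a]   # same profile at 3x concentration
fp_c = [10.0, 25.0, 5.0, 80.0]  # one peak area doubled

sim_scaled = cosine(fp_a, fp_b)       # ~1.0: blind to uniform scaling
corr_scaled = correlation(fp_a, fp_b)  # ~1.0 as well
sim_changed = cosine(fp_a, fp_c)      # still close to 1.0 despite a real change
```

Both measures return essentially 1.0 for the scaled copy and remain high even when one peak doubles, which is the insensitivity the improved method is designed to overcome.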

  3. Diffusion-like recommendation with enhanced similarity of objects

    Science.gov (United States)

    An, Ya-Hui; Dong, Qiang; Sun, Chong-Jing; Nie, Da-Cheng; Fu, Yan

    2016-11-01

    In the last decade, diversity and accuracy have been regarded as two important measures in evaluating a recommendation model. However, a clear concern is that a model focusing excessively on one measure will put the other one at risk, thus it is not easy to greatly improve diversity and accuracy simultaneously. In this paper, we propose to enhance the Resource-Allocation (RA) similarity in resource transfer equations of diffusion-like models, by giving a tunable exponent to the RA similarity, and traversing the value of this exponent to achieve the optimal recommendation results. In this way, we can increase the recommendation scores (allocated resource) of many unpopular objects. Experiments on three benchmark data sets, MovieLens, Netflix and RateYourMusic show that the modified models can yield remarkable performance improvement compared with the original ones.
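The modification described, raising the resource-allocation (RA) similarity to a tunable exponent inside a diffusion-like recommender, can be sketched on a toy bipartite graph. The RA convention, the way the exponent enters the scoring, and the toy data are assumptions for illustration; the paper's exact transfer equations are not reproduced.

```python
def ra_similarity(adj, users, objects):
    """RA similarity between objects on a user-object bipartite graph:
    s[a][b] = (1/k_b) * sum_u adj[u][a] * adj[u][b] / k_u
    (one common convention; k_b = object degree, k_u = user degree)."""
    k_obj = {o: sum(adj[u][o] for u in users) for o in objects}
    k_usr = {u: sum(adj[u][o] for o in objects) for u in users}
    s = {a: {b: 0.0 for b in objects} for a in objects}
    for a in objects:
        for b in objects:
            if k_obj[b]:
                s[a][b] = sum(adj[u][a] * adj[u][b] / k_usr[u]
                              for u in users if k_usr[u]) / k_obj[b]
    return s

def recommend(adj, users, objects, user, theta=1.0):
    """Score each uncollected object by diffusing the user's resource
    through the RA similarity raised to a tunable exponent theta; theta
    is swept in the paper to trade accuracy against diversity."""
    s = ra_similarity(adj, users, objects)
    scores = {a: sum(s[a][b] ** theta for b in objects if adj[user][b])
              for a in objects if not adj[user][a]}
    return sorted(scores, key=scores.get, reverse=True)

# Toy data: 3 users x 4 objects (1 = user collected the object).
users, objects = ["u1", "u2", "u3"], ["o1", "o2", "o3", "o4"]
adj = {"u1": {"o1": 1, "o2": 1, "o3": 0, "o4": 0},
       "u2": {"o1": 1, "o2": 0, "o3": 1, "o4": 0},
       "u3": {"o1": 0, "o2": 1, "o3": 0, "o4": 1}}
ranking = recommend(adj, users, objects, "u1", theta=0.8)
```

An exponent below 1 compresses large similarity values, lifting the scores of unpopular objects, which is how the modified models raise diversity without a separate mechanism.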

  4. Brazilian N2 laser similar to imported models

    International Nuclear Information System (INIS)

    Santos, P.A.M. dos; Tavares Junior, A.D.; Silva Reis, H. da; Tagliaferri, A.A.; Massone, C.A.

    1981-09-01

    The development of a high-power N2 laser, similar to imported models but built entirely with Brazilian materials, is described. The prototype has a pulse repetition rate adjustable from 1 to 50 pulses per second and a peak power of 500 kW. (Author) [pt

  5. A process improvement model for software verification and validation

    Science.gov (United States)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and space station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  6. The continuous similarity model of bulk soil-water evaporation

    Science.gov (United States)

    Clapp, R. B.

    1983-01-01

    The continuous similarity model of evaporation is described. In it, evaporation is conceptualized as a two stage process. For an initially moist soil, evaporation is first climate limited, but later it becomes soil limited. During the latter stage, the evaporation rate is termed evaporability, and mathematically it is inversely proportional to the evaporation deficit. A functional approximation of the moisture distribution within the soil column is also included in the model. The model was tested using data from four experiments conducted near Phoenix, Arizona; and there was excellent agreement between the simulated and observed evaporation. The model also predicted the time of transition to the soil limited stage reasonably well. For one of the experiments, a third stage of evaporation, when vapor diffusion predominates, was observed. The occurrence of this stage was related to the decrease in moisture at the surface of the soil. The continuous similarity model does not account for vapor flow. The results show that climate, through the potential evaporation rate, has a strong influence on the time of transition to the soil limited stage. After this transition, however, bulk evaporation is independent of climate until the effects of vapor flow within the soil predominate.
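The two-stage idea, a climate-limited rate that gives way to a soil-limited "evaporability" inversely proportional to the evaporation deficit, can be sketched as a small simulation. The constant, time step, deficit definition, and units below are illustrative assumptions, not the model's calibrated values.

```python
def simulate_evaporation(e_pot, days, c=8.0, dt=0.1):
    """Two-stage sketch: evaporation runs at the climate-limited
    (potential) rate e_pot until the soil-limited rate, modeled here as
    evaporability e = c / D with D the cumulative evaporation deficit,
    drops below it.  Returns total evaporation and the transition time."""
    cumulative, t, transition = 0.0, 0.0, None
    while t < days:
        deficit = cumulative + 1e-9          # guard against division by zero
        soil_limited = c / deficit
        if transition is None and soil_limited < e_pot:
            transition = t                   # entering the soil-limited stage
        cumulative += min(e_pot, soil_limited) * dt
        t += dt
    return cumulative, transition

# A drier climate (higher potential rate) reaches the soil-limited
# stage sooner, matching the model's prediction about climate control.
total_dry, t_dry = simulate_evaporation(e_pot=8.0, days=20)
total_humid, t_humid = simulate_evaporation(e_pot=2.0, days=20)
```

After the transition both runs follow the same deficit-controlled decay, reflecting the abstract's point that bulk evaporation becomes independent of climate in the soil-limited stage.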

  7. Improving Zernike moments comparison for optimal similarity and rotation angle retrieval.

    Science.gov (United States)

    Revaud, Jérôme; Lavoué, Guillaume; Baskurt, Atilla

    2009-04-01

    Zernike moments constitute a powerful shape descriptor in terms of robustness and description capability. However the classical way of comparing two Zernike descriptors only takes into account the magnitude of the moments and loses the phase information. The novelty of our approach is to take advantage of the phase information in the comparison process while still preserving the invariance to rotation. This new Zernike comparator provides a more accurate similarity measure together with the optimal rotation angle between the patterns, while keeping the same complexity as the classical approach. This angle information is particularly of interest for many applications, including 3D scene understanding through images. Experiments demonstrate that our comparator outperforms the classical one in terms of similarity measure. In particular the robustness of the retrieval against noise and geometric deformation is greatly improved. Moreover, the rotation angle estimation is also more accurate than state-of-the-art algorithms.
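The key property the comparator exploits is that rotating a pattern by an angle t multiplies each complex Zernike moment Z_nm by exp(-i*m*t). A brute-force scan over candidate angles, shown below with made-up toy moments, recovers both the similarity and the rotation angle; the paper uses a closed-form estimator with the complexity of the classical comparison, so the scan is only an illustrative substitute.

```python
import cmath
import math

def phase_aware_similarity(za, zb, orders):
    """Scan candidate rotation angles and keep the one maximizing the
    real part of the phase-corrected correlation between two sets of
    complex Zernike moments, indexed by (n, m) order pairs."""
    best_angle, best_score = 0.0, float("-inf")
    for k in range(360):                      # 1-degree resolution
        t = math.radians(k)
        score = sum((za[(n, m)].conjugate() * zb[(n, m)]
                     * cmath.exp(1j * m * t)).real
                    for (n, m) in orders)
        if score > best_score:
            best_angle, best_score = t, score
    return best_score, best_angle

# Toy moments: zb is za "rotated" by 30 degrees via the phase rule, so
# the scan should recover that angle and a high similarity score.
orders = [(2, 0), (3, 1), (4, 2)]
za = {(2, 0): 0.8 + 0.0j, (3, 1): 0.3 - 0.2j, (4, 2): -0.1 + 0.4j}
theta = math.radians(30)
zb = {(n, m): za[(n, m)] * cmath.exp(-1j * m * theta) for (n, m) in orders}
score, angle = phase_aware_similarity(za, zb, orders)
```

A magnitude-only comparison would report these two moment sets as identical but could never return the 30-degree rotation; keeping the phase is what makes the angle recoverable.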

  8. Exploiting similarity in turbulent shear flows for turbulence modeling

    Science.gov (United States)

    Robinson, David F.; Harris, Julius E.; Hassan, H. A.

    1992-12-01

    It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set and comparison with measured shear stress and velocity profiles yield the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for such an approach is that if we can correctly model the flow over a flat plate and its far wake, then we have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence. This is because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.

  9. Exploiting similarity in turbulent shear flows for turbulence modeling

    Science.gov (United States)

    Robinson, David F.; Harris, Julius E.; Hassan, H. A.

    1992-01-01

    It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set and comparison with measured shear stress and velocity profiles yield the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for such an approach is that if we can correctly model the flow over a flat plate and its far wake, then we have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence. This is because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.

  10. Spatio-Temporal Evaluation and Comparison of MM5 Model using Similarity Algorithm

    Directory of Open Access Journals (Sweden)

    N. Siabi

    2016-02-01

    Full Text Available Introduction: Temporal and spatial changes of meteorological and environmental variables are very important. These changes can be predicted by numerical prediction models over time and at different locations, and can be provided as spatial zoning maps with interpolation methods such as geostatistics (16, 6). But such maps can be compared only visually, qualitatively, and one variable at a time, for a limited number of maps (15). To resolve this problem the similarity algorithm is used; it allows simultaneous comparison of a large number of data (18). Numerical prediction models such as MM5 have been used in different studies (10, 22, 23), but little research has been done to compare the spatio-temporal similarity of the models with real data quantitatively. The purpose of this paper is to integrate geostatistical techniques with a similarity algorithm to compare the spatial and temporal MM5 model predictions with real data. Materials and Methods: The study area is the north east of Iran, spanning 55 to 61 degrees of longitude and 30 to 38 degrees of latitude. Monthly and annual temperature and precipitation data for the period 1990-2010 were received from the Meteorological Agency and Department of Energy. MM5 model data, with a spatial resolution of 0.5 × 0.5 degrees, were downloaded from the NASA website (5). GS+ and ArcGIS software were used to produce each variable map. We used the multivariate methods co-kriging and kriging with an external drift, applying topography and height as a secondary variable via a Digital Elevation Model (6, 12, 14). Then the standardization and similarity algorithms (9, 11) were applied, programmed in MATLAB, to each map grid point. The spatial and temporal similarities between the data collections and model results were expressed as F values. These values lie between 0 and 0.5: values below 0.2 indicate good similarity, while values approaching 0.5 indicate very poor similarity. The results were plotted on maps by MATLAB
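    The abstract does not give the exact formula for the F similarity score, so the sketch below is only a hypothetical stand-in that maps the Pearson correlation of two standardized grids onto the same 0-0.5 scale (0 = identical, 0.5 = very poor):

```python
import numpy as np

def f_similarity(model_grid, obs_grid):
    # Hypothetical F-style score on a 0-0.5 scale: 0 for identical fields,
    # 0.5 for perfectly anti-correlated ones. The paper's actual F formula
    # is not reproduced in the abstract, so this is only an illustration.
    m = np.asarray(model_grid, dtype=float).ravel()
    o = np.asarray(obs_grid, dtype=float).ravel()
    r = np.corrcoef(m, o)[0, 1]          # Pearson correlation in [-1, 1]
    return 0.25 * (1.0 - r)              # maps r=1 -> 0.0, r=-1 -> 0.5
```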

  11. Morphological similarities between DBM and a microeconomic model of sprawl

    Science.gov (United States)

    Caruso, Geoffrey; Vuidel, Gilles; Cavailhès, Jean; Frankhauser, Pierre; Peeters, Dominique; Thomas, Isabelle

    2011-03-01

    We present a model that simulates the growth of a metropolitan area on a 2D lattice. The model is dynamic and based on microeconomics. Households show preferences for nearby open spaces and neighbourhood density. They compete on the land market. They travel along a road network to access the CBD. A planner ensures the connectedness and maintenance of the road network. The spatial pattern of houses, green spaces and road network self-organises, emerging from the agents' individualistic decisions. We perform several simulations and vary residential preferences. Our results show morphologies and transition phases that are similar to Dielectric Breakdown Models (DBM). Such similarities were observed earlier by other authors, but we show here that they can be deduced from the functioning of the land market and thus explicitly connected to urban economic theory.

  12. Quality assessment of protein model-structures based on structural and functional similarities.

    Science.gov (United States)

    Konopka, Bogumil M; Nebel, Jean-Christophe; Kotulska, Malgorzata

    2012-09-21

    Experimental determination of protein 3D structures is expensive, time consuming and sometimes impossible. The gap between the number of protein structures deposited in the Worldwide Protein Data Bank and the number of sequenced proteins constantly broadens. Computational modeling is deemed to be one of the ways to deal with this problem. Although protein 3D structure prediction is a difficult task, many tools are available. These tools can model a structure from the sequence or from partial structural information, e.g. contact maps. Consequently, biologists have the ability to generate automatically a putative 3D structure model of any protein. However, the main issue becomes evaluation of the model quality, which is one of the most important challenges of structural biology. GOBA (Gene Ontology-Based Assessment) is a novel Protein Model Quality Assessment Program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high quality model is expected to be structurally similar to proteins functionally similar to the prediction target. Whereas DALI is used to measure structure similarity, protein functional similarity is quantified using the standardized and hierarchical description of proteins provided by Gene Ontology, combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures. One is a single model quality assessment method; the other is its modification, which provides a relative measure of model quality. Exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. The validation shows that the method is able to discriminate between good and bad model-structures. The best of the tested GOBA scores achieved 0.74 and 0.80 as mean Pearson correlations to the observed quality of models in our CASP8- and CASP9-based validation sets. 
GOBA also obtained the best result for two targets of CASP8, and

  13. A new k-epsilon model consistent with Monin-Obukhov similarity theory

    DEFF Research Database (Denmark)

    van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.

    2017-01-01

    A new k-ε model is introduced that is consistent with Monin–Obukhov similarity theory (MOST). The proposed k-ε model is compared with another k-ε model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-ε model is not consistent with ...

  14. Similarity-based search of model organism, disease and drug effect phenotypes

    KAUST Repository

    Hoehndorf, Robert; Gruenberger, Michael; Gkoutos, Georgios V; Schofield, Paul N

    2015-01-01

    Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions

  15. Bianchi VI0 and III models: self-similar approach

    International Nuclear Information System (INIS)

    Belinchon, Jose Antonio

    2009-01-01

    We study several cosmological models with Bianchi VI0 and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive for the equation of state. We also study a perfect fluid model with time-varying constants, G and Λ. As in other studied models, we find that the behaviours of G and Λ are related: if G behaves as a growing function of time then Λ is a positive decreasing function of time, but if G is decreasing then Λ0 is negative. We end by studying a massive cosmic string model, putting special emphasis on calculating the numerical values of the equations of state. We show that there is no self-similar solution for a string model with time-varying constants.

  16. Improved modelling of independent parton hadronization

    International Nuclear Information System (INIS)

    Biddulph, P.; Thompson, G.

    1989-01-01

    A modification is proposed to current versions of the Field-Feynman ansatz for the hadronization of a quark in Monte Carlo models of QCD interactions. This faster-running algorithm introduces no additional parameters and imposes a better degree of energy conservation. It naturally introduces a limitation of the transverse momentum distribution, similar to the experimentally observed ''seagull'' effect. Quantum numbers are now much better conserved between the original parton and the resultant hadrons, and the momentum of the emitted parton is better preserved in the summed momentum vectors of the final-state particles. (orig.)

  17. A self-similar magnetohydrodynamic model for ball lightnings

    International Nuclear Information System (INIS)

    Tsui, K. H.

    2006-01-01

    Ball lightning is modeled by magnetohydrodynamic (MHD) equations in two-dimensional spherical geometry with azimuthal symmetry. Dynamic evolutions in the radial direction are described by the self-similar evolution function y(t). The plasma pressure, mass density, and magnetic fields are solved in terms of the radial label η. This model gives spherical MHD plasmoids with axisymmetric force-free magnetic field, and spherically symmetric plasma pressure and mass density, which self-consistently determine the polytropic index γ. The spatially oscillating nature of the radial and meridional field structures indicates embedded regions of closed field lines. These regions are named secondary plasmoids, whereas the overall self-similar spherical structure is named the primary plasmoid. According to this model, the time evolution function allows the primary plasmoid to expand outward in two modes. The corresponding ejection of the embedded secondary plasmoids results in ball lightning, offering an answer to how they come into being. The first is an accelerated expanding mode. This mode appears to fit plasmoids ejected from thundercloud tops with acceleration toward the ionosphere, as seen in high-altitude atmospheric observations of sprites and blue jets. It also appears to account for midair high-speed ball lightning overtaking airplanes, and for ground-level high-speed energetic ball lightning. The second is a decelerated expanding mode, which appears to be compatible with slowly moving ball lightning seen near ground level. The inverse of this second mode corresponds to an accelerated inward collapse, which could bring ball lightning to an end, sometimes with a cracking sound.

  18. Models for discrete-time self-similar vector processes with application to network traffic

    Science.gov (United States)

    Lee, Seungsin; Rao, Raghuveer M.; Narasimha, Rajesh

    2003-07-01

    The paper defines self-similarity for vector processes by employing the discrete-time continuous-dilation operation which has successfully been used previously by the authors to define 1-D discrete-time stochastic self-similar processes. To define self-similarity of vector processes, it is required to consider the cross-correlation functions between different 1-D processes as well as the autocorrelation function of each constituent 1-D process in it. System models to synthesize self-similar vector processes are constructed based on the definition. With these systems, it is possible to generate self-similar vector processes from white noise inputs. An important aspect of the proposed models is that they can be used to synthesize various types of self-similar vector processes by choosing proper parameters. Additionally, the paper presents evidence of vector self-similarity in two-channel wireless LAN data and applies the aforementioned systems to simulate the corresponding network traffic traces.

  19. Experimental Study of Dowel Bar Alternatives Based on Similarity Model Test

    Directory of Open Access Journals (Sweden)

    Chichun Hu

    2017-01-01

    Full Text Available In this study, a small-scale accelerated loading test based on similarity theory and the Accelerated Pavement Analyzer was developed to evaluate dowel bars with different materials and cross-sections. A jointed concrete specimen consisting of one dowel was designed as the scaled model for the test, and each specimen was subjected to 864 thousand loading cycles. Deflections between jointed slabs were measured with dial indicators, and strains of the dowel bars were monitored with strain gauges. The load transfer efficiency, differential deflection, and dowel-concrete bearing stress for each case were calculated from these measurements. The test results indicated that the effect of the dowel modulus on load transfer efficiency can be characterized by the similarity model test developed in the study. Moreover, the round steel dowel was found to perform similarly to a larger FRP dowel, and the elliptical dowel can be preferentially considered in practice.
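    The deflection-based quantities named in the abstract have standard definitions in jointed-pavement testing; a minimal sketch of those standard formulas (not necessarily the exact ones used in the paper):

```python
def load_transfer_efficiency(defl_loaded_mm, defl_unloaded_mm):
    # Deflection-based LTE (%): ratio of the unloaded-slab deflection to
    # the loaded-slab deflection at the joint. 100% = perfect transfer.
    return 100.0 * defl_unloaded_mm / defl_loaded_mm

def differential_deflection(defl_loaded_mm, defl_unloaded_mm):
    # Relative vertical movement between the two slabs at the joint.
    return defl_loaded_mm - defl_unloaded_mm
```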

  20. MAC/FAC: A Model of Similarity-Based Retrieval

    Science.gov (United States)

    1994-10-01

    [OCR residue from the report cover page removed.] MAC/FAC: A Model of Similarity-Based Retrieval. Kenneth D. Forbus, Dedre Gentner, Keith Law. The Institute for the Learning Sciences, Northwestern University, Technical Report #59, October 1994.

  1. Similarity Assessment of Land Surface Model Outputs in the North American Land Data Assimilation System

    Science.gov (United States)

    Kumar, Sujay V.; Wang, Shugong; Mocko, David M.; Peters-Lidard, Christa D.; Xia, Youlong

    2017-11-01

    Multimodel ensembles are often used to produce ensemble mean estimates that tend to have increased simulation skill over any individual model output. If multimodel outputs are too similar, an individual land surface model (LSM) would add little additional information to the multimodel ensemble, whereas if the models are too dissimilar, it may be indicative of systematic errors in their formulations or configurations. The article presents a formal similarity assessment of the North American Land Data Assimilation System (NLDAS) multimodel ensemble outputs to assess their utility to the ensemble, using a confirmatory factor analysis. Outputs from four NLDAS Phase 2 models currently running in operations at NOAA/NCEP and four new/upgraded models that are under consideration for the next phase of NLDAS are employed in this study. The results show that the runoff estimates from the LSMs were most dissimilar, whereas the models showed greater similarity for root zone soil moisture, snow water equivalent, and terrestrial water storage. Generally, the NLDAS operational models showed weaker association with the common factor of the ensemble and the newer versions of the LSMs showed stronger association with the common factor, with the model similarity increasing at longer time scales. Trade-offs between the similarity metrics and accuracy measures indicated that the NLDAS operational models demonstrate a larger span in the similarity-accuracy space compared to the new LSMs. The results of the article indicate that simultaneous consideration of model similarity and accuracy at the relevant time scales is necessary in the development of multimodel ensembles.
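    The confirmatory factor analysis itself is not reproduced in the abstract; as a rough stand-in, one can gauge each model's association with a "common factor" from the leading eigenvector of the inter-model correlation matrix (a PCA proxy, not the paper's method):

```python
import numpy as np

def common_factor_loadings(outputs):
    # Rough proxy for "association with the common factor": each row of
    # `outputs` is one model's time series; loadings are taken from the
    # leading eigenvector of the inter-model correlation matrix (a PCA
    # stand-in for the paper's confirmatory factor analysis).
    corr = np.corrcoef(outputs)
    vals, vecs = np.linalg.eigh(corr)        # eigenvalues in ascending order
    lead = vecs[:, -1]                       # eigenvector of largest eigenvalue
    return np.abs(lead) * np.sqrt(vals[-1])  # |loading| on the first component
```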

  2. A Measure of Similarity Between Trajectories of Vessels

    Directory of Open Access Journals (Sweden)

    Le QI

    2016-03-01

    Full Text Available The measurement of similarity between trajectories of vessels is one of the kernel problems that must be addressed to promote the development of maritime intelligent traffic systems (ITS). In this study, a new model of trajectory similarity measurement was established to improve data processing efficiency in dynamic applications and to reflect actual sailing behaviors of vessels. In this model, a feature point detection algorithm was proposed to extract feature points, reduce data storage space and save computational resources. A new synthesized distance algorithm was also created to measure the similarity between trajectories by using the extracted feature points. An experiment was conducted to measure the similarity between real vessel trajectories; as these trajectories grew, measurements had to be conducted across different voyages. The results show that the similarity measurement between the vessel trajectories is efficient and correct. Comparison of the synthesized distance with the sailing behaviors of vessels proves that the results are consistent with actual situations. The experiment results demonstrate the promising application of the proposed model in studying vessel traffic and in supplying reliable data for the development of maritime ITS.
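    The paper's feature-point detector and synthesized distance are not specified in the abstract; the sketch below substitutes the classic Douglas-Peucker simplifier and a symmetric mean nearest-feature-point distance purely to illustrate the overall pipeline:

```python
import math

def douglas_peucker(points, tol):
    # Stand-in feature-point detector: classic Douglas-Peucker line
    # simplification (the paper's own detector is not reproduced here).
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # Perpendicular distance of each interior point to the chord.
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm for x, y in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__)
    if dists[i] <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: i + 2], tol)
    right = douglas_peucker(points[i + 1 :], tol)
    return left[:-1] + right

def synthesized_distance(traj_a, traj_b, tol=0.5):
    # Symmetric mean nearest-feature-point distance, used here as a
    # simple proxy for the paper's synthesized distance.
    fa, fb = douglas_peucker(traj_a, tol), douglas_peucker(traj_b, tol)
    def one_way(p, q):
        return sum(min(math.dist(a, b) for b in q) for a in p) / len(p)
    return 0.5 * (one_way(fa, fb) + one_way(fb, fa))
```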

  3. Query-dependent banding (QDB) for faster RNA similarity searches.

    Directory of Open Access Journals (Sweden)

    Eric P Nawrocki

    2007-03-01

    Full Text Available When searching sequence databases for RNAs, it is desirable to score both primary sequence and RNA secondary structure similarity. Covariance models (CMs) are probabilistic models well-suited for RNA similarity search applications. However, the computational complexity of CM dynamic programming alignment algorithms has limited their practical application. Here we describe an acceleration method called query-dependent banding (QDB), which uses the probabilistic query CM to precalculate regions of the dynamic programming lattice that have negligible probability, independently of the target database. We have implemented QDB in the freely available Infernal software package. QDB reduces the average-case time complexity of CM alignment from LN^2.4 to LN^1.3 for a query RNA of N residues and a target database of L residues, resulting in a 4-fold speedup for typical RNA queries. Combined with other improvements to Infernal, including informative mixture Dirichlet priors on model parameters, benchmarks also show increased sensitivity and specificity resulting from improved parameterization.
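    The banding idea can be illustrated in miniature: given the probability mass function over subsequence lengths d for a CM state (which Infernal derives from the query model itself), the band is the interval that remains after trimming a tail mass beta from each side. A toy sketch, not Infernal's implementation:

```python
def qdb_band(pmf, beta=1e-7):
    # Toy illustration of query-dependent banding: pmf[d] is the
    # probability that a state generates a subsequence of length d.
    # Return the band [dmin, dmax] keeping all but a tail mass `beta`
    # on each side; the DP lattice is then evaluated only inside it.
    total = 0.0
    dmin = 0
    for d, p in enumerate(pmf):
        total += p
        if total > beta:
            dmin = d
            break
    total = 0.0
    dmax = len(pmf) - 1
    for d in range(len(pmf) - 1, -1, -1):
        total += pmf[d]
        if total > beta:
            dmax = d
            break
    return dmin, dmax
```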

  4. Embedding Term Similarity and Inverse Document Frequency into a Logical Model of Information Retrieval.

    Science.gov (United States)

    Losada, David E.; Barreiro, Alvaro

    2003-01-01

    Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…

  5. Analysis of metal forming processes by using physical modeling and new plastic similarity condition

    International Nuclear Information System (INIS)

    Gronostajski, Z.; Hawryluk, M.

    2007-01-01

    In recent years many advances have been made in numerical methods for linear and non-linear problems. However, their success depends very much on the correctness of the problem formulation and the availability of input data. Validity of the theoretical results can be verified by experiments using real or soft materials. An essential reduction of the time and cost of the experiment can be obtained by using soft materials, which behave in a way analogous to that of real metal during deformation. The advantage of using soft materials is closely connected with their flow stress, 500 to 1000 times lower than that of real materials. The accuracy of physical modeling depends on the similarity conditions between the physical model and the real process. The most important similarity conditions are material similarity in the range of plastic and elastic deformation, and geometrical, frictional and thermal similarities. A new, original plastic similarity condition for physical modeling of metal forming processes is proposed in the paper. It is based on a mathematical description of the similarity of the flow stress curves of soft materials and real ones.

  6. Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure

    Directory of Open Access Journals (Sweden)

    Wen Zhang

    2016-01-01

    Full Text Available Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) was proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is usually criticized as having low discriminative power for representing documents, although it has been validated as having good representative quality. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. Firstly, we survey existing linear algebra methods for LSI, including both SVD-based and non-SVD-based methods. Secondly, we propose SVD on clusters for LSI and theoretically explain that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved in SVD on clusters. Moreover, we develop updating processes to fold in new documents and terms in a decomposed matrix by SVD on clusters. Thirdly, two corpora, a Chinese corpus and an English corpus, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of interdocument similarity measures in comparison with other SVD-based LSI methods.
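    For contrast with the clustered variant, plain LSI document similarity via a rank-k SVD can be sketched as follows (generic LSI, not the paper's SVD-on-clusters method):

```python
import numpy as np

def lsi_similarity(term_doc, k):
    # Plain LSI sketch: truncate the SVD of the term-document matrix to
    # rank k, then compare documents by cosine similarity in the reduced
    # concept space.
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    docs = (np.diag(s[:k]) @ Vt[:k]).T          # one row per document
    norms = np.linalg.norm(docs, axis=1, keepdims=True)
    docs = docs / np.where(norms == 0, 1, norms)
    return docs @ docs.T                        # pairwise cosine matrix
```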

  7. Bianchi VI{sub 0} and III models: self-similar approach

    Energy Technology Data Exchange (ETDEWEB)

    Belinchon, Jose Antonio, E-mail: abelcal@ciccp.e [Departamento de Fisica, ETS Arquitectura, UPM, Av. Juan de Herrera 4, Madrid 28040 (Spain)

    2009-09-07

    We study several cosmological models with Bianchi VI{sub 0} and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive for the equation of state. We also study a perfect fluid model with time-varying constants, G and LAMBDA. As in other studied models, we find that the behaviours of G and LAMBDA are related: if G behaves as a growing function of time then LAMBDA is a positive decreasing function of time, but if G is decreasing then LAMBDA{sub 0} is negative. We end by studying a massive cosmic string model, putting special emphasis on calculating the numerical values of the equations of state. We show that there is no self-similar solution for a string model with time-varying constants.

  8. Model-free aftershock forecasts constructed from similar sequences in the past

    Science.gov (United States)

    van der Elst, N.; Page, M. T.

    2017-12-01

    The basic premise behind aftershock forecasting is that sequences in the future will be similar to those in the past. Forecast models typically use empirically tuned parametric distributions to approximate past sequences, and project those distributions into the future to make a forecast. While parametric models do a good job of describing average outcomes, they are not explicitly designed to capture the full range of variability between sequences, and can suffer from over-tuning of the parameters. In particular, parametric forecasts may produce a high rate of "surprises" - sequences that land outside the forecast range. Here we present a non-parametric forecast method that cuts out the parametric "middleman" between training data and forecast. The method is based on finding past sequences that are similar to the target sequence, and evaluating their outcomes. We quantify similarity as the Poisson probability that the observed event count in a past sequence reflects the same underlying intensity as the observed event count in the target sequence. Event counts are defined in terms of differential magnitude relative to the mainshock. The forecast is then constructed from the distribution of past sequences outcomes, weighted by their similarity. We compare the similarity forecast with the Reasenberg and Jones (RJ95) method, for a set of 2807 global aftershock sequences of M≥6 mainshocks. We implement a sequence-specific RJ95 forecast using a global average prior and Bayesian updating, but do not propagate epistemic uncertainty. The RJ95 forecast is somewhat more precise than the similarity forecast: 90% of observed sequences fall within a factor of two of the median RJ95 forecast value, whereas the fraction is 85% for the similarity forecast. However, the surprise rate is much higher for the RJ95 forecast; 10% of observed sequences fall in the upper 2.5% of the (Poissonian) forecast range. The surprise rate is less than 3% for the similarity forecast. 
The similarity
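    One plausible reading of the similarity weighting described above (the exact normalization used by the authors may differ): weight each past sequence by the Poisson probability of its early event count under a rate equal to the target's observed count, then forecast from the weighted outcomes:

```python
import math

def poisson_weight(n_past, n_target):
    # Weight a past sequence by the Poisson probability of observing its
    # early-sequence event count if the underlying rate equalled the
    # target sequence's observed count. Hypothetical reading of the
    # similarity measure; the paper's exact weighting is not given here.
    lam = max(n_target, 1e-9)
    return math.exp(-lam) * lam ** n_past / math.factorial(n_past)

def similarity_forecast(past_counts, past_outcomes, n_target):
    # Weighted mean of past sequence outcomes as a point forecast.
    w = [poisson_weight(c, n_target) for c in past_counts]
    return sum(wi * o for wi, o in zip(w, past_outcomes)) / sum(w)
```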

  9. Assessing the similarity of mental models of operating room team members and implications for patient safety: a prospective, replicated study.

    Science.gov (United States)

    Nakarada-Kordic, Ivana; Weller, Jennifer M; Webster, Craig S; Cumin, David; Frampton, Christopher; Boyd, Matt; Merry, Alan F

    2016-08-31

    Patient safety depends on effective teamwork. The similarity of team members' mental models, or shared understanding, regarding clinical tasks is likely to influence the effectiveness of teamwork. Mental models have not been measured in the complex, high-acuity environment of the operating room (OR), where professionals of different backgrounds must work together to achieve the best surgical outcome for each patient. Therefore, we aimed to explore the similarity of mental models of task sequence and of responsibility for task within multidisciplinary OR teams. We developed a computer-based card sorting tool (Momento) to capture information on the mental models of 20 six-person surgical teams, each comprising three subteams (anaesthesia, surgery, and nursing), during two simulated laparotomies. Team members sorted 20 cards depicting key tasks according to when in the procedure each task should be performed, and which subteam was primarily responsible for each task. Within each OR team and subteam, we conducted pairwise comparisons of scores to arrive at mean similarity scores for each task. The mean similarity score for task sequence was 87% (range 57-97%). The mean score for responsibility for task was 70% (range 38-100%), but for half of the tasks it was only 51% (range 38-69%). Participants believed their own subteam was primarily responsible for approximately half the tasks in each procedure. We found differences in the mental models of some OR team members about responsibility for and order of certain tasks in an emergency laparotomy. Momento is a tool that could help elucidate and better align the mental models of OR team members about surgical procedures and thereby improve teamwork and outcomes for patients.
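    The pairwise similarity scores described above can be illustrated for the responsibility-for-task question: for one task, count the fraction of team-member pairs that named the same subteam (a simplified sketch of the Momento-style scoring, not the authors' exact code):

```python
from itertools import combinations

def task_agreement(assignments):
    # Pairwise similarity for one task: the fraction of team-member
    # pairs who assigned the task to the same subteam.
    pairs = list(combinations(assignments, 2))
    return sum(a == b for a, b in pairs) / len(pairs)
```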

  10. Advanced Models and Algorithms for Self-Similar IP Network Traffic Simulation and Performance Analysis

    Science.gov (United States)

    Radev, Dimitar; Lokshina, Izabella

    2010-11-01

    The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequential generators and fixed-length sequence generators, and efficient algorithms that are used to simulate self-similar behavior of IP network traffic data, are developed and applied. Numerical examples are provided, and simulation results are obtained and analyzed.

  11. Similarly shaped letters evoke similar colors in grapheme-color synesthesia.

    Science.gov (United States)

    Brang, David; Rouw, Romke; Ramachandran, V S; Coulson, Seana

    2011-04-01

    Grapheme-color synesthesia is a neurological condition in which viewing numbers or letters (graphemes) results in the concurrent sensation of color. While the anatomical substrates underlying this experience are well understood, little research to date has investigated factors influencing the particular colors associated with particular graphemes or how synesthesia occurs developmentally. A recent suggestion of such an interaction has been proposed in the cascaded cross-tuning (CCT) model of synesthesia, which posits that in synesthetes connections between grapheme regions and color area V4 participate in a competitive activation process, with synesthetic colors arising during the component-stage of grapheme processing. This model more directly suggests that graphemes sharing similar component features (lines, curves, etc.) should accordingly activate more similar synesthetic colors. To test this proposal, we created and regressed synesthetic color-similarity matrices for each of 52 synesthetes against a letter-confusability matrix, an unbiased measure of visual similarity among graphemes. Results of synesthetes' grapheme-color correspondences indeed revealed that more similarly shaped graphemes corresponded with more similar synesthetic colors, with stronger effects observed in individuals with more intense synesthetic experiences (projector synesthetes). These results support the CCT model of synesthesia, implicate early perceptual mechanisms as driving factors in the elicitation of synesthetic hues, and further highlight the relationship between conceptual and perceptual factors in this phenomenon. Copyright © 2011 Elsevier Ltd. All rights reserved.
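    The regression of color-similarity matrices against a letter-confusability matrix can be sketched generically: vectorize the off-diagonal upper triangles of both matrices and fit a line (an illustrative reconstruction, not the authors' analysis code):

```python
import numpy as np

def upper_tri(mat):
    # Off-diagonal upper triangle of a symmetric similarity matrix.
    m = np.asarray(mat, dtype=float)
    return m[np.triu_indices_from(m, k=1)]

def rsa_regression(color_sim, confusability):
    # Regress one synesthete's colour-similarity matrix on the
    # letter-confusability matrix, entry by entry over grapheme pairs.
    x, y = upper_tri(confusability), upper_tri(color_sim)
    slope, intercept = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return slope, intercept, r
```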

  12. Lower- Versus Higher-Income Populations In The Alternative Quality Contract: Improved Quality And Similar Spending.

    Science.gov (United States)

    Song, Zirui; Rose, Sherri; Chernew, Michael E; Safran, Dana Gelb

    2017-01-01

    As population-based payment models become increasingly common, it is crucial to understand how such payment models affect health disparities. We evaluated health care quality and spending among enrollees in areas with lower versus higher socioeconomic status in Massachusetts before and after providers entered into the Alternative Quality Contract, a two-sided population-based payment model with substantial incentives tied to quality. We compared changes in process measures, outcome measures, and spending between enrollees in areas with lower and higher socioeconomic status from 2006 to 2012 (outcome measures were measured after the intervention only). Quality improved for all enrollees in the Alternative Quality Contract after their provider organizations entered the contract. Process measures improved 1.2 percentage points per year more among enrollees in areas with lower socioeconomic status than among those in areas with higher socioeconomic status. Outcome measure improvement was no different between the subgroups; neither were changes in spending. Larger or comparable improvements in quality among enrollees in areas with lower socioeconomic status suggest a potential narrowing of disparities. Strong pay-for-performance incentives within a population-based payment model could encourage providers to focus on improving quality for more disadvantaged populations. Project HOPE—The People-to-People Health Foundation, Inc.

  13. Deep Convolutional Neural Networks Outperform Feature-Based But Not Categorical Models in Explaining Object Similarity Judgments

    Science.gov (United States)

    Jozwik, Kamila M.; Kriegeskorte, Nikolaus; Storrs, Katherine R.; Mur, Marieke

    2017-01-01

    Recent advances in Deep convolutional Neural Networks (DNNs) have enabled unprecedentedly accurate computational models of brain representations, and present an exciting opportunity to model diverse cognitive functions. State-of-the-art DNNs achieve human-level performance on object categorisation, but it is unclear how well they capture human behavior on complex cognitive tasks. Recent reports suggest that DNNs can explain significant variance in one such task, judging object similarity. Here, we extend these findings by replicating them for a rich set of object images, comparing performance across layers within two DNNs of different depths, and examining how the DNNs’ performance compares to that of non-computational “conceptual” models. Human observers performed similarity judgments for a set of 92 images of real-world objects. Representations of the same images were obtained in each of the layers of two DNNs of different depths (8-layer AlexNet and 16-layer VGG-16). To create conceptual models, other human observers generated visual-feature labels (e.g., “eye”) and category labels (e.g., “animal”) for the same image set. Feature labels were divided into parts, colors, textures and contours, while category labels were divided into subordinate, basic, and superordinate categories. We fitted models derived from the features, categories, and from each layer of each DNN to the similarity judgments, using representational similarity analysis to evaluate model performance. In both DNNs, similarity within the last layer explains most of the explainable variance in human similarity judgments. The last layer outperforms almost all feature-based models. Late and mid-level layers outperform some but not all feature-based models. Importantly, categorical models predict similarity judgments significantly better than any DNN layer. Our results provide further evidence for commonalities between DNNs and brain representations. Models derived from visual features
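    Representational similarity analysis model evaluation typically correlates the upper triangles of the model and reference dissimilarity matrices; a generic sketch using a Spearman correlation (standard RSA practice, not the paper's code):

```python
import numpy as np

def rankdata(a):
    # Average-rank transform (ties get the mean rank), stdlib-free.
    a = np.asarray(a, dtype=float)
    order = np.argsort(a, kind="stable")
    ranks = np.empty(len(a))
    sa = a[order]
    i = 0
    while i < len(a):
        j = i
        while j + 1 < len(a) and sa[j + 1] == sa[i]:
            j += 1
        ranks[order[i : j + 1]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def rsa_score(model_rdm, human_rdm):
    # Spearman correlation between the upper triangles of a model
    # representational dissimilarity matrix and the human-judgment RDM.
    iu = np.triu_indices_from(np.asarray(model_rdm), k=1)
    x = rankdata(np.asarray(model_rdm)[iu])
    y = rankdata(np.asarray(human_rdm)[iu])
    return np.corrcoef(x, y)[0, 1]
```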

  14. Self-similar measures in multi-sector endogenous growth models

    International Nuclear Information System (INIS)

    La Torre, Davide; Marsiglio, Simone; Mendivil, Franklin; Privileggi, Fabio

    2015-01-01

    We analyze two types of stochastic discrete-time multi-sector endogenous growth models, namely a basic Uzawa–Lucas (1965, 1988) model and an extended three-sector version as in La Torre and Marsiglio (2010). Since, in the case of sustained growth, the optimal dynamics of the state variables are not stationary, we focus on the dynamics of the capital ratio variables, and we show that, through appropriate log-transformations, they can be converted into affine iterated function systems converging to an invariant distribution supported on some (possibly fractal) compact set. This proves that the steady state of endogenous growth models (i.e., the stochastic balanced growth path equilibrium) might also have a fractal nature. We also provide some sufficient conditions under which the associated self-similar measures turn out to be either singular or absolutely continuous (for the three-sector model we only consider singularity).
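
    The affine iterated function systems that arise from the log-transformed capital-ratio dynamics can be explored numerically with a "chaos game" iteration: repeatedly apply a randomly chosen affine map and record the trajectory, which converges in distribution to the invariant (self-similar) measure. A minimal Python sketch, with illustrative map coefficients that are not taken from the paper:

```python
import random

def chaos_game(maps, probs, n_iter=10000, x0=0.0, burn_in=100, seed=42):
    """Approximate the invariant measure of an affine IFS x -> a*x + b
    by iterating randomly chosen maps (the 'chaos game')."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for t in range(n_iter):
        a, b = rng.choices(maps, weights=probs)[0]
        x = a * x + b
        if t >= burn_in:
            samples.append(x)
    return samples

# Two contractive maps whose attractor is a Cantor-like set in [0, 1]
# (coefficients are illustrative, not estimated from any growth model).
maps = [(1 / 3, 0.0), (1 / 3, 2 / 3)]
samples = chaos_game(maps, probs=[0.5, 0.5])
```

    A histogram of `samples` approximates the self-similar invariant measure; whether such a measure is singular or absolutely continuous depends on the contraction ratios and probabilities, as the paper's sufficient conditions make precise.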

  15. A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2016-06-01

    This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In the proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are used as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. To reduce the heavy computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods, which can only measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
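
    As a hedged illustration of the quantities involved, the following sketch computes a model-to-cloud distance as a weighted average of nearest-neighbor distances and forms the area-to-distance ratio; the surface sampling, the exact weighting scheme, and the surface-area computation of the actual method are not reproduced here:

```python
import numpy as np

def dist_model_to_cloud(model_pts, cloud_pts, weights=None):
    """Weighted average of nearest-neighbor distances from points sampled
    on the model surface to the point cloud (a stand-in for DistMC)."""
    d = np.linalg.norm(model_pts[:, None, :] - cloud_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)
    if weights is None:
        weights = np.ones(len(model_pts))
    return float(np.average(nearest, weights=weights))

def sim_mc(model_pts, cloud_pts, surface_area, eps=1e-9):
    """Similarity as a ratio of (weighted) surface area to the
    model-to-cloud distance: larger when the cloud hugs the model."""
    return surface_area / (dist_model_to_cloud(model_pts, cloud_pts) + eps)
```

    A cloud that coincides with the model's sampled points yields a distance of zero and hence a maximal similarity score.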

  16. Automated Student Model Improvement

    Science.gov (United States)

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowdsourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  17. Distance and Density Similarity Based Enhanced k-NN Classifier for Improving Fault Diagnosis Performance of Bearings

    Directory of Open Access Journals (Sweden)

    Sharif Uddin

    2016-01-01

    An enhanced k-nearest neighbor (k-NN) classification algorithm is presented, which uses a density-based similarity measure in addition to a distance-based similarity measure to improve the diagnostic performance in bearing fault diagnosis. Because it relies on a distance-based similarity measure alone, the classification accuracy of traditional k-NN deteriorates in the presence of overlapping samples and outliers and is highly susceptible to the neighborhood size, k. This study addresses these limitations by proposing the use of both distance- and density-based measures of similarity between training and test samples. The proposed k-NN classifier is used to enhance the diagnostic performance of a bearing fault diagnosis scheme, which classifies different fault conditions based upon hybrid feature vectors extracted from acoustic emission (AE) signals. Experimental results demonstrate that the proposed scheme, which uses the enhanced k-NN classifier, yields better diagnostic performance and is more robust to variations in the neighborhood size, k.
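
    The idea of weighting neighbors by both distance and local density can be sketched as follows. The density term here (the inverse mean distance to a training sample's own k nearest training neighbors) is an illustrative stand-in for the paper's measure, not its exact formulation:

```python
import numpy as np

def knn_density(train_X, train_y, query, k=3):
    """k-NN vote where each neighbor's weight combines its distance to
    the query with its local density in the training set."""
    # Local density: inverse mean distance to each sample's k nearest
    # training neighbors (diagonal masked out so a point is not its
    # own neighbor).
    d_train = np.linalg.norm(train_X[:, None] - train_X[None, :], axis=2)
    np.fill_diagonal(d_train, np.inf)
    density = 1.0 / (np.sort(d_train, axis=1)[:, :k].mean(axis=1) + 1e-9)

    d_query = np.linalg.norm(train_X - query, axis=1)
    idx = np.argsort(d_query)[:k]
    scores = {}
    for i in idx:
        w = density[i] / (d_query[i] + 1e-9)  # distance x density weight
        scores[train_y[i]] = scores.get(train_y[i], 0.0) + w
    return max(scores, key=scores.get)
```

    Down-weighting low-density neighbors is one way to blunt the influence of outliers that plain distance-weighted k-NN is vulnerable to.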

  18. LDA-Based Unified Topic Modeling for Similar TV User Grouping and TV Program Recommendation.

    Science.gov (United States)

    Pyo, Shinjee; Kim, Eunhui; Kim, Munchurl

    2015-08-01

    Social TV is a social media service via TV and social networks through which TV users exchange their experiences about TV programs that they are viewing. For social TV service, two technical aspects are envisioned: grouping of similar TV users to create social TV communities and recommending TV programs based on group and personal interests for personalizing TV. In this paper, we propose a unified topic model based on grouping of similar TV users and recommending TV programs as a social TV service. The proposed unified topic model employs two latent Dirichlet allocation (LDA) models. One is a topic model of TV users, and the other is a topic model of the description words for viewed TV programs. The two LDA models are then integrated via a topic proportion parameter for TV programs, which enforces the grouping of similar TV users and associated description words for watched TV programs at the same time in a unified topic modeling framework. The unified model identifies the semantic relation between TV user groups and TV program description word groups so that more meaningful TV program recommendations can be made. The unified topic model also overcomes an item ramp-up problem such that new TV programs can be reliably recommended to TV users. Furthermore, from the topic model of TV users, TV users with similar tastes can be grouped as topics, which can then be recommended as social TV communities. To verify our proposed method of unified topic-modeling-based TV user grouping and TV program recommendation for social TV services, in our experiments, we used real TV viewing history data and electronic program guide data from a seven-month period collected by a TV poll agency. The experimental results show that the proposed unified topic model yields an average 81.4% precision for 50 topics in TV program recommendation and its performance is an average of 6.5% higher than that of the topic model of TV users only. 
For TV user prediction with new TV programs, the average

  19. Analysis and Modeling of Time-Correlated Characteristics of Rainfall-Runoff Similarity in the Upstream Red River Basin

    Directory of Open Access Journals (Sweden)

    Xiuli Sang

    2012-01-01

    We constructed a similarity model (based on Euclidean distance) between rainfall and runoff to study the time-correlated characteristics of similar rainfall-runoff patterns in the upstream Red River Basin, and we present a detailed evaluation of the time correlation of rainfall-runoff similarity. The rainfall-runoff similarity was used to determine the optimum similarity. The results showed that the time-correlated model is capable of predicting rainfall-runoff similarity in the upstream Red River Basin in a satisfactory way. Both the noisy time series and the series denoised by thresholding the wavelet coefficients were applied to verify the accuracy of the model, and the corresponding optimum similar sets obtained as the equation solution conditions showed an interesting and stable trend. On the whole, the annual mean similarity presented a gradually rising trend, quantitatively estimating the comprehensive influence of climate change and human activities on rainfall-runoff similarity.
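
    A Euclidean-distance similarity between two hydrological series can be sketched in a few lines. The z-score normalization and the mapping of distance to a (0, 1] score are our illustrative choices, not necessarily those of the paper:

```python
import numpy as np

def similarity(rain, runoff):
    """Euclidean-distance similarity between normalized rainfall and
    runoff series, mapped to (0, 1]; 1 means identical patterns."""
    rain = (rain - rain.mean()) / (rain.std() + 1e-9)
    runoff = (runoff - runoff.mean()) / (runoff.std() + 1e-9)
    d = np.linalg.norm(rain - runoff)
    return 1.0 / (1.0 + d)
```

    Computing this score over sliding annual windows would give the kind of year-by-year similarity trend the study analyzes.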

  20. Predictive modeling of human perception subjectivity: feasibility study of mammographic lesion similarity

    Science.gov (United States)

    Xu, Songhua; Hudson, Kathleen; Bradley, Yong; Daley, Brian J.; Frederick-Dyer, Katherine; Tourassi, Georgia

    2012-02-01

    The majority of clinical content-based image retrieval (CBIR) studies disregard human perception subjectivity, aiming to duplicate the consensus expert assessment of visual similarity on example cases. The purpose of our study is twofold: (i) to better discern the extent of human perception subjectivity when assessing the visual similarity of two images with similar semantic content, and (ii) to explore the feasibility of personalized predictive modeling of visual similarity. We conducted a human observer study in which five observers of various expertise were shown ninety-nine triplets of mammographic masses with similar BI-RADS descriptors and were asked to select the two masses with the highest visual relevance. Pairwise agreement ranged between poor and fair among the five observers, as assessed by the kappa statistic. The observers' self-consistency rate was remarkably low, based on repeated questions where either the orientation or the presentation order of a mass was changed. Various machine learning algorithms were explored to determine whether they can predict each observer's personalized selection using textural features. Many algorithms performed with accuracy that exceeded each observer's self-consistency rate, as determined using a cross-validation scheme. This accuracy was statistically significantly higher than would be expected by chance alone (two-tailed p-values ranged between 0.001 and 0.01 for all five personalized models). The study confirmed that human perception subjectivity should be taken into account when developing CBIR-based medical applications.

  1. Further Improvements to Linear Mixed Models for Genome-Wide Association Studies

    Science.gov (United States)

    Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David

    2014-11-01

    We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science.
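
    The two central objects of these improvements are easy to sketch: a GSM computed from a standardized genotype matrix, and a mixture of two component GSMs. This is a hedged illustration only; the SNP-selection step and the variance-component estimation of an actual LMM are omitted:

```python
import numpy as np

def gsm(snps):
    """Genetic similarity matrix from an (individuals x SNPs) genotype
    matrix: standardize each SNP column, then K = X X^T / m."""
    X = (snps - snps.mean(axis=0)) / (snps.std(axis=0) + 1e-9)
    return X @ X.T / snps.shape[1]

def mixed_gsm(snps_all, snps_selected, weight=0.5):
    """Mixture of two component GSMs: one built from all SNPs, one from
    SNPs that predict the phenotype well (the selection itself, and the
    fitting of the mixing weight, are not shown)."""
    return weight * gsm(snps_all) + (1 - weight) * gsm(snps_selected)
```

    In the paper's scheme the mixing weight would itself be fit to the data; here it is simply a parameter.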

  2. Model-based software process improvement

    Science.gov (United States)

    Zettervall, Brenda T.

    1994-01-01

    The activities of a field test site for the Software Engineering Institute's software process definition project are discussed. Products tested included the improvement model itself, descriptive modeling techniques, the CMM level 2 framework document, and the use of process definition guidelines and templates. The software process improvement model represents a five stage cyclic approach for organizational process improvement. The cycles consist of the initiating, diagnosing, establishing, acting, and leveraging phases.

  3. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test-set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares (PLS) regression, is being used as part of the current ChemCam quantitative calibration, but it is applicable to any multivariate regression method and may yield similar improvements.
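
    The blending idea can be sketched with ordinary least squares standing in for the PLS regressions: a full-range model makes a first-pass prediction, which selects (and, within the overlap of two composition ranges, linearly mixes) the sub-model predictions. The data and ranges below are hypothetical:

```python
import numpy as np

def fit_lsq(X, y):
    """Least-squares fit with intercept (a stand-in for PLS regression)."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

def blended_predict(X, full_coef, low_coef, high_coef, overlap):
    """The full-range model's first-pass prediction chooses how to mix
    the low- and high-range sub-models across the overlap region."""
    first = predict(full_coef, X)
    w = np.clip((first - overlap[0]) / (overlap[1] - overlap[0]), 0.0, 1.0)
    return (1.0 - w) * predict(low_coef, X) + w * predict(high_coef, X)

# Hypothetical one-feature calibration data with a linear "composition".
X = np.linspace(0.0, 10.0, 21).reshape(-1, 1)
y = 2.0 * X.ravel()
full = fit_lsq(X, y)
low = fit_lsq(X[X.ravel() <= 5.0], y[X.ravel() <= 5.0])
high = fit_lsq(X[X.ravel() >= 4.0], y[X.ravel() >= 4.0])
pred = blended_predict(X, full, low, high, overlap=(8.0, 10.0))
```

    Because each sub-model only has to cover a limited composition range, matrix effects within that range are easier for the regression to absorb, which is where the RMSEP improvement comes from.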

  4. Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models

    Directory of Open Access Journals (Sweden)

    Jin Dai

    2014-01-01

    The similarity between objects is a core research area of data mining. To reduce the interference of the uncertainty of natural language, a similarity measurement between normal cloud models is adopted for text classification research. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed, which can efficiently accomplish conversion between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the VSM model, the qualitative text concept extracted from each category is "jumped up" into a whole-category concept. According to the cloud similarity between a test text and each category concept, the test text is assigned to the most similar category. A comparison among different text classifiers on different feature selection sets shows that not only does CCJU-TC have a strong ability to adapt to different text features, but its classification performance is also better than that of traditional classifiers.

  5. Hierarchical Model for the Similarity Measurement of a Complex Holed-Region Entity Scene

    Directory of Open Access Journals (Sweden)

    Zhanlong Chen

    2017-11-01

    Complex multi-holed-region entity scenes (i.e., sets of random regions with holes) are common in spatial database systems, spatial query languages, and Geographic Information Systems (GIS). A multi-holed region (a region with an arbitrary number of holes) is an abstraction of the real world that primarily represents geographic objects that have more than one interior boundary, such as areas that contain several lakes or lakes that contain islands. When the similarity of two complex holed-region entity scenes is measured, the number of regions in the scenes and the number of holes in the regions usually differ between the two scenes, which complicates the matching relationships of holed regions and holes. The aim of this research is to develop several holed-region similarity metrics and propose a hierarchical model to comprehensively measure the similarity between two complex holed-region entity scenes. The procedure first divides a complex entity scene into three layers: a complex scene, a micro-spatial-scene, and a simple entity (hole). The relationships between adjacent layers are considered to be sets of relationships, and each level of similarity measurement is nested with the adjacent one. Next, entity matching is performed from top to bottom, while the similarity results are calculated from local to global. In addition, we utilize position graphs to describe the distribution of the holed regions and subsequently describe the directions between the holes using a feature matrix. A case study that uses the Great Lakes in North America in 1986 and 2015 as experimental data illustrates the entire similarity measurement process between two complex holed-region entity scenes. The experimental results show that the hierarchical model accounts for the relationships of the different layers in the entire complex holed-region entity scene.
The model can effectively calculate the similarity of complex holed-region entity scenes, even if the

  6. Prediction of the human response time with the similarity and quantity of information

    International Nuclear Information System (INIS)

    Lee, Sungjin; Heo, Gyunyoung; Chang, Soon Heung

    2006-01-01

    Memory is one of the brain processes that are important when trying to understand how people process information. Although a large number of studies have been made on human performance, little is known about the similarity effect in human performance. The purpose of this paper is to propose and validate a quantitative, predictive model of human response time in the user interface based on the concept of similarity. However, it is not easy to explain human performance with similarity or information amount alone. We are confronted by two difficulties: building a quantitative model of human response time from similarity, and validating the proposed model experimentally. We built the quantitative model on the basis of Hick's law and the law of practice. In addition, we validated the model under various experimental conditions by measuring participants' response times in a computer-based display environment. Experimental results reveal that human performance is improved by the user interface's similarity. We believe that the proposed model is useful for the user interface design and evaluation phases.
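
    The two ingredients such a model builds on can be written down directly. The constants below are illustrative, not the values fitted in the experiments:

```python
import math

def hick_rt(n_choices, a=0.2, b=0.15):
    """Hick's law: response time grows with the logarithm of the number
    of equally likely alternatives (a, b are illustrative constants)."""
    return a + b * math.log2(n_choices + 1)

def practiced_rt(base_rt, n_trials, alpha=0.4):
    """Power law of practice: response time falls as trials accumulate."""
    return base_rt * n_trials ** (-alpha)
```

    A similarity effect would enter such a model by modulating the constants: interfaces similar to ones already practiced should behave as if some practice trials had already been accumulated.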

  7. Traditional and robust vector selection methods for use with similarity based models

    International Nuclear Information System (INIS)

    Hines, J. W.; Garvey, D. R.

    2006-01-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
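
    A hedged sketch of two of the pieces described above: an AAKR estimate as a Gaussian-kernel weighted average of training vectors, and the min-max selection rule that keeps any vector holding a variable's extreme. The other selection methods discussed in the paper are not shown:

```python
import numpy as np

def aakr_predict(train, query, bandwidth=1.0):
    """Auto-associative kernel regression: the corrected estimate of a
    query vector is a Gaussian-kernel weighted average of the training
    vectors, weighted by their distance to the query."""
    d = np.linalg.norm(train - query, axis=1)
    w = np.exp(-(d ** 2) / (2 * bandwidth ** 2))
    w = w / (w.sum() + 1e-12)
    return w @ train

def min_max_select(train):
    """Min-max vector selection: keep every training vector that holds
    the minimum or maximum of at least one variable."""
    keep = set()
    for j in range(train.shape[1]):
        keep.add(int(train[:, j].argmin()))
        keep.add(int(train[:, j].argmax()))
    return train[sorted(keep)]
```

    Selecting prototypes first and then running AAKR on the reduced set is what relieves the query-time cost the abstract describes.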

  8. Measuring transferring similarity via local information

    Science.gov (United States)

    Yin, Likang; Deng, Yong

    2018-05-01

    Recommender systems have developed along with web science, and how to measure the similarity between users is crucial for collaborative filtering recommendation. Many efficient models have been proposed (e.g., the Pearson coefficient) to measure direct correlation. However, direct correlation measures are greatly affected by the sparsity of the dataset: they present an inauthentic similarity if two users have very few commonly selected objects. Transferring similarity overcomes this drawback by considering their common neighbors (i.e., the intermediates). Yet transferring similarity also has its drawback, since it can only provide an interval of similarity. To overcome these limitations, we propose the Belief Transferring Similarity (BTS) model. The contributions of the BTS model are: (1) it addresses the issue of dataset sparsity by considering high-order similarity; (2) it transforms an uncertain interval into a certain state based on fuzzy systems theory; and (3) it is able to combine the transferring similarity of different intermediates using an information fusion method. Finally, we compare the BTS model with nine different link prediction methods in nine different networks, and we also illustrate the convergence property and efficiency of the BTS model.
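
    The idea of routing similarity through intermediates can be sketched as follows. The fusion rule here, a simple maximum over two-step paths, is an illustrative stand-in for the fuzzy belief-fusion of the actual BTS model:

```python
def transferring_similarity(direct, u, w):
    """Estimate the similarity of two users with no (or few) common
    selections by routing through intermediate users: each path u-v-w
    contributes the product of its direct similarities, and paths are
    combined by taking the best one (a stand-in for BTS fusion)."""
    best = direct.get((u, w), 0.0)
    for (a, b), s_uv in direct.items():
        if a == u:
            s_vw = direct.get((b, w), 0.0)
            best = max(best, s_uv * s_vw)
    return best
```

    With `direct = {('u', 'v'): 0.8, ('v', 'w'): 0.9}`, users `u` and `w` share no direct similarity but receive a transferred score through the intermediate `v`.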

  9. Improving Earth/Prediction Models to Improve Network Processing

    Science.gov (United States)

    Wagner, G. S.

    2017-12-01

    The United States Atomic Energy Detection System (USAEDS) primary seismic network consists of a relatively small number of arrays and three-component stations. The relatively small number of stations in the USAEDS primary network makes it both necessary and feasible to optimize both station and network processing. Station processing improvements include detector tuning efforts that use Receiver Operator Characteristic (ROC) curves to help judiciously set acceptable Type 1 (false) vs. Type 2 (miss) error rates. Other station processing improvements include the use of empirical/historical observations and continuous background noise measurements to compute time-varying, maximum likelihood probability of detection thresholds. The USAEDS network processing software makes extensive use of the azimuth and slowness information provided by frequency-wavenumber analysis at array sites, and polarization analysis at three-component sites. Most of the improvements in USAEDS network processing are due to improvements in the models used to predict azimuth, slowness, and probability of detection. Kriged travel-time, azimuth, and slowness corrections, and their associated uncertainties, are computed using a ground truth database. Improvements in station processing and the use of improved models for azimuth, slowness, and probability of detection have led to significant improvements in USAEDS network processing.

  10. On the scale similarity in large eddy simulation. A proposal of a new model

    International Nuclear Information System (INIS)

    Pasero, E.; Cannata, G.; Gallerano, F.

    2004-01-01

    Among the most common LES models in the literature are the eddy viscosity-type models. In these models the subgrid scale (SGS) stress tensor is related to the resolved strain rate tensor through a scalar eddy viscosity coefficient. These models are affected by three fundamental drawbacks: they are purely dissipative, i.e., they cannot account for backscatter; they assume that the principal axes of the resolved strain rate tensor and the SGS stress tensor are aligned; and they assume that a local balance exists between the SGS turbulent kinetic energy production and its dissipation. Scale similarity models (SSM) were created to overcome the drawbacks of eddy viscosity-type models. SSMs, such as that of Bardina et al. and that of Liu et al., assume that scales adjacent in wave number space present similar hydrodynamic features. This similarity makes it possible to effectively relate the unresolved scales, represented by the modified Cross tensor and the modified Reynolds tensor, to the smallest resolved scales, represented by the modified Leonard tensor or by a term obtained through multiple filtering operations at different scales. The models of Bardina et al. and Liu et al. are affected, however, by a fundamental drawback: they are not dissipative enough, i.e., they are not able to ensure a sufficient energy drain from the resolved scales of motion to the unresolved ones. In this paper it is shown that this drawback is due to the fact that these models do not take into account the smallest unresolved scales, where most of the dissipation of turbulent SGS energy takes place. A new scale similarity LES model that ensures an adequate drain of energy from the resolved scales to the unresolved ones is presented. The SGS stress tensor is aligned with the modified Leonard tensor. The coefficient of proportionality is expressed in terms of the trace of the modified Leonard tensor and of the SGS kinetic energy (computed by solving its balance equation).
The

  11. An application of superpositions of two-state Markovian sources to the modelling of self-similar behaviour

    DEFF Research Database (Denmark)

    Andersen, Allan T.; Nielsen, Bo Friis

    1997-01-01

    We present a modelling framework and a fitting method for modelling second order self-similar behaviour with the Markovian arrival process (MAP). The fitting method is based on fitting to the autocorrelation function of counts a second order self-similar process. It is shown that with this fittin...

  12. Improving Language Production Using Subtitled Similar Task Videos

    Science.gov (United States)

    Arslanyilmaz, Abdurrahman; Pedersen, Susan

    2010-01-01

    This study examines the effects of subtitled similar task videos on language production by nonnative speakers (NNSs) in an online task-based language learning (TBLL) environment. Ten NNS-NNS dyads collaboratively completed four communicative tasks, using an online TBLL environment specifically designed for this study and a chat tool in…

  13. Active surface model improvement by energy function optimization for 3D segmentation.

    Science.gov (United States)

    Azimifar, Zohreh; Mohaddesi, Mahsa

    2015-04-01

    This paper proposes an optimized and efficient active surface model obtained by improving the energy functions, the searching method, the neighborhood definition, and the resampling criterion. Extracting an accurate surface of a desired object from a number of 3D images using active surface and deformable models plays an important role in computer vision, especially medical image processing. Different powerful segmentation algorithms have been suggested to address the limitations associated with model initialization, poor convergence to surface concavities, and slow convergence rate. This paper proposes a method to improve one of the strongest and most recent segmentation algorithms, namely the Decoupled Active Surface (DAS) method. We consider the gradient of a wavelet edge-extracted image and local phase coherence as external energy to extract more information from images, and we use a curvature integral as internal energy to focus on high-curvature region extraction. Similarly, we use resampling of points and a line search for point selection to improve the accuracy of the algorithm. We further employ an estimate of the desired object as the initialization for the active surface model. A number of tests and experiments have been done, and the results show improvements with regard to the extracted surface accuracy and computational time of the presented algorithm compared with the best and most recent active surface models. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Improvement of training set structure in fusion data cleaning using Time-Domain Global Similarity method

    International Nuclear Information System (INIS)

    Liu, J.; Lan, T.; Qin, H.

    2017-01-01

    Traditional data cleaning identifies dirty data by classifying original data sequences, which is a class-imbalanced problem since the proportion of incorrect data is much less than the proportion of correct ones for most diagnostic systems in Magnetic Confinement Fusion (MCF) devices. When using machine learning algorithms to classify diagnostic data based on class-imbalanced training set, most classifiers are biased towards the major class and show very poor classification rates on the minor class. By transforming the direct classification problem about original data sequences into a classification problem about the physical similarity between data sequences, the class-balanced effect of Time-Domain Global Similarity (TDGS) method on training set structure is investigated in this paper. Meanwhile, the impact of improved training set structure on data cleaning performance of TDGS method is demonstrated with an application example in EAST POlarimetry-INTerferometry (POINT) system.

  15. Investigating Correlation between Protein Sequence Similarity and Semantic Similarity Using Gene Ontology Annotations.

    Science.gov (United States)

    Ikram, Najmul; Qadir, Muhammad Abdul; Afzal, Muhammad Tanvir

    2018-01-01

    Sequence similarity is a commonly used measure to compare proteins. With the increasing use of ontologies, semantic (function) similarity is gaining importance. The correlation between these measures has been applied in the evaluation of new semantic similarity methods and in protein function prediction. In this research, we investigate the relationship between the two similarity methods. The results suggest the absence of a strong correlation between sequence and semantic similarities. There is a large number of proteins with low sequence similarity and high semantic similarity. We observe that Pearson's correlation coefficient is not sufficient to explain the nature of this relationship. Interestingly, term semantic similarity values above 0 and below 1 do not seem to play a role in improving the correlation. That is, the correlation coefficient depends only on the number of common GO terms in the proteins under comparison, and the semantic similarity measurement method does not influence it. Semantic similarity and sequence similarity exhibit distinct behavior. These findings have significant implications for future work on protein comparison, and will help in understanding the semantic similarity between proteins.

  16. Improved models of dense anharmonic lattices

    Energy Technology Data Exchange (ETDEWEB)

    Rosenau, P., E-mail: rosenau@post.tau.ac.il; Zilburg, A.

    2017-01-15

    We present two improved quasi-continuous models of dense, strictly anharmonic chains. The direct expansion, which includes the leading effect due to lattice dispersion, results in a Boussinesq-type PDE with a compacton as its basic solitary mode. Without increasing its complexity, we improve the model by including additional terms in the expanded interparticle potential, with the resulting compacton having a milder singularity at its edges. Particular care is applied to the Hertz potential due to its non-analyticity. Since, however, the PDEs of both the basic and the improved model are ill posed, they are unsuitable for a study of chain dynamics. Using the bond length as a state variable, we manipulate its dispersion and derive a well-posed fourth-order PDE. - Highlights: • An improved PDE model of a Newtonian lattice renders compacton solutions. • Compactons are classical solutions of the improved model and hence amenable to standard analysis. • An alternative well-posed model enables the study of head-on interactions of the lattice's solitary waves. • Well-posed modeling of the Hertz potential.

  17. Pulmonary parenchyma segmentation in thin CT image sequences with spectral clustering and geodesic active contour model based on similarity

    Science.gov (United States)

    He, Nana; Zhang, Xiaolong; Zhao, Juanjuan; Zhao, Huilan; Qiang, Yan

    2017-07-01

    While the popular thin-layer scanning technology of spiral CT has helped to improve the diagnosis of lung diseases, the large volumes of scanning images produced by the technology also dramatically increase the load on physicians in lesion detection. Computer-aided diagnosis techniques such as lesion segmentation in thin CT sequences have been developed to address this issue, but it remains a challenge to achieve high segmentation efficiency and accuracy without much manual intervention. In this paper, we present our research on automated segmentation of lung parenchyma with an improved geodesic active contour model, namely the geodesic active contour model based on similarity (GACBS). Combining the spectral clustering algorithm based on Nystrom (SCN) with GACBS, our algorithm first extracts key image slices, then uses these slices to generate an initial contour of the pulmonary parenchyma for un-segmented slices with an interpolation algorithm, and finally segments the lung parenchyma of the un-segmented slices. Experimental results show that the segmentation results generated by our method are close to those of manual segmentation, with an average volume overlap ratio of 91.48%.
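
    The Nystrom trick behind SCN-style spectral clustering can be sketched in a few lines of NumPy: the eigenproblem is solved only on a small landmark block of the affinity matrix and extended to the remaining points. This is a toy sketch on 2-D points, not the paper's CT pipeline; `sigma`, `k` and the landmark count are illustrative.

```python
import numpy as np

def nystrom_embedding(X, m, sigma=1.0, k=2, seed=0):
    """Approximate the leading eigenvectors of a Gaussian affinity matrix
    from m landmark samples (Nystrom method) -- a sketch only."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    land, rest = idx[:m], idx[m:]

    def affinity(P, Q):
        d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    A = affinity(X[land], X[land])        # m x m landmark block
    B = affinity(X[land], X[rest])        # m x (n - m) cross block
    vals, vecs = np.linalg.eigh(A)        # small eigenproblem only
    vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]
    emb = np.empty((len(X), k))
    emb[land] = vecs                      # landmark embedding
    emb[rest] = B.T @ vecs / vals         # Nystrom extension to the rest
    return emb

# Two well-separated toy clusters standing in for slice feature vectors
X = np.concatenate([np.random.default_rng(1).normal(0, 0.1, (10, 2)),
                    np.random.default_rng(2).normal(3, 0.1, (10, 2))])
emb = nystrom_embedding(X, m=6)
```

    A clustering step (e.g. k-means on `emb`) would then pick out the key slices.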

  18. Driving clinical study efficiency by using a productivity breakdown model: comparative evaluation of a global clinical study and a similar Japanese study.

    Science.gov (United States)

    Takahashi, K; Sengoku, S; Kimura, H

    2011-02-01

    A fundamental management imperative of pharmaceutical companies is to contain the surging costs of developing and launching drugs globally. Clinical studies are a research and development (R&D) cost driver. The objective of this study was to develop a productivity breakdown model, or a key performance indicator (KPI) tree, for an entire clinical study and to use it to compare a global clinical study with a similar Japanese study, and thereby to identify means of improving study productivity. We developed the new clinical study productivity breakdown model, covering operational aspects and cost factors. Elements for improving clinical study productivity were assessed from a management viewpoint by comparing empirical tracking data from a global clinical study with those from a Japanese study with similar protocols. The following unique and material differences, beyond simple international differences in the cost of living, that could affect the efficiency of future clinical trials were identified: (i) more frequent site visits in the Japanese study, (ii) head counts at the Japanese study sites more than double those of the global study and (iii) an enrollment time window at the Japanese study sites about a third the length of the global study's. We identified major differences in the performance of the two studies. These findings demonstrate the potential of the KPI tree for improving clinical study productivity. Trade-offs, such as those between reduction in head count at study sites and expansion of the enrollment time window, must be considered carefully. © 2010 Blackwell Publishing Ltd.

  19. Improved steamflood analytical model

    Energy Technology Data Exchange (ETDEWEB)

    Chandra, S.; Mamora, D.D. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Texas A and M Univ., TX (United States)

    2005-11-01

    Predicting the performance of steam flooding can help in the proper execution of enhanced oil recovery (EOR) processes. The Jones model is often used for analytical steam flooding performance prediction, but it does not accurately predict oil production peaks. In this study, an improved steam flood model was developed by modifying 2 of the 3 components of the capture factor in the Jones model. The modifications were based on simulation results from a Society of Petroleum Engineers (SPE) comparative project case model. The production performance of a 5-spot steamflood pattern unit was simulated and compared with results obtained from the Jones model. Three reservoir types were simulated through the use of 3-D Cartesian black oil models. In order to correlate the simulation and the Jones analytical model results for the start and height of the production peak, the dimensionless steam zone size was modified to account for a decrease in oil viscosity during steam flooding and its dependence on the steam injection rate. In addition, the dimensionless volume of displaced oil produced was modified from its square-root format to an exponential form. The modified model improved results for production performance by up to 20 years of simulated steam flooding, compared to the Jones model. Results agreed with simulation results for 13 different cases, including 3 different sets of reservoir and fluid properties. Reservoir engineers will benefit from the improved accuracy of the model. Oil displacement calculations were based on methods proposed in earlier research, in which the oil displacement rate is a function of cumulative oil steam ratio. The cumulative oil steam ratio is a function of overall thermal efficiency. Capture factor component formulae were presented, as well as charts of oil production rates and cumulative oil-steam ratios for various reservoirs. 13 refs., 4 tabs., 29 figs.

  20. An improved DPSO with mutation based on similarity algorithm for optimization of transmission lines loading

    International Nuclear Information System (INIS)

    Shayeghi, H.; Mahdavi, M.; Bagheri, A.

    2010-01-01

    Static transmission network expansion planning (STNEP) plays a principal role in power system planning and should be evaluated carefully. Various methods have been presented to solve the STNEP problem, but only one of them considers the lines' adequacy rate at the end of the planning horizon and optimizes the problem by discrete particle swarm optimization (DPSO). DPSO is a population-based intelligence algorithm that exhibits good performance on large-scale, discrete and non-linear optimization problems like STNEP. However, as the algorithm runs, the particles become more and more similar and cluster around the best particle in the swarm, which makes the swarm converge prematurely to a local solution. To overcome these drawbacks while considering the lines' adequacy rate, in this paper expansion planning is implemented by merging a line-loading parameter into the STNEP and inserting the investment cost into the fitness function constraints, using an improved DPSO algorithm. The proposed improved DPSO introduces a new concept, collectivity, based on the similarity between a particle and the current global best particle in the swarm, which can prevent the premature convergence of DPSO around a local solution. The proposed method has been tested on Garver's network and a real transmission network in Iran, and compared with the DPSO-based method for solving the TNEP problem. The results show that, by preventing premature convergence, the proposed improved DPSO-based method increases network adequacy considerably at almost the same expansion cost. The convergence curves of both methods also show that the proposed algorithm solves the STNEP problem more precisely than the DPSO approach.
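
    The similarity-triggered mutation idea can be sketched as follows: particles whose bit strings are too close to the global best get randomly perturbed to preserve swarm diversity. This is a sketch of the diversity mechanism only, not the paper's full DPSO update rule; the threshold and mutation rate are illustrative.

```python
import random

def similarity(x, gbest):
    """Fraction of matching bits between a particle and the global best."""
    return sum(a == b for a, b in zip(x, gbest)) / len(x)

def mutate(x, rate=0.2):
    """Flip each bit with probability `rate` to restore diversity."""
    return [b ^ 1 if random.random() < rate else b for b in x]

def diversify(swarm, gbest, sim_threshold=0.9):
    """Mutate particles that are too similar to the global best, so the
    swarm does not collapse prematurely onto one local solution."""
    return [mutate(x) if similarity(x, gbest) > sim_threshold else x
            for x in swarm]

random.seed(1)
swarm = [[random.randint(0, 1) for _ in range(10)] for _ in range(6)]
gbest = swarm[0][:]
new_swarm = diversify(swarm, gbest)
```

    In a full STNEP solver this step would run between the velocity/position update and the fitness evaluation of each generation.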

  1. Estimating Evapotranspiration from an Improved Two-Source Energy Balance Model Using ASTER Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Qifeng Zhuang

    2015-11-01

    Full Text Available Reliably estimating the turbulent fluxes of latent and sensible heat at the Earth's surface by remote sensing is important for research on the terrestrial hydrological cycle. This paper presents a practical approach for mapping surface energy fluxes using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images with an improved two-source energy balance (TSEB) model. The original TSEB approach may overestimate latent heat flux under vegetative stress conditions, as has also been reported in recent research. We replaced the Priestley-Taylor equation used in the original TSEB model with one that uses plant moisture and temperature constraints based on the PT-JPL model to obtain a more accurate canopy latent heat flux for the model solution. The ASTER data and field observations employed in this study were collected over corn fields in arid regions of the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) area, China. The results were validated by measurements from eddy covariance (EC) systems, and the surface energy flux estimates of the improved TSEB model are close to the ground truth. A comparison of the results from the original and improved TSEB models indicates that the improved method more accurately estimates the sensible and latent heat fluxes, generating more precise daily evapotranspiration (ET) estimates under vegetative stress conditions.
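
    For reference, the Priestley-Taylor formulation that TSEB uses for canopy latent heat flux is commonly written as below (standard form from the TSEB literature; the exact moisture and temperature constraint functions the paper adopts from PT-JPL may differ):

```latex
% Canopy latent heat flux, Priestley-Taylor form:
%   alpha_PT ~ 1.26 (Priestley-Taylor coefficient),
%   f_g      = green vegetation fraction,
%   Delta    = slope of the saturation vapor pressure curve,
%   gamma    = psychrometric constant,
%   R_{n,c}  = net radiation absorbed by the canopy.
\[
  \lambda E_c \;=\; \alpha_{PT}\, f_g\, \frac{\Delta}{\Delta + \gamma}\, R_{n,c}
\]
```

    The improvement described in the abstract replaces the fixed coefficient with moisture- and temperature-dependent constraints, which reduces the overestimation under vegetative stress.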

  2. Hierarchical modeling of systems with similar components: A framework for adaptive monitoring and control

    International Nuclear Information System (INIS)

    Memarzadeh, Milad; Pozzi, Matteo; Kolter, J. Zico

    2016-01-01

    System management includes the selection of maintenance actions depending on the available observations: when a system is made up of components known to be similar, data collected on one component are also relevant for the management of the others. This is typically the case for wind farms, which are made up of similar turbines. Optimal management of wind farms is an important task because of the high cost of turbine operation and maintenance. In this context, we recently proposed a method for planning and learning at system level, called PLUS, built upon the Partially Observable Markov Decision Process (POMDP) framework, which treats transition and emission probabilities as random variables and is therefore suitable for including model uncertainty. PLUS models the components as independent or identical. In this paper, we extend that formulation, allowing for a weaker similarity among components. The proposed approach, called Multiple Uncertain POMDP (MU-POMDP), models the components as POMDPs and assumes the corresponding parameters to be dependent random variables. Through this framework, we can calibrate specific degradation and emission models for each component while, at the same time, processing observations at system level. We compare the performance of the proposed MU-POMDP with PLUS, and discuss its potential and computational complexity. - Highlights: • A computational framework is proposed for adaptive monitoring and control. • It adopts a scheme based on Markov Chain Monte Carlo for inference and learning. • Hierarchical Bayesian modeling is used to allow a system-level flow of information. • Results show potential of significant savings in management of wind farms.
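
    The "weaker similarity" idea can be illustrated with a tiny hierarchical sampling sketch: each component's parameter is drawn around a shared system-level value, so components are similar but not identical (PLUS's two extremes would be fully shared or fully independent parameters). All numbers are illustrative, not the paper's calibrated model.

```python
import random

def sample_component_params(n_components, system_mean=0.1, concentration=50.0):
    """Draw per-component degradation probabilities from a Beta prior
    centred on a shared system-level mean.  `concentration` controls how
    tightly components cluster around that mean -- the 'strength' of the
    similarity assumption."""
    a = system_mean * concentration            # Beta pseudo-counts
    b = (1.0 - system_mean) * concentration
    return [random.betavariate(a, b) for _ in range(n_components)]

random.seed(0)
params = sample_component_params(5)
# Larger `concentration` ties the turbines more tightly to the system mean
```

    In the full MU-POMDP these dependent parameters are inferred jointly by MCMC from observations pooled at system level.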

  3. Brief Communication: Upper Air Relaxation in RACMO2 Significantly Improves Modelled Interannual Surface Mass Balance Variability in Antarctica

    Science.gov (United States)

    van de Berg, W. J.; Medley, B.

    2016-01-01

    The Regional Atmospheric Climate Model (RACMO2) has been a powerful tool for improving surface mass balance (SMB) estimates from GCMs or reanalyses. However, new yearly SMB observations for West Antarctica show that the modelled interannual variability in SMB is poorly simulated by RACMO2, in contrast to ERA-Interim, which resolves this variability well. In an attempt to remedy RACMO2 performance, we included additional upper-air relaxation (UAR) in RACMO2. With UAR, the correlation to observations is similar for RACMO2 and ERA-Interim. The spatial SMB patterns and ice-sheet-integrated SMB modelled using UAR remain very similar to the estimates of RACMO2 without UAR. We only observe an upstream smoothing of precipitation in regions with very steep topography like the Antarctic Peninsula. We conclude that UAR is a useful improvement for regional climate model simulations, although results in regions with steep topography should be treated with care.

  4. Improving surgeon utilization in an orthopedic department using simulation modeling

    Directory of Open Access Journals (Sweden)

    Simwita YW

    2016-10-01

    Full Text Available Yusta W Simwita, Berit I Helgheim Department of Logistics, Molde University College, Molde, Norway Purpose: Worldwide, more than two billion people lack appropriate access to surgical services due to the mismatch between existing human resources and patient demand. Improving the utilization of the existing workforce capacity can reduce the gap between surgical demand and available workforce capacity. In this paper, the authors use discrete event simulation to explore the care process at an orthopedic department, with the main focus on improving surgeon utilization while minimizing patient wait time. Methods: The authors collaborated with orthopedic department personnel to map the current orthopedic care process in order to identify factors that lead to poor surgeon utilization and high patient waiting time. The authors used an observational approach to collect data. The developed model was validated by comparing the simulation output with actual patient data collected from the studied orthopedic care process. The authors developed a proposal scenario to show how to improve surgeon utilization. Results: The simulation results showed that if ancillary services could be performed before the start of clinic examination services, the orthopedic care process could be greatly improved, that is, surgeon utilization increased and patient waiting time reduced. Simulation results demonstrate that with improved surgeon utilization, up to a 55% increase in future demand can be accommodated without patients exceeding the current waiting time at this clinic, thus improving patient access to health care services. Conclusion: This study shows how simulation modeling can be used to improve health care processes. This study was limited to a single care process; however, the findings can be applied to improve other orthopedic care processes with similar operational characteristics. Keywords: waiting time, patient, health care process
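
    The scenario comparison can be sketched with a minimal discrete-event model: one surgeon serves a queue of patients, and the surgeon's slot either holds only the exam (ancillary services done beforehand) or also absorbs the ancillary time. All times and counts are illustrative, not the study's observed data.

```python
def simulate_clinic(n_patients, exam_time, ancillary_time, ancillary_first):
    """Toy single-server clinic: returns (average patient wait, session end).
    If ancillary services run before the clinic session, each surgeon slot
    is just the exam; otherwise ancillary work lengthens every slot."""
    surgeon_free = 0.0
    total_wait = 0.0
    service = exam_time if ancillary_first else exam_time + ancillary_time
    for _ in range(n_patients):
        arrival = 0.0                      # all patients present at opening
        start = max(arrival, surgeon_free)
        total_wait += start - arrival
        surgeon_free = start + service
    return total_wait / n_patients, surgeon_free

wait_pre, day_pre = simulate_clinic(10, 20.0, 15.0, ancillary_first=True)
wait_post, day_post = simulate_clinic(10, 20.0, 15.0, ancillary_first=False)
# Pre-clinic ancillary work cuts both the average wait and the session length
```

    Even this toy model reproduces the qualitative finding: moving ancillary services ahead of the examination frees surgeon capacity to absorb extra demand.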

  5. Uncovering highly obfuscated plagiarism cases using fuzzy semantic-based similarity model

    Directory of Open Access Journals (Sweden)

    Salha M. Alzahrani

    2015-07-01

    Full Text Available Highly obfuscated plagiarism cases contain unseen and obfuscated texts, which pose difficulties for existing plagiarism detection methods. A fuzzy semantic-based similarity model for uncovering obfuscated plagiarism is presented and compared with five state-of-the-art baselines. Semantic relatedness between words is studied based on part-of-speech (POS) tags and WordNet-based similarity measures. Fuzzy rules are introduced to assess the semantic distance between short source and suspicious texts, implementing the semantic relatedness between words as a membership function of a fuzzy set. In order to minimize the number of false positives and false negatives, a learning method that combines a permission threshold and a variation threshold is used to decide true plagiarism cases. The proposed model and the baselines are evaluated on 99,033 ground-truth annotated cases extracted from different datasets, including 11,621 (11.7%) handmade paraphrases, 54,815 (55.4%) artificial plagiarism cases, and 32,578 (32.9%) plagiarism-free cases. We conduct extensive experimental verification, including a study of the effects of different segmentation schemes and parameter settings. Results are assessed using precision, recall, F-measure and granularity on stratified 10-fold cross-validation data. The statistical analysis using paired t-tests shows that the proposed approach is statistically significant in comparison with the baselines, which demonstrates the competence of the fuzzy semantic-based model to detect plagiarism cases beyond literal plagiarism. Additionally, an analysis of variance (ANOVA) test shows the effectiveness of the different segmentation schemes used with the proposed approach.
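
    The idea of treating word relatedness as a fuzzy membership degree can be sketched as follows. The trapezoidal thresholds and the toy word-similarity oracle below are illustrative stand-ins for the paper's tuned parameters and WordNet-based measures.

```python
def fuzzy_membership(sim, low=0.3, high=0.8):
    """Map a raw word-similarity score to a membership degree in the fuzzy
    set 'semantically related' (piecewise-linear; thresholds illustrative)."""
    if sim <= low:
        return 0.0
    if sim >= high:
        return 1.0
    return (sim - low) / (high - low)

def text_similarity(words_a, words_b, word_sim):
    """Fuzzy semantic similarity of two short texts: each word takes its
    best fuzzy match in the other text; scores are averaged both ways."""
    def one_way(src, dst):
        scores = [max(fuzzy_membership(word_sim(w, v)) for v in dst) for w in src]
        return sum(scores) / len(scores)
    return 0.5 * (one_way(words_a, words_b) + one_way(words_b, words_a))

# Toy similarity oracle standing in for a WordNet-based measure:
toy = {("car", "automobile"): 0.9, ("fast", "quick"): 0.85}
def word_sim(a, b):
    return 1.0 if a == b else toy.get((a, b), toy.get((b, a), 0.0))

s = text_similarity(["the", "fast", "car"], ["the", "quick", "automobile"], word_sim)
```

    A paraphrased pair like this scores high even with zero literal word overlap beyond "the", which is the behavior needed to catch obfuscated plagiarism.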

  6. Modeling of scroll compressors - Improvements

    Energy Technology Data Exchange (ETDEWEB)

    Duprez, Marie-Eve; Dumont, Eric; Frere, Marc [Thermodynamics Department, Universite de Mons - Faculte Polytechnique, 31 bd Dolez, 7000 Mons (Belgium)

    2010-06-15

    This paper presents an improvement of a previously published scroll compressor model. The improved model allows the calculation of the refrigerant mass flow rate, power consumption and heat flow rate that would be released at the condenser of a heat pump equipped with the compressor, from knowledge of the operating conditions and parameters. Both the basic and improved models have been tested on scroll compressors using different refrigerants. This study has been limited to compressors with a maximum electrical power of 14 kW, evaporation temperatures ranging from -40 to 15 °C and condensation temperatures from 10 to 75 °C. The average discrepancies in mass flow rate, power consumption and heat flow rate are respectively 0.50%, 0.93% and 3.49%. Using a global parameter determination (based on data for several refrigerants), this model can predict the behavior of a compressor with another fluid for which no manufacturer data are available. (author)

  7. Self-similar pattern formation and continuous mechanics of self-similar systems

    Directory of Open Access Journals (Sweden)

    A. V. Dyskin

    2007-01-01

    Full Text Available In many cases, the critical state of systems that have reached a threshold is characterised by self-similar pattern formation. We produce an example of pattern formation of this kind: the formation of a self-similar distribution of interacting fractures. Their formation starts with crack growth due to the action of stress fluctuations. It is shown that even when the fluctuations have zero average, the cracks they generate can grow far beyond the scale of the stress fluctuations. Further development of the fracture system is controlled by crack interaction, leading to the emergence of self-similar crack distributions. As a result, the medium with fractures becomes discontinuous at any scale. We develop a continuum fractal mechanics to model its physical behaviour. We introduce a continuous sequence of continua of increasing scales covering this range of scales. The continuum of each scale is specified by representative averaging volume elements of the corresponding size. These elements determine the resolution of the continuum. Each continuum hides the cracks of scales smaller than the volume element size, while larger fractures are modelled explicitly. Using the developed formalism, we investigate the stability of self-similar crack distributions with respect to crack growth and show that while the self-similar distribution of isotropically oriented cracks is stable, the distribution of parallel cracks is not. For the isotropically oriented cracks, the scaling of permeability is determined: for permeable materials (rocks) with self-similar crack distributions, permeability scales as the cube of the crack radius. This property could be used for detecting this specific mechanism of formation of self-similar crack distributions.

  8. Assessing intrinsic and specific vulnerability models ability to indicate groundwater vulnerability to groups of similar pesticides: A comparative study

    Science.gov (United States)

    Douglas, Steven; Dixon, Barnali; Griffin, Dale W.

    2018-01-01

    With continued population growth and increasing use of fresh groundwater resources, protection of this valuable resource is critical. A cost-effective means to assess the risk of groundwater contamination will provide a useful tool to protect these resources, and integrating geospatial methods offers a way to quantify this risk in cost-effective and spatially explicit ways. This research was designed to compare the ability of intrinsic (DRASTIC) and specific (Attenuation Factor; AF) vulnerability models to indicate groundwater vulnerability areas by comparing model results to the presence of pesticides in groundwater sample datasets. A logistic regression was used to assess the relationship between the environmental variables and the presence or absence of pesticides within regions of varying vulnerability. According to the DRASTIC model, more than 20% of the study area is very highly vulnerable; approximately 30% is very highly vulnerable according to the AF model. When groundwater concentrations of individual pesticides were compared to model predictions, the results were mixed, but model predictability improved when concentrations of a group of similar pesticides were compared to model results. Compared to the DRASTIC model, the AF model more accurately predicts the distribution of the number of contaminated wells within each vulnerability class.
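
    The logistic-regression step can be sketched with plain gradient descent: model the probability of pesticide detection as a function of a vulnerability score. The data and fitting procedure below are illustrative, not the study's.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Logistic regression by batch gradient descent (intercept + one slope)."""
    Xb = np.column_stack([np.ones(len(X)), X])   # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))        # predicted P(detected)
        w -= lr * Xb.T @ (p - y) / len(y)        # gradient of log-loss
    return w

def predict(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Hypothetical data: vulnerability class score vs. detection (1) / none (0)
scores = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 5], dtype=float)
detected = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1], dtype=float)
w = fit_logistic(scores, detected)
```

    A positive fitted slope would indicate that higher vulnerability classes are indeed associated with more pesticide detections, which is the relationship the study tested.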

  9. Prioritization of candidate disease genes by combining topological similarity and semantic similarity.

    Science.gov (United States)

    Liu, Bin; Jin, Min; Zeng, Pan

    2015-10-01

    The identification of gene-phenotype relationships is very important for the treatment of human diseases. Studies have shown that genes causing the same or similar phenotypes tend to interact with each other in a protein-protein interaction (PPI) network. Thus, many identification methods based on the PPI network model have achieved good results. However, in the PPI network, some interactions between the proteins encoded by candidate gene and the proteins encoded by known disease genes are very weak. Therefore, some studies have combined the PPI network with other genomic information and reported good predictive performances. However, we believe that the results could be further improved. In this paper, we propose a new method that uses the semantic similarity between the candidate gene and known disease genes to set the initial probability vector of a random walk with a restart algorithm in a human PPI network. The effectiveness of our method was demonstrated by leave-one-out cross-validation, and the experimental results indicated that our method outperformed other methods. Additionally, our method can predict new causative genes of multifactor diseases, including Parkinson's disease, breast cancer and obesity. The top predictions were good and consistent with the findings in the literature, which further illustrates the effectiveness of our method. Copyright © 2015 Elsevier Inc. All rights reserved.
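
    The core computation can be sketched as a random walk with restart (RWR) on a small adjacency matrix, where the initial probability vector is weighted by semantic similarity to known disease genes rather than spread uniformly. The toy network, restart probability and weights below are illustrative, not the paper's data.

```python
import numpy as np

def random_walk_restart(W, p0, restart=0.7, tol=1e-8, max_iter=1000):
    """Iterate p <- (1 - r) * P @ p + r * p0 on a column-normalized
    adjacency matrix until convergence; returns steady-state scores."""
    W = np.asarray(W, dtype=float)
    P = W / W.sum(axis=0, keepdims=True)       # column-stochastic transitions
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * P @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

# Toy 4-gene PPI network; gene 0 is a known disease gene, gene 1 is
# semantically similar to it, so both get initial probability mass.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
p0 = np.array([0.8, 0.2, 0.0, 0.0])           # semantic-similarity weighting
scores = random_walk_restart(W, p0)
```

    Candidate genes are then ranked by `scores`; seeding `p0` with semantic similarity is what lets weakly connected but functionally related candidates rank higher than under a uniform restart vector.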

  10. Similarity search of business process models

    NARCIS (Netherlands)

    Dumas, M.; García-Bañuelos, L.; Dijkman, R.M.

    2009-01-01

    Similarity search is a general class of problems in which a given object, called a query object, is compared against a collection of objects in order to retrieve those that most closely resemble the query object. This paper reviews recent work on an instance of this class of problems, where the

  11. Personalized recommendation with corrected similarity

    International Nuclear Information System (INIS)

    Zhu, Xuzhen; Tian, Hui; Cai, Shimin

    2014-01-01

    Personalized recommendation has attracted a surge of interdisciplinary research. In particular, similarity-based methods have achieved great success in real recommendation systems. However, similarities are often overestimated or underestimated, in particular because of the defective strategy of unidirectional similarity estimation. In this paper, we address this drawback by leveraging the mutual correction of forward and backward similarity estimations, and propose a new personalized recommendation index, corrected similarity based inference (CSI). Extensive experiments on four benchmark datasets show that CSI improves considerably on mainstream baselines, and a detailed analysis is presented to unveil and understand the origin of the difference between CSI and mainstream indices. (paper)
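
    The over/underestimation problem and its mutual correction can be shown on a toy example with set-based item similarity. This is a sketch of the correction idea (geometric mean of the two directional estimates), not the paper's exact CSI formula.

```python
import math

def forward_sim(a, b):
    """Directional similarity: overlap normalized by |a| only -- the kind of
    unidirectional estimate that over- or underestimates."""
    return len(a & b) / len(a)

def corrected_sim(a, b):
    """Mutual correction: geometric mean of forward and backward estimates."""
    return math.sqrt(forward_sim(a, b) * forward_sim(b, a))

popular = {1, 2, 3, 4, 5, 6, 7, 8}    # item chosen by many users
niche = {1, 2}                         # item chosen by few users
print(forward_sim(niche, popular))     # 1.0  -- overestimated from one side
print(forward_sim(popular, niche))     # 0.25 -- underestimated from the other
print(corrected_sim(niche, popular))   # 0.5  -- mutually corrected
```

    For binary choice sets the geometric-mean correction reduces to cosine similarity, which is why purely unidirectional indices behave so differently on popular versus niche items.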

  12. Re-engineering pre-employment check-up systems: a model for improving health services.

    Science.gov (United States)

    Rateb, Said Abdel Hakim; El Nouman, Azza Abdel Razek; Rateb, Moshira Abdel Hakim; Asar, Mohamed Naguib; El Amin, Ayman Mohammed; Gad, Saad abdel Aziz; Mohamed, Mohamed Salah Eldin

    2011-01-01

    The purpose of this paper is to develop a model for improving the health services provided by the pre-employment medical fitness check-up system affiliated to Egypt's Health Insurance Organization (HIO). Operations research, notably system re-engineering, is used in six randomly selected centers, and findings before and after re-engineering are compared. The re-engineering model follows a systems approach, focusing on three areas: structure, process and outcome. The model is based on six main components: electronic booking, standardized check-up processes, protected medical documents, advanced archiving through an electronic content management (ECM) system, infrastructure development, and capacity building. The model originates mainly from customer needs and expectations. The centers' monthly customer flow increased significantly after re-engineering. The mean time spent per customer cycle improved after re-engineering: 18.3 +/- 5.5 minutes, compared to 48.8 +/- 14.5 minutes before. Appointment delay also decreased significantly, from an average of 18 days to 6.2 days. Both beneficiaries and service providers were significantly more satisfied with the services after re-engineering. The model proves that re-engineering program costs are exceeded by the increased revenue. Re-engineering in this study involved multiple structure and process elements. The literature review did not reveal similar re-engineering healthcare packages; therefore, each element was compared separately. This model is highly recommended for improving service effectiveness and efficiency. This research is the first in Egypt to apply the re-engineering approach to public health systems. Developing user-friendly models for service improvement is an added value.

  13. Computational prediction of drug-drug interactions based on drugs functional similarities.

    Science.gov (United States)

    Ferdousi, Reza; Safdari, Reza; Omidi, Yadollah

    2017-06-01

    Therapeutic activities of drugs are often influenced by co-administration of drugs that may cause inevitable drug-drug interactions (DDIs) and inadvertent side effects. Prediction and identification of DDIs are vital for patient safety and the success of treatment modalities. A number of computational methods have been employed for the prediction of DDIs based on drug structures and/or functions. Here, we report on a computational method for DDI prediction based on the functional similarity of drugs. The model was built on key biological elements including carriers, transporters, enzymes and targets (CTET) and applied to 2189 approved drugs: for each drug, all associated CTETs were collected and the corresponding binary vectors constructed to determine the DDIs. Various similarity measures were tested for detecting DDIs; of these, the inner-product-based similarity measures (IPSMs) provided the best prediction values. Altogether, 2,394,766 potential drug pair interactions were studied, and the model was able to predict over 250,000 unknown potential DDIs. Based on our findings, we propose the current method as a robust, yet simple and fast, universal in silico approach for the identification of DDIs. We envision that the proposed method can be used as a practical technique for the detection of possible DDIs based on the functional similarities of drugs. Copyright © 2017. Published by Elsevier Inc.
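
    The binary CTET vectors and an inner-product-based similarity can be sketched directly. The feature names and drug annotations below are hypothetical examples, and the cosine normalization is just one member of the inner-product family the paper evaluated.

```python
def ctet_vector(drug_features, all_features):
    """Binary CTET vector: 1 where the drug is associated with a carrier,
    transporter, enzyme or target, 0 elsewhere."""
    return [1 if f in drug_features else 0 for f in all_features]

def inner_product_sim(u, v):
    """Inner-product-based similarity of two binary vectors (cosine form)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(u) * sum(v)) ** 0.5
    return dot / norm if norm else 0.0

# Hypothetical CTET annotations for three drugs
features = ["CYP3A4", "CYP2D6", "P-gp", "ALB", "OATP1B1"]
drug_a = ctet_vector({"CYP3A4", "P-gp"}, features)
drug_b = ctet_vector({"CYP3A4", "P-gp", "ALB"}, features)
drug_c = ctet_vector({"OATP1B1"}, features)
```

    Drug pairs whose similarity exceeds a learned threshold would then be flagged as candidate DDIs; here `drug_a` and `drug_b` share metabolic machinery and score high, while `drug_c` does not overlap at all.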

  14. A self-similar model for conduction in the plasma erosion opening switch

    International Nuclear Information System (INIS)

    Mosher, D.; Grossmann, J.M.; Ottinger, P.F.; Colombant, D.G.

    1987-01-01

    The conduction phase of the plasma erosion opening switch (PEOS) is characterized by combining a 1-D fluid model for plasma hydrodynamics, Maxwell's equations, and a 2-D electron-orbit analysis. A self-similar approximation for the plasma and field variables permits analytic expressions for their space and time variations to be derived. It is shown that a combination of axial MHD compression and magnetic insulation of high-energy electrons emitted from the switch cathode can control the character of switch conduction. The analysis highlights the need to include additional phenomena for accurate fluid modeling of PEOS conduction

  15. Self-similar solution for coupled thermal electromagnetic model ...

    African Journals Online (AJOL)

    An investigation into the existence and uniqueness of self-similar solutions for the coupled Maxwell and Pennes bio-heat equations has been carried out. Criteria for the existence and uniqueness of self-similar solutions are given in the consequent theorems. Journal of the Nigerian Association of Mathematical Physics ...

  16. Quasi-Similarity Model of Synthetic Jets

    Czech Academy of Sciences Publication Activity Database

    Tesař, Václav; Kordík, Jozef

    2009-01-01

    Roč. 149, č. 2 (2009), s. 255-265 ISSN 0924-4247 R&D Projects: GA AV ČR IAA200760705; GA ČR GA101/07/1499 Institutional research plan: CEZ:AV0Z20760514 Keywords : jets * synthetic jets * similarity solution Subject RIV: BK - Fluid Dynamics Impact factor: 1.674, year: 2009 http://www.sciencedirect.com

  17. Reranking candidate gene models with cross-species comparison for improved gene prediction

    Directory of Open Access Journals (Sweden)

    Pereira Fernando CN

    2008-10-01

    Full Text Available Abstract Background Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc.). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features. Results We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+. Conclusion Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models.
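
    The post-processing step can be sketched as a reranker that combines the gene finder's own score with a cross-species comparison score. The weighting scheme, field names and scores below are illustrative; the paper learns a global scoring rule rather than using a fixed linear mix.

```python
def rerank(candidates, ortholog_score, weight=0.5):
    """Re-sort K-best gene models by a weighted mix of the finder's score
    and a cross-species comparison score (illustrative linear combination)."""
    return sorted(candidates,
                  key=lambda m: (1 - weight) * m["finder_score"]
                              + weight * ortholog_score(m),
                  reverse=True)

# Hypothetical K-best list from a gene finder
models = [
    {"name": "m1", "finder_score": 0.90, "ortholog_identity": 0.40},
    {"name": "m2", "finder_score": 0.85, "ortholog_identity": 0.95},
]
best = rerank(models, lambda m: m["ortholog_identity"])[0]
# m2's strong similarity to a putative ortholog promotes it over m1
```

    This is the behavior the abstract describes: a model that scores slightly lower in the original finder wins once conservation evidence is factored in.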

  18. Stereotype content model across cultures: Towards universal similarities and some differences

    Science.gov (United States)

    Cuddy, Amy J. C.; Fiske, Susan T.; Kwan, Virginia S. Y.; Glick, Peter; Demoulin, Stéphanie; Leyens, Jacques-Philippe; Bond, Michael Harris; Croizet, Jean-Claude; Ellemers, Naomi; Sleebos, Ed; Htun, Tin Tin; Kim, Hyun-Jeong; Maio, Greg; Perry, Judi; Petkova, Kristina; Todorov, Valery; Rodríguez-Bailón, Rosa; Morales, Elena; Moya, Miguel; Palacios, Marisol; Smith, Vanessa; Perez, Rolando; Vala, Jorge; Ziegler, Rene

    2014-01-01

    The stereotype content model (SCM) proposes potentially universal principles of societal stereotypes and their relation to social structure. Here, the SCM reveals theoretically grounded, cross-cultural, cross-groups similarities and one difference across 10 non-US nations. Seven European (individualist) and three East Asian (collectivist) nations (N = 1,028) support three hypothesized cross-cultural similarities: (a) perceived warmth and competence reliably differentiate societal group stereotypes; (b) many out-groups receive ambivalent stereotypes (high on one dimension; low on the other); and (c) high status groups stereotypically are competent, whereas competitive groups stereotypically lack warmth. Data uncover one consequential cross-cultural difference: (d) the more collectivist cultures do not locate reference groups (in-groups and societal prototype groups) in the most positive cluster (high-competence/high-warmth), unlike individualist cultures. This demonstrates out-group derogation without obvious reference-group favouritism. The SCM can serve as a pancultural tool for predicting group stereotypes from structural relations with other groups in society, and comparing across societies. PMID:19178758

  19. Towards Modelling Variation in Music as Foundation for Similarity

    NARCIS (Netherlands)

    Volk, A.; de Haas, W.B.; van Kranenburg, P.; Cambouropoulos, E.; Tsougras, C.; Mavromatis, P.; Pastiadis, K.

    2012-01-01

    This paper investigates the concept of variation in music from the perspective of music similarity. Music similarity is a central concept in Music Information Retrieval (MIR); however, no comprehensive approach to music similarity exists yet. As a consequence, MIR faces the challenge of how to

  20. Conservation of connectivity of model-space effective interactions under a class of similarity transformation

    International Nuclear Information System (INIS)

    Duan Changkui; Gong Yungui; Dong Huining; Reid, Michael F.

    2004-01-01

    Effective interaction operators usually act on a restricted model space and give the same energies (for the Hamiltonian) and matrix elements (for transition operators, etc.) as those of the original operators between the corresponding true eigenstates. Various types of effective operators are possible. These well-defined effective operators have been shown to be related to each other by similarity transformations. Some of the effective operators have been shown to have connected-diagram expansions. It is shown in this paper that under a class of very general similarity transformations, the connectivity is conserved. The similarity transformation between Hermitian and non-Hermitian Rayleigh-Schroedinger perturbative effective operators is one such transformation, and hence the connectivity of one can be deduced from the other.

  1. Conservation of connectivity of model-space effective interactions under a class of similarity transformation.

    Science.gov (United States)

    Duan, Chang-Kui; Gong, Yungui; Dong, Hui-Ning; Reid, Michael F

    2004-09-15

    Effective interaction operators usually act on a restricted model space and give the same energies (for the Hamiltonian) and matrix elements (for transition operators, etc.) as those of the original operators between the corresponding true eigenstates. Various types of effective operators are possible. These well-defined effective operators have been shown to be related to each other by similarity transformations. Some of the effective operators have been shown to have connected-diagram expansions. It is shown in this paper that under a class of very general similarity transformations, the connectivity is conserved. The similarity transformation between Hermitian and non-Hermitian Rayleigh-Schrödinger perturbative effective operators is one such transformation, and hence the connectivity of one can be deduced from the other.

  2. Agile rediscovering values: Similarities to continuous improvement strategies

    Science.gov (United States)

    Díaz de Mera, P.; Arenas, J. M.; González, C.

    2012-04-01

    Research in the late 1980s on technological companies that develop products of high-value innovation, with sufficient speed and flexibility to adapt quickly to changing market conditions, gave rise to the new set of methodologies known as the Agile Management Approach. In the current changing economic scenario, we considered it very interesting to study the similarities of these Agile methodologies with other practices whose effectiveness has been amply demonstrated in both the West and Japan. Strategies such as Kaizen, Lean, World Class Manufacturing, Concurrent Engineering, etc., are analyzed to check the values they have in common with the Agile Approach.

  3. Applications of Analytical Self-Similar Solutions of Reynolds-Averaged Models for Instability-Induced Turbulent Mixing

    Science.gov (United States)

    Hartland, Tucker; Schilling, Oleg

    2017-11-01

    Analytical self-similar solutions to several families of single- and two-scale, eddy viscosity and Reynolds stress turbulence models are presented for Rayleigh-Taylor, Richtmyer-Meshkov, and Kelvin-Helmholtz instability-induced turbulent mixing. The use of algebraic relationships between model coefficients and physical observables (e.g., experimental growth rates) following from the self-similar solutions to calibrate a member of a given family of turbulence models is shown. It is demonstrated numerically that the algebraic relations accurately predict the value and variation of physical outputs of a Reynolds-averaged simulation in flow regimes that are consistent with the simplifying assumptions used to derive the solutions. The use of experimental and numerical simulation data on Reynolds stress anisotropy ratios to calibrate a Reynolds stress model is briefly illustrated. The implications of the analytical solutions for future Reynolds-averaged modeling of hydrodynamic instability-induced mixing are briefly discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  4. A grammar-based semantic similarity algorithm for natural language sentences.

    Science.gov (United States)

    Lee, Ming Che; Chang, Jia Wei; Hsieh, Tung Cheng

    2014-01-01

    This paper presents a grammar and semantic corpus based similarity algorithm for natural language sentences. Natural language, in opposition to "artificial language", such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even the ontology-based approaches that extend to include concept similarity comparison instead of cooccurrence terms/words, may not always determine the perfect matching when there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of corpus-based ontology and grammatical rules to overcome the addressed problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm has a significant performance improvement in sentences/short-texts with arbitrary syntax and structure.
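For contrast, the kind of vector-model baseline the abstract argues is insufficient can be sketched as a bag-of-words cosine similarity. This is a minimal illustration, not the paper's grammar- and corpus-based algorithm; note how it scores two sentences with reversed meaning as identical, which is exactly the weakness the paper targets:

```python
from collections import Counter
import math

def sentence_similarity(s1, s2):
    """Bag-of-words cosine similarity between two sentences.
    Word order and grammar are ignored entirely."""
    c1, c2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    shared = set(c1) & set(c2)
    num = sum(c1[t] * c2[t] for t in shared)
    den = (math.sqrt(sum(v * v for v in c1.values()))
           * math.sqrt(sum(v * v for v in c2.values())))
    return num / den if den else 0.0

# Reversed subject and object, yet the score is 1.0:
sim = sentence_similarity("a dog chased the cat", "the cat chased a dog")
```

Because the token multisets are identical, the cosine is maximal even though the sentences describe opposite events; a grammar-aware measure would distinguish them.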

  5. Exploring similarities among many species distributions

    Science.gov (United States)

    Simmerman, Scott; Wang, Jingyuan; Osborne, James; Shook, Kimberly; Huang, Jian; Godsoe, William; Simons, Theodore R.

    2012-01-01

    Collecting species presence data and then building models to predict species distribution has been long practiced in the field of ecology for the purpose of improving our understanding of species relationships with each other and with the environment. Due to limitations of computing power as well as limited means of using modeling software on HPC facilities, past species distribution studies have been unable to fully explore diverse data sets. We build a system that can, for the first time to our knowledge, leverage HPC to support effective exploration of species similarities in distribution as well as their dependencies on common environmental conditions. Our system can also compute and reveal uncertainties in the modeling results enabling domain experts to make informed judgments about the data. Our work was motivated by and centered around data collection efforts within the Great Smoky Mountains National Park that date back to the 1940s. Our findings present new research opportunities in ecology and produce actionable field-work items for biodiversity management personnel to include in their planning of daily management activities.

  6. Ethnic differences in the effects of media on body image: the effects of priming with ethnically different or similar models.

    Science.gov (United States)

    Bruns, Gina L; Carter, Michele M

    2015-04-01

    Media exposure has been positively correlated with body dissatisfaction. While body image concerns are common, being African American has been found to be a protective factor in the development of body dissatisfaction. Participants viewed one of four sets of ten advertisements showing 1) ethnically-similar thin models; 2) ethnically-different thin models; 3) ethnically-similar plus-sized models; or 4) ethnically-diverse plus-sized models. Following exposure, body image was measured. African American women had less body dissatisfaction than Caucasian women. Ethnically-similar thin-model conditions did not elicit greater body dissatisfaction scores than ethnically-different thin or plus-sized models, nor did the ethnicity of the model impact ratings of body dissatisfaction for women of either race. There were no differences among the African American women exposed to plus-sized versus thin models. Among Caucasian women, exposure to plus-sized models resulted in greater body dissatisfaction than exposure to thin models. Results support existing literature that African American women experience less body dissatisfaction than Caucasian women even following exposure to an ethnically-similar thin model. Additionally, women exposed to plus-sized model conditions experienced greater body dissatisfaction than those shown thin models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Improved TOPSIS decision model for NPP emergencies

    International Nuclear Information System (INIS)

    Zhang Jin; Liu Feng; Huang Lian

    2011-01-01

    In this paper, an improved decision model is developed for use as a tool for responding to emergencies at nuclear power plants. Given the complexity of multi-attribute emergency decision-making for nuclear accidents, the improved TOPSIS method is used to build a decision-making model that integrates the subjective weight and objective weight of each evaluation index. A comparison between the results of this new model and two traditional methods, the fuzzy hierarchy analysis method and the weighted analysis method, demonstrates that the improved TOPSIS model has a better evaluation effect. (authors)
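As a rough illustration of the TOPSIS procedure underlying such models, the following sketch ranks alternatives by their relative closeness to an ideal solution. The evaluation indices, scores, and weights are invented, and the paper's integration of subjective and objective weights is reduced here to a single pre-blended weight vector:

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with the classic TOPSIS procedure.

    decision_matrix: (n_alternatives, n_criteria) array of raw scores
    weights: criterion weights summing to 1 (here imagined as a blend
             of subjective and objective weights)
    benefit: boolean array, True where larger values are better
    """
    m = np.asarray(decision_matrix, dtype=float)
    # Vector-normalize each criterion column, then apply weights.
    v = m / np.linalg.norm(m, axis=0) * weights
    # Ideal and anti-ideal solutions per criterion.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    # Relative closeness: 1 = ideal, 0 = anti-ideal.
    return d_neg / (d_pos + d_neg)

# Hypothetical example: 3 response options scored on 3 evaluation indices.
scores = [[0.7, 0.5, 0.9],
          [0.6, 0.8, 0.4],
          [0.9, 0.3, 0.6]]
w = np.array([0.5, 0.3, 0.2])  # combined subjective/objective weights
closeness = topsis(scores, w, np.array([True, True, True]))
best = int(np.argmax(closeness))  # index of the preferred alternative
```

The option with the highest closeness coefficient is ranked first; the improved models discussed in the abstract differ mainly in how the weight vector is derived.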

  8. Technological Similarity, Post-acquisition R&D Reorganization, and Innovation Performance in Horizontal Acquisition

    DEFF Research Database (Denmark)

    Colombo, Massimo G.; Rabbiosi, Larissa

    2014-01-01

    This paper aims to disentangle the mechanisms through which technological similarity between acquiring and acquired firms influences innovation in horizontal acquisitions. We develop a theoretical model that links technological similarity to: (i) two key aspects of post-acquisition reorganization of acquired R&D operations – the rationalization of the R&D operations and the replacement of the R&D top manager, and (ii) two intermediate effects that are closely associated with the post-acquisition innovation performance of the combined firm – improvements in R&D productivity and disruptions in R&D personnel. We rely on PLS techniques to test our theoretical model using detailed information on 31 horizontal acquisitions in high- and medium-tech industries. Our results indicate that in horizontal acquisitions, technological similarity negatively affects post-acquisition innovation performance...

  9. PubMed-supported clinical term weighting approach for improving inter-patient similarity measure in diagnosis prediction.

    Science.gov (United States)

    Chan, Lawrence Wc; Liu, Ying; Chan, Tao; Law, Helen Kw; Wong, S C Cesar; Yeung, Andy Ph; Lo, K F; Yeung, S W; Kwok, K Y; Chan, William Yl; Lau, Thomas Yh; Shyu, Chi-Ren

    2015-06-02

    Similarity-based retrieval of Electronic Health Records (EHRs) from large clinical information systems provides physicians with evidence support for making diagnoses or referring examinations for suspected cases. Clinical terms in EHRs represent high-level conceptual information, and a similarity measure established on these terms reflects the chance of inter-patient disease co-occurrence. The assumption that clinical terms are equally relevant to a disease is unrealistic and reduces prediction accuracy. Here we propose a term weighting approach supported by the PubMed search engine to address this issue. We collected and studied 112 abdominal computed tomography imaging examination reports from four hospitals in Hong Kong. Clinical terms, which are the image findings related to hepatocellular carcinoma (HCC), were extracted from the reports. Through two systematic PubMed search methods, the generic and specific term weightings were established by estimating the conditional probabilities of clinical terms given HCC. Each report was characterized by an ontological feature vector, and there were 6216 vector pairs in total. We optimized the modified direction cosine (mDC) with respect to a regularization constant embedded in the feature vector. Equal, generic and specific term weighting approaches were applied to measure the similarity of each pair, and their performance in predicting inter-patient co-occurrence of HCC diagnoses was compared using Receiver Operating Characteristic (ROC) analysis. The areas under the curves (AUROCs) of similarity scores based on the equal, generic and specific term weighting approaches were 0.735, 0.728 and 0.743 respectively (p PubMed. Our findings suggest that the optimized similarity measure with specific term weighting of EHRs can significantly improve the accuracy of predicting the inter-patient co-occurrence of diagnoses when compared with equal and generic term weighting approaches.
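The effect of unequal term weights on an inter-patient similarity score can be illustrated with a generic weighted cosine. This is a simplification: the paper's actual measure is a modified direction cosine (mDC) with a regularization constant, and the weights below are hypothetical stand-ins for the conditional-probability estimates obtained from PubMed:

```python
import math

def weighted_cosine(u, v, w):
    """Weighted cosine similarity between two binary term vectors u and v.
    w holds per-term weights, e.g. estimates of P(term | disease);
    equal weighting is recovered by setting every w[i] to the same value."""
    num = sum(wi * ui * vi for ui, vi, wi in zip(u, v, w))
    du = math.sqrt(sum(wi * ui * ui for ui, wi in zip(u, w)))
    dv = math.sqrt(sum(wi * vi * vi for vi, wi in zip(v, w)))
    if du == 0.0 or dv == 0.0:
        return 0.0
    return num / (du * dv)

# Two patients sharing two of four image-finding terms; weights invented.
u = [1, 0, 1, 1]
v = [1, 1, 1, 0]
weights = [0.9, 0.2, 0.7, 0.4]  # hypothetical P(term | HCC) estimates
sim = weighted_cosine(u, v, weights)
```

Because the shared terms here carry high weights, the pair scores higher than it would under equal weighting of the disagreeing terms; that is the intuition behind the specific-term-weighting gain the abstract reports.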

  10. Cosmological model with anisotropic dark energy and self-similarity of the second kind

    International Nuclear Information System (INIS)

    Brandt, Carlos F. Charret; Silva, Maria de Fatima A. da; Rocha, Jaime F. Villas da; Chan, Roberto

    2006-01-01

    We study the evolution of an anisotropic fluid with self-similarity of the second kind. We found a class of solutions to the Einstein field equations by assuming an equation of state where the radial pressure of the fluid is proportional to its energy density (p_r = ωρ) and that the fluid moves along time-like geodesics. The equation of state and the anisotropy with self-similarity of the second kind imply ω = -1. The energy conditions and the geometrical and physical properties of the solutions are studied. We have found that for the parameter α = -1/2, it may represent a Big Rip cosmological model. (author)

  11. An improved interfacial bonding model for material interface modeling

    Science.gov (United States)

    Lin, Liqiang; Wang, Xiaodu; Zeng, Xiaowei

    2016-01-01

    An improved interfacial bonding model was proposed from a potential-function point of view to investigate interfacial interactions in polycrystalline materials. It characterizes both attractive and repulsive interfacial interactions and can be applied to model different material interfaces. The path-dependence study of the work of separation indicates that the transformation of separation work is smooth in the normal and tangential directions, and the proposed model guarantees the consistency of the cohesive constitutive model. The improved interfacial bonding model was verified through a simple compression test in a standard hexagonal structure. The error between analytical solutions and numerical results from the proposed model is reasonable in the linear elastic region. Ultimately, we investigated the mechanical behavior of the extrafibrillar matrix in bone, and the simulation results agreed well with experimental observations of bone fracture. PMID:28584343

  12. Gravity model improvement investigation. [improved gravity model for determination of ocean geoid

    Science.gov (United States)

    Siry, J. W.; Kahn, W. D.; Bryan, J. W.; Vonbun, F. F.

    1973-01-01

    This investigation was undertaken to improve the gravity model and hence the ocean geoid. A specific objective is the determination of the gravity field and geoid with a space resolution of approximately 5 deg and a height resolution of the order of five meters. The concept of the investigation is to utilize both GEOS-C altimeter and satellite-to-satellite tracking data to achieve the gravity model improvement. It is also planned to determine the geoid in selected regions with a space resolution of about a degree and a height resolution of the order of a meter or two. The short term objectives include the study of the gravity field in the GEOS-C calibration area outlined by Goddard, Bermuda, Antigua, and Cape Kennedy, and also in the eastern Pacific area which is viewed by ATS-F.

  13. Biodiversity and Climate Modeling Workshop Series: Identifying gaps and needs for improving large-scale biodiversity models

    Science.gov (United States)

    Weiskopf, S. R.; Myers, B.; Beard, T. D.; Jackson, S. T.; Tittensor, D.; Harfoot, M.; Senay, G. B.

    2017-12-01

    At the global scale, well-accepted global circulation models and agreed-upon scenarios for future climate from the Intergovernmental Panel on Climate Change (IPCC) are available. In contrast, biodiversity modeling at the global scale lacks analogous tools. While there is great interest in development of similar bodies and efforts for international monitoring and modelling of biodiversity at the global scale, equivalent modelling tools are in their infancy. This lack of global biodiversity models compared to the extensive array of general circulation models provides a unique opportunity to bring together climate, ecosystem, and biodiversity modeling experts to promote development of integrated approaches in modeling global biodiversity. Improved models are needed to understand how we are progressing towards the Aichi Biodiversity Targets, many of which are not on track to meet the 2020 goal, threatening global biodiversity conservation, monitoring, and sustainable use. We brought together biodiversity, climate, and remote sensing experts to try to 1) identify lessons learned from the climate community that can be used to improve global biodiversity models; 2) explore how NASA and other remote sensing products could be better integrated into global biodiversity models and 3) advance global biodiversity modeling, prediction, and forecasting to inform the Aichi Biodiversity Targets, the 2030 Sustainable Development Goals, and the Intergovernmental Platform on Biodiversity and Ecosystem Services Global Assessment of Biodiversity and Ecosystem Services. The 1st In-Person meeting focused on determining a roadmap for effective assessment of biodiversity model projections and forecasts by 2030 while integrating and assimilating remote sensing data and applying lessons learned, when appropriate, from climate modeling. Here, we present the outcomes and lessons learned from our first E-discussion and in-person meeting and discuss the next steps for future meetings.

  14. A Deep Similarity Metric Learning Model for Matching Text Chunks to Spatial Entities

    Science.gov (United States)

    Ma, K.; Wu, L.; Tao, L.; Li, W.; Xie, Z.

    2017-12-01

    The matching of spatial entities with related text is a long-standing research topic that has received considerable attention over the years. This task aims to enrich the content of spatial entities and to attach spatial location information to text chunks. In the field of data fusion, matching spatial entities with their corresponding describing text chunks is of broad significance. However, traditional matching methods often rely entirely on manually designed, task-specific linguistic features. This work proposes a Deep Similarity Metric Learning Model (DSMLM) based on a Siamese Neural Network that learns a similarity metric directly from the textual attributes of spatial entities and text chunks. The low-dimensional feature representations of the spatial entity and the text chunk can be learned separately. By employing the cosine distance to measure the matching degree between the vectors, the model makes matching pair vectors as close as possible while, through supervised learning, pushing mismatching pairs as far apart as possible. In addition, extensive experiments and analysis on geological survey data sets show that our DSMLM can effectively capture the matching characteristics between text chunks and spatial entities, and achieves state-of-the-art performance.
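A schematic of the kind of objective such a Siamese model minimizes, assuming cosine similarity between the two learned embeddings; the margin value and the toy vectors are illustrative, not taken from the paper:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def contrastive_loss(u, v, match, margin=0.5):
    """Contrastive objective: pull matching (spatial-entity, text-chunk)
    embedding pairs together; push mismatches apart until their cosine
    similarity drops below the margin. The margin here is hypothetical."""
    s = cosine(u, v)
    if match:
        return 1.0 - s               # matching pair: drive s toward 1
    return max(0.0, s - margin)      # mismatch: penalize only if s > margin

# Toy embeddings of a spatial entity and a describing text chunk.
entity_vec = np.array([0.9, 0.1, 0.3])
chunk_vec = np.array([0.8, 0.2, 0.4])
loss_match = contrastive_loss(entity_vec, chunk_vec, match=True)
```

During training, both branches of the Siamese network share weights, so minimizing this loss over labeled pairs shapes a single embedding space in which matching degree is read off directly from the cosine.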

  15. Self-similar factor approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.; Sornette, D.

    2003-01-01

    The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the obtained earlier self-similar exponential approximants and self-similar root approximants. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Padé approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which include a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Padé approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties.
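As a sketch of the construction described (following the general form reported in the self-similar approximation literature), a factor approximant built from a truncated expansion with unit leading coefficient can be written as:

```latex
% Given a truncated expansion f(x) \simeq \sum_{n=0}^{k} a_n x^n with a_0 = 1,
% the self-similar factor approximant takes the product form
\[
  f_k^*(x) \;=\; \prod_{i=1}^{N_k} \left( 1 + A_i x \right)^{n_i},
\]
% where the control parameters A_i and n_i are fixed by the
% accuracy-through-order conditions, i.e. by re-expanding f_k^*(x)
% in powers of x and matching the coefficients a_1, \dots, a_k.
```

A single factor with integer exponent reduces to a polynomial, and suitable limits recover exponential forms, which is how the Padé and exponential approximants arise as special cases.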

  16. Personalized mortality prediction driven by electronic medical data and a patient similarity metric.

    Science.gov (United States)

    Lee, Joon; Maslove, David M; Dubin, Joel A

    2015-01-01

    Clinical outcome prediction normally employs static, one-size-fits-all models that perform well for the average patient but are sub-optimal for individual patients with unique characteristics. In the era of digital healthcare, it is feasible to dynamically personalize decision support by identifying and analyzing similar past patients, in a way that is analogous to personalized product recommendation in e-commerce. Our objectives were: 1) to prove that analyzing only similar patients leads to better outcome prediction performance than analyzing all available patients, and 2) to characterize the trade-off between training data size and the degree of similarity between the training data and the index patient for whom prediction is to be made. We deployed a cosine-similarity-based patient similarity metric (PSM) to an intensive care unit (ICU) database to identify patients that are most similar to each patient and subsequently to custom-build 30-day mortality prediction models. Rich clinical and administrative data from the first day in the ICU from 17,152 adult ICU admissions were analyzed. The results confirmed that using data from only a small subset of most similar patients for training improves predictive performance in comparison with using data from all available patients. The results also showed that when too few similar patients are used for training, predictive performance degrades due to the effects of small sample sizes. Our PSM-based approach outperformed well-known ICU severity of illness scores. Although the improved prediction performance is achieved at the cost of increased computational burden, Big Data technologies can help realize personalized data-driven decision support at the point of care. The present study provides crucial empirical evidence for the promising potential of personalized data-driven decision support systems. With the increasing adoption of electronic medical record (EMR) systems, our novel medical data analytics contributes to
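The core of such a patient-similarity-driven prediction can be sketched as follows, assuming cosine similarity over numeric feature vectors and a simple outcome average over the k most similar past patients. The paper custom-builds full prediction models per index patient; the cohort, features, and labels below are synthetic:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two patient feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def psm_predict(index_patient, patients, outcomes, k):
    """Estimate 30-day mortality risk for an index patient as the
    mortality rate among its k most cosine-similar past patients."""
    sims = np.array([cosine_sim(index_patient, p) for p in patients])
    nearest = np.argsort(sims)[-k:]           # indices of the k most similar
    return float(np.mean(outcomes[nearest]))  # fraction of those who died

# Synthetic cohort: 100 patients, 5 features, invented mortality labels.
rng = np.random.default_rng(0)
cohort = rng.random((100, 5))
died = (cohort[:, 0] > 0.7).astype(float)
risk = psm_predict(cohort[0], cohort[1:], died[1:], k=20)
```

Varying k exposes the trade-off the abstract describes: small k means highly similar but few training patients (noisy estimates), while large k dilutes similarity toward the population average.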

  17. Personalized mortality prediction driven by electronic medical data and a patient similarity metric.

    Directory of Open Access Journals (Sweden)

    Joon Lee

    Full Text Available Clinical outcome prediction normally employs static, one-size-fits-all models that perform well for the average patient but are sub-optimal for individual patients with unique characteristics. In the era of digital healthcare, it is feasible to dynamically personalize decision support by identifying and analyzing similar past patients, in a way that is analogous to personalized product recommendation in e-commerce. Our objectives were: 1) to prove that analyzing only similar patients leads to better outcome prediction performance than analyzing all available patients, and 2) to characterize the trade-off between training data size and the degree of similarity between the training data and the index patient for whom prediction is to be made. We deployed a cosine-similarity-based patient similarity metric (PSM) to an intensive care unit (ICU) database to identify patients that are most similar to each patient and subsequently to custom-build 30-day mortality prediction models. Rich clinical and administrative data from the first day in the ICU from 17,152 adult ICU admissions were analyzed. The results confirmed that using data from only a small subset of most similar patients for training improves predictive performance in comparison with using data from all available patients. The results also showed that when too few similar patients are used for training, predictive performance degrades due to the effects of small sample sizes. Our PSM-based approach outperformed well-known ICU severity of illness scores. Although the improved prediction performance is achieved at the cost of increased computational burden, Big Data technologies can help realize personalized data-driven decision support at the point of care. The present study provides crucial empirical evidence for the promising potential of personalized data-driven decision support systems. With the increasing adoption of electronic medical record (EMR) systems, our novel medical data analytics

  18. Similar Symmetries: The Role of Wallpaper Groups in Perceptual Texture Similarity

    Directory of Open Access Journals (Sweden)

    Fraser Halley

    2011-05-01

    Full Text Available Periodic patterns and symmetries are striking visual properties that have been used decoratively around the world throughout human history. Periodic patterns can be mathematically classified into one of 17 different Wallpaper groups, and while computational models have been developed which can extract an image's symmetry group, very little work has been done on how humans perceive these patterns. This study presents the results from a grouping experiment using stimuli from the different wallpaper groups. We find that while different images from the same wallpaper group are perceived as similar to one another, not all groups have the same degree of self-similarity. The similarity relationships between wallpaper groups appear to be dominated by rotations.

  19. A Model-Based Approach to Constructing Music Similarity Functions

    Directory of Open Access Journals (Sweden)

    Lamere Paul

    2007-01-01

    Full Text Available Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.

  20. A Model-Based Approach to Constructing Music Similarity Functions

    Science.gov (United States)

    West, Kris; Lamere, Paul

    2006-12-01

    Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.

  1. The baryonic self similarity of dark matter

    International Nuclear Information System (INIS)

    Alard, C.

    2014-01-01

    Cosmological simulations indicate that dark matter halos have specific self-similar properties. However, the halo similarity is affected by baryonic feedback. By using momentum-driven winds as a model to represent the baryonic feedback, an equilibrium condition is derived which directly implies the emergence of a new type of similarity. The new self-similar solution has constant acceleration at a reference radius for both dark matter and baryons. This model receives strong support from observations of galaxies. The new self-similar properties imply that the total acceleration at larger distances is scale-free, the transition between the dark-matter-dominated and baryon-dominated regimes occurs at a constant acceleration, and the maximum amplitude of the velocity curve at larger distances is proportional to M^(1/4). These results demonstrate that this self-similar model is consistent with the basics of modified Newtonian dynamics (MOND) phenomenology. In agreement with the observations, the coincidence between the self-similar model and MOND breaks down at the scale of clusters of galaxies. Some numerical experiments show that the behavior of the density near the origin is closely approximated by an Einasto profile.

  2. A Grammar-Based Semantic Similarity Algorithm for Natural Language Sentences

    Science.gov (United States)

    Chang, Jia Wei; Hsieh, Tung Cheng

    2014-01-01

This paper presents a grammar- and semantic-corpus-based similarity algorithm for natural language sentences. Natural language, as opposed to an "artificial language" such as a computer programming language, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even ontology-based approaches that extend to concept similarity comparison instead of co-occurrence of terms/words, may fail to find a good match when there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of a corpus-based ontology and grammatical rules to overcome these problems. Experiments on two well-known benchmarks demonstrate that the proposed algorithm yields a significant performance improvement on sentences/short texts with arbitrary syntax and structure. PMID:24982952

  3. A Grammar-Based Semantic Similarity Algorithm for Natural Language Sentences

    Directory of Open Access Journals (Sweden)

    Ming Che Lee

    2014-01-01

Full Text Available This paper presents a grammar- and semantic-corpus-based similarity algorithm for natural language sentences. Natural language, as opposed to an "artificial language" such as a computer programming language, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even ontology-based approaches that extend to concept similarity comparison instead of co-occurrence of terms/words, may fail to find a good match when there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of a corpus-based ontology and grammatical rules to overcome these problems. Experiments on two well-known benchmarks demonstrate that the proposed algorithm yields a significant performance improvement on sentences/short texts with arbitrary syntax and structure.

  4. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono

    2011-04-07

    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  5. Scaling and interaction of self-similar modes in models of high Reynolds number wall turbulence.

    Science.gov (United States)

    Sharma, A S; Moarref, R; McKeon, B J

    2017-03-13

Previous work has established the usefulness of the resolvent operator that maps the terms nonlinear in the turbulent fluctuations to the fluctuations themselves. Further work has described the self-similarity of the resolvent arising from that of the mean velocity profile. The orthogonal modes provided by the resolvent analysis describe the wall-normal coherence of the motions and inherit that self-similarity. In this contribution, we present the implications of this similarity for the nonlinear interaction between modes with different scales and wall-normal locations. By considering the nonlinear interactions between modes, it is shown that much of the turbulence scaling behaviour in the logarithmic region can be determined from a single arbitrarily chosen reference plane. Thus, the geometric scaling of the modes is impressed upon the nonlinear interaction between modes. Implications of these observations on the self-sustaining mechanisms of wall turbulence, modelling and simulation are outlined. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  6. Consequences of team charter quality: Teamwork mental model similarity and team viability in engineering design student teams

    Science.gov (United States)

    Conway Hughston, Veronica

    Since 1996 ABET has mandated that undergraduate engineering degree granting institutions focus on learning outcomes such as professional skills (i.e. solving unstructured problems and working in teams). As a result, engineering curricula were restructured to include team based learning---including team charters. Team charters were diffused into engineering education as one of many instructional activities to meet the ABET accreditation mandates. However, the implementation and execution of team charters into engineering team based classes has been inconsistent and accepted without empirical evidence of the consequences. The purpose of the current study was to investigate team effectiveness, operationalized as team viability, as an outcome of team charter implementation in an undergraduate engineering team based design course. Two research questions were the focus of the study: a) What is the relationship between team charter quality and viability in engineering student teams, and b) What is the relationship among team charter quality, teamwork mental model similarity, and viability in engineering student teams? Thirty-eight intact teams, 23 treatment and 15 comparison, participated in the investigation. Treatment teams attended a team charter lecture, and completed a team charter homework assignment. Each team charter was assessed and assigned a quality score. Comparison teams did not join the lecture, and were not asked to create a team charter. All teams completed each data collection phase: a) similarity rating pretest; b) similarity posttest; and c) team viability survey. Findings indicate that team viability was higher in teams that attended the lecture and completed the charter assignment. Teams with higher quality team charter scores reported higher levels of team viability than teams with lower quality charter scores. 
Lastly, no evidence was found to support teamwork mental model similarity as a partial mediator of the effect of team charter quality on team viability.

  7. An improved gravity model for Mars: Goddard Mars Model 1

    Science.gov (United States)

    Smith, D. E.; Lerch, F. J.; Nerem, R. S.; Zuber, M. T.; Patel, G. B.; Fricke, S. K.; Lemoine, F. G.

    1993-01-01

Doppler tracking data of three orbiting spacecraft have been reanalyzed to develop a new gravitational field model for the planet Mars, Goddard Mars Model 1 (GMM-1). This model employs nearly all available data, consisting of approximately 1100 days of S band tracking data collected by NASA's Deep Space Network from the Mariner 9, Viking 1, and Viking 2 spacecraft, in seven different orbits, between 1971 and 1979. GMM-1 is complete to spherical harmonic degree and order 50, which corresponds to a half-wavelength spatial resolution of 200-300 km where the data permit. GMM-1 represents satellite orbits with considerably better accuracy than previous Mars gravity models and shows greater resolution of identifiable geological structures. The notable improvement of GMM-1 over previous models is a consequence of several factors: improved computational capabilities; the use of optimum weighting and least-squares collocation solution techniques, which stabilized the behavior of the solution at high degree and order; and the use of longer satellite arcs than employed in previous solutions, made possible by improved force and measurement models. The inclusion of X band tracking data from the 379-km altitude, near-polar orbiting Mars Observer spacecraft should provide a significant improvement over GMM-1, particularly at high latitudes, where current data poorly resolve the gravitational signature of the planet.
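The quoted 200-300 km resolution is consistent with the usual rule of thumb that a field expanded to spherical harmonic degree l resolves half-wavelengths of about πR/l. A quick check, assuming a nominal Mars radius of 3390 km (a value not given in the record):

```python
import math

def half_wavelength_km(radius_km, degree):
    """Half-wavelength resolution of a spherical harmonic expansion to a given degree."""
    return math.pi * radius_km / degree

R_MARS_KM = 3390.0  # nominal mean radius of Mars (assumed, not from the record)
print(round(half_wavelength_km(R_MARS_KM, 50)))  # → 213
```

This falls within the 200-300 km range quoted for GMM-1 at degree and order 50.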

  8. Centrifugal fans: Similarity, scaling laws, and fan performance

    Science.gov (United States)

    Sardar, Asad Mohammad

Centrifugal fans are rotodynamic machines used for moving air continuously against moderate pressures through ventilation and air conditioning systems. Five major topics are presented in this thesis: (1) analysis of the fan scaling laws and the consequences of dynamic similarity for modelling; (2) detailed flow visualization studies (in water) covering the flow path from the fan blade exit to the evaporator core of an actual HVAC fan scroll-diffuser module; (3) mean velocity and turbulence intensity measurements (flow field studies) at the inlet and outlet of a large-scale blower; (4) fan installation effects on overall fan performance and evaluation of fan testing methods; (5) two-point coherence and spectral measurements conducted on an actual HVAC fan module for identification of flow structures that are possible aeroacoustic noise sources. A major objective of the study was to identify flow structures within the HVAC module that are responsible for noise, and in particular "rumble noise", generation. Possible mechanisms for the generation of flow-induced noise in the automotive HVAC fan module are also investigated. It is demonstrated that different modes of HVAC operation exhibit very different internal flow characteristics, with implications for both fan HVAC airflow performance and noise characteristics. It is demonstrated from the principles of complete dynamic similarity that the fan scaling laws require Reynolds number matching as a necessary condition for developing scale-model fans or fan test facilities. The physical basis for the derived fan scaling laws was established both from pure dimensional analysis and from the fundamental equations of fluid motion. Fan performance was measured in air in a three-times-scale model (large-scale blower) of an actual forward-curved automotive HVAC blower. Different fan testing methods (based on AMCA fan test codes) were compared on the basis of static pressure measurements.
Also, the flow through an actual HVAC

  9. Individual differences in emotion processing: how similar are diffusion model parameters across tasks?

    Science.gov (United States)

    Mueller, Christina J; White, Corey N; Kuchinke, Lars

    2017-11-27

The goal of this study was to replicate findings of diffusion model parameters capturing emotion effects in a lexical decision task and to investigate whether these findings extend to other tasks of implicit emotion processing. Additionally, we were interested in the stability of diffusion model parameters across emotional stimuli and tasks for individual subjects. Responses to words in a lexical decision task were compared with responses to faces in a gender categorization task for stimuli of the emotion categories happy, neutral and fear. Main effects of emotion, as well as the stability of emerging response-style patterns evident in diffusion model parameters across these tasks, were analyzed. Based on earlier findings, drift rates were assumed to be more similar in response to stimuli of the same emotion category than to stimuli of a different emotion category. Results showed that the emotion effects of the tasks differed, with a processing advantage for happy followed by neutral and fear-related words in the lexical decision task and a processing advantage for neutral followed by happy and fearful faces in the gender categorization task. Both emotion effects were captured in the estimated drift rate parameters, and, in the case of the lexical decision task, also in the non-decision time parameters. A principal component analysis showed that, contrary to our hypothesis, drift rates were more similar within a specific task context than within a specific emotion category. Individual response patterns of subjects across tasks were evident in significant correlations of diffusion model parameters, including response styles, non-decision times and information accumulation.

  10. Research on Kalman Filtering Algorithm for Deformation Information Series of Similar Single-Difference Model

    Institute of Scientific and Technical Information of China (English)

LÜ Wei-cai; XU Shao-quan

    2004-01-01

Using the similar single-difference methodology (SSDM) to solve for the deformation values of the monitoring points, the deformation information series is sometimes unstable. To overcome this shortcoming, a Kalman filtering algorithm for this series is established, and its correctness and validity are verified with test data obtained on a movable platform in the plane. The results show that Kalman filtering can improve the correctness, reliability and stability of the deformation information series.
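As a minimal sketch of how a Kalman filter can stabilize a noisy series like the deformation information series above, the code below runs a scalar random-walk filter over synthetic measurements; the noise variances and readings are invented for illustration, not taken from the paper.

```python
def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """Scalar Kalman filter: random-walk state model, noisy direct observations.

    q: process-noise variance, r: measurement-noise variance (assumed values).
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q               # predict: variance grows by the process noise
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update the state with the innovation
        p *= (1.0 - k)       # update the variance
        estimates.append(x)
    return estimates

# Synthetic deformation readings (mm) fluctuating around a true value of 5.0.
series = [5.2, 4.9, 5.1, 4.8, 5.3, 5.0, 4.95, 5.05]
smoothed = kalman_1d(series)
```

The filtered series fluctuates less than the raw measurements, which is the stabilizing effect the abstract describes.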

  11. Genome-Wide Expression Profiling of Five Mouse Models Identifies Similarities and Differences with Human Psoriasis

    Science.gov (United States)

    Swindell, William R.; Johnston, Andrew; Carbajal, Steve; Han, Gangwen; Wohn, Christian; Lu, Jun; Xing, Xianying; Nair, Rajan P.; Voorhees, John J.; Elder, James T.; Wang, Xiao-Jing; Sano, Shigetoshi; Prens, Errol P.; DiGiovanni, John; Pittelkow, Mark R.; Ward, Nicole L.; Gudjonsson, Johann E.

    2011-01-01

    Development of a suitable mouse model would facilitate the investigation of pathomechanisms underlying human psoriasis and would also assist in development of therapeutic treatments. However, while many psoriasis mouse models have been proposed, no single model recapitulates all features of the human disease, and standardized validation criteria for psoriasis mouse models have not been widely applied. In this study, whole-genome transcriptional profiling is used to compare gene expression patterns manifested by human psoriatic skin lesions with those that occur in five psoriasis mouse models (K5-Tie2, imiquimod, K14-AREG, K5-Stat3C and K5-TGFbeta1). While the cutaneous gene expression profiles associated with each mouse phenotype exhibited statistically significant similarity to the expression profile of psoriasis in humans, each model displayed distinctive sets of similarities and differences in comparison to human psoriasis. For all five models, correspondence to the human disease was strong with respect to genes involved in epidermal development and keratinization. Immune and inflammation-associated gene expression, in contrast, was more variable between models as compared to the human disease. These findings support the value of all five models as research tools, each with identifiable areas of convergence to and divergence from the human disease. Additionally, the approach used in this paper provides an objective and quantitative method for evaluation of proposed mouse models of psoriasis, which can be strategically applied in future studies to score strengths of mouse phenotypes relative to specific aspects of human psoriasis. PMID:21483750

  12. Hybrid modeling approach to improve the forecasting capability for the gaseous radionuclide in a nuclear site

    International Nuclear Information System (INIS)

    Jeong, Hyojoon; Hwang, Wontae; Kim, Eunhan; Han, Moonhee

    2012-01-01

Highlights: ► This study aims to improve the reliability of air dispersion modeling. ► Tracer experiments simulating gaseous radionuclides were conducted at a nuclear site. ► The performance of a hybrid model combining ISC with ANFIS was investigated. ► The hybrid modeling approach performs better than a single ISC model. - Abstract: Predicted air concentrations of radioactive materials are important for an environmental impact assessment of public health. In this study, the performance of a hybrid model combining the industrial source complex (ISC) model and an adaptive neuro-fuzzy inference system (ANFIS) for predicting tracer concentrations was investigated. Tracer dispersion experiments were performed to produce field data simulating an accidental release of radioactive material. ANFIS was trained so that the outputs of the ISC model approximate the measured data. Judging from the higher correlation coefficients between the measured and calculated values, the hybrid modeling approach could be an appropriate technique for improving the capability of models to predict air concentrations of radioactive materials.

  13. Improvement of MARS code reflood model

    International Nuclear Information System (INIS)

    Hwang, Moonkyu; Chung, Bub-Dong

    2011-01-01

A specifically designed heat transfer model for the reflood process, which normally occurs at low flow and low pressure, was originally incorporated in the MARS code. The model is essentially identical to that of the RELAP5/MOD3.3 code. The model, however, is known to have underestimated the peak cladding temperature (PCT) with an earlier turn-over. In this study, the original MARS code reflood model is improved. Based on extensive sensitivity studies for both the hydraulic and wall heat transfer models, it is found that the dispersed flow film boiling (DFFB) wall heat transfer is the most influential process determining the PCT, whereas the interfacial drag model most affects the quenching time through the liquid carryover phenomenon. The model proposed by Bajorek and Young is incorporated for the DFFB wall heat transfer. Both space grid and droplet enhancement models are incorporated. Inverted annular film boiling (IAFB) is modeled using the original PSI model of the code. The flow transition between the DFFB and IAFB regimes is modeled using the TRACE code interpolation. A gas velocity threshold is also added to limit the top-down quenching effect. Assessment calculations are performed for the original and modified MARS codes for the Flecht-Seaset test and RBHT test. Improvements are observed in terms of the PCT and quenching time predictions in the Flecht-Seaset assessment. In the case of the RBHT assessment, the improvement over the original MARS code is found to be marginal. A space grid effect, however, is clearly seen in the modified version of the MARS code. (author)

  14. Self-similar formation of the Kolmogorov spectrum in the Leith model of turbulence

    International Nuclear Information System (INIS)

    Nazarenko, S V; Grebenev, V N

    2017-01-01

    The last stage of evolution toward the stationary Kolmogorov spectrum of hydrodynamic turbulence is studied using the Leith model [1]. This evolution is shown to manifest itself as a reflection wave in the wavenumber space propagating from the largest toward the smallest wavenumbers, and is described by a self-similar solution of a new (third) kind. This stage follows the previously studied stage of an initial explosive propagation of the spectral front from the smallest to the largest wavenumbers reaching arbitrarily large wavenumbers in a finite time, and which was described by a self-similar solution of the second kind [2–4]. Nonstationary solutions corresponding to ‘warm cascades’ characterised by a thermalised spectrum at large wavenumbers are also obtained. (paper)
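The Leith model referred to above is a nonlinear diffusion approximation for the energy spectrum E(k, t) in wavenumber space; a commonly used form (the prefactor convention varies between papers and should be checked against the cited reference) is:

```latex
\frac{\partial E}{\partial t}
  = \frac{1}{8}\,\frac{\partial}{\partial k}
    \left[ k^{11/2}\sqrt{E}\,\frac{\partial}{\partial k}\!\left(\frac{E}{k^{2}}\right) \right]
```

The Kolmogorov spectrum E ∝ k^(-5/3) makes the bracketed flux independent of k and is therefore a stationary solution; the thermodynamic spectrum E ∝ k^2 gives zero flux, and combinations of the two underlie the 'warm cascade' states mentioned in the abstract.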

  15. Block generators for the similarity renormalization group

    Energy Technology Data Exchange (ETDEWEB)

Huether, Thomas; Roth, Robert [TU Darmstadt (Germany)]

    2016-07-01

    The Similarity Renormalization Group (SRG) is a powerful tool to improve convergence behavior of many-body calculations using NN and 3N interactions from chiral effective field theory. The SRG method decouples high and low-energy physics, through a continuous unitary transformation implemented via a flow equation approach. The flow is determined by a generator of choice. This generator governs the decoupling pattern and, thus, the improvement of convergence, but it also induces many-body interactions. Through the design of the generator we can optimize the balance between convergence and induced forces. We explore a new class of block generators that restrict the decoupling to the high-energy sector and leave the diagonalization in the low-energy sector to the many-body method. In this way one expects a suppression of induced forces. We analyze the induced many-body forces and the convergence behavior in light and medium-mass nuclei in No-Core Shell Model and In-Medium SRG calculations.

  16. Modeling soil water content for vegetation modeling improvement

    Science.gov (United States)

    Cianfrani, Carmen; Buri, Aline; Zingg, Barbara; Vittoz, Pascal; Verrecchia, Eric; Guisan, Antoine

    2016-04-01

Soil water content (SWC) is known to be important for plants, as it affects the physiological processes regulating plant growth. SWC therefore controls plant distribution over the Earth's surface, from deserts and grassland to rain forests. Unfortunately, few SWC data are available, as its measurement is very time consuming and costly and requires specific laboratory tools. The scarcity of SWC measurements in geographic space makes it difficult to model and spatially project SWC over larger areas. In particular, it prevents the inclusion of SWC as a predictor in plant species distribution models (SDMs). The aims of this study were, first, to test a new methodology that overcomes the scarcity of SWC measurements and, second, to model and spatially project SWC in order to improve plant SDMs through the inclusion of SWC as a parameter. The study was developed in four steps. First, SWC was modeled by measuring it at 10 different pressures (expressed as pF and ranging from pF=0 to pF=4.2). The different pF values represent different degrees of soil water availability for plants. An ensemble of bivariate models was built to overcome the problem of having only a few SWC measurements (n = 24) but several candidate predictors. Soil texture (clay, silt, sand), organic matter (OM), topographic variables (elevation, aspect, convexity), climatic variables (precipitation) and hydrological variables (river distance, NDWI) were used as predictors. Weighted ensemble models were built using only the bivariate models with adjusted R2 > 0.5 for each SWC at different pF values. The second step consisted of running plant SDMs including the modeled SWC jointly with the conventional topo-climatic variables used for plant SDMs. Third, SDMs were run using only the conventional topo-climatic variables. Finally, comparing the models obtained in the second and third steps allowed the additional predictive power of SWC in plant SDMs to be assessed.
SWC ensemble models remained very good, with

  17. Improving performance of content-based image retrieval schemes in searching for similar breast mass regions: an assessment

    International Nuclear Information System (INIS)

    Wang Xiaohui; Park, Sang Cheol; Zheng Bin

    2009-01-01

This study aims to assess three methods commonly used in content-based image retrieval (CBIR) schemes and to investigate approaches to improving scheme performance. A reference database involving 3000 regions of interest (ROIs) was established. Among them, 400 ROIs were randomly selected to form a testing dataset. Three methods, namely mutual information, Pearson's correlation and a multi-feature-based k-nearest neighbor (KNN) algorithm, were applied to search for the 15 most similar reference ROIs for each testing ROI. The clinical relevance and visual similarity of the search results were evaluated using the areas under receiver operating characteristic (ROC) curves (AZ) and the average mean square difference (MSD) of the mass boundary spiculation level ratings between the testing and selected ROIs, respectively. The results showed that the AZ values were 0.893 ± 0.009, 0.606 ± 0.021 and 0.699 ± 0.026 for the KNN algorithm, mutual information and Pearson's correlation, respectively. The AZ values increased to 0.724 ± 0.017 and 0.787 ± 0.016 for mutual information and Pearson's correlation when using ROIs with sizes adaptively adjusted to the actual mass size. The corresponding MSD values were 2.107 ± 0.718, 2.301 ± 0.733 and 2.298 ± 0.743. The study demonstrates that, owing to the diversity of medical images, CBIR schemes using multiple image features and mass-size-based ROIs can achieve significantly improved performance.
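The multi-feature KNN search described above amounts to ranking reference ROIs by distance in feature space and keeping the k nearest. A minimal sketch with invented ROI feature vectors (the study uses k = 15; k = 2 here for brevity):

```python
import math

def knn_retrieve(query, references, k):
    """Return the k reference items closest to the query in feature space."""
    ranked = sorted(
        references,
        key=lambda item: math.dist(query, item["features"]),  # Euclidean distance
    )
    return ranked[:k]

# Invented ROI feature vectors (e.g. shape, contrast, spiculation measures).
refs = [
    {"id": 1, "features": [0.20, 0.50, 0.10]},
    {"id": 2, "features": [0.90, 0.80, 0.70]},
    {"id": 3, "features": [0.25, 0.45, 0.15]},
]
top = knn_retrieve([0.22, 0.48, 0.12], refs, k=2)
print([r["id"] for r in top])  # → [1, 3]
```

A weighted variant (weighting each feature's contribution to the distance) is a common refinement in CBIR schemes of this kind.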

  18. Fast business process similarity search

    NARCIS (Netherlands)

    Yan, Z.; Dijkman, R.M.; Grefen, P.W.P.J.

    2012-01-01

    Nowadays, it is common for organizations to maintain collections of hundreds or even thousands of business processes. Techniques exist to search through such a collection, for business process models that are similar to a given query model. However, those techniques compare the query model to each

  19. Similar Biophysical Abnormalities in Glomeruli and Podocytes from Two Distinct Models.

    Science.gov (United States)

    Embry, Addie E; Liu, Zhenan; Henderson, Joel M; Byfield, F Jefferson; Liu, Liping; Yoon, Joonho; Wu, Zhenzhen; Cruz, Katrina; Moradi, Sara; Gillombardo, C Barton; Hussain, Rihanna Z; Doelger, Richard; Stuve, Olaf; Chang, Audrey N; Janmey, Paul A; Bruggeman, Leslie A; Miller, R Tyler

    2018-03-23

    Background FSGS is a pattern of podocyte injury that leads to loss of glomerular function. Podocytes support other podocytes and glomerular capillary structure, oppose hemodynamic forces, form the slit diaphragm, and have mechanical properties that permit these functions. However, the biophysical characteristics of glomeruli and podocytes in disease remain unclear. Methods Using microindentation, atomic force microscopy, immunofluorescence microscopy, quantitative RT-PCR, and a three-dimensional collagen gel contraction assay, we studied the biophysical and structural properties of glomeruli and podocytes in chronic (Tg26 mice [HIV protein expression]) and acute (protamine administration [cytoskeletal rearrangement]) models of podocyte injury. Results Compared with wild-type glomeruli, Tg26 glomeruli became progressively more deformable with disease progression, despite increased collagen content. Tg26 podocytes had disordered cytoskeletons, markedly abnormal focal adhesions, and weaker adhesion; they failed to respond to mechanical signals and exerted minimal traction force in three-dimensional collagen gels. Protamine treatment had similar but milder effects on glomeruli and podocytes. Conclusions Reduced structural integrity of Tg26 podocytes causes increased deformability of glomerular capillaries and limits the ability of capillaries to counter hemodynamic force, possibly leading to further podocyte injury. Loss of normal podocyte mechanical integrity could injure neighboring podocytes due to the absence of normal biophysical signals required for podocyte maintenance. The severe defects in podocyte mechanical behavior in the Tg26 model may explain why Tg26 glomeruli soften progressively, despite increased collagen deposition, and may be the basis for the rapid course of glomerular diseases associated with severe podocyte injury. In milder injury (protamine), similar processes occur but over a longer time. 
Copyright © 2018 by the American Society of Nephrology.

  20. Neutrosophic Refined Similarity Measure Based on Cosine Function

    Directory of Open Access Journals (Sweden)

    Said Broumi

    2014-12-01

Full Text Available In this paper, a cosine similarity measure for neutrosophic refined (multi-)sets is proposed and its properties are studied. This cosine similarity measure for neutrosophic refined sets extends the improved cosine similarity measure for single-valued neutrosophic sets. Finally, an application of this cosine similarity measure for neutrosophic refined sets to medical diagnosis is presented.
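For orientation, one common form of the cosine similarity measure for single-valued neutrosophic sets, which the refined measure extends, averages an element-wise cosine over (truth, indeterminacy, falsity) triples. The sketch and the symptom profiles below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def cosine_similarity_svns(a, b):
    """Cosine similarity between two single-valued neutrosophic sets.

    Each set is a list of (truth, indeterminacy, falsity) triples; the
    per-element cosines are averaged over the n elements.
    """
    total = 0.0
    for (t1, i1, f1), (t2, i2, f2) in zip(a, b):
        num = t1 * t2 + i1 * i2 + f1 * f2
        den = (math.sqrt(t1**2 + i1**2 + f1**2)
               * math.sqrt(t2**2 + i2**2 + f2**2))
        total += num / den
    return total / len(a)

# Hypothetical symptom profiles: (truth, indeterminacy, falsity) per symptom.
patient = [(0.8, 0.1, 0.1), (0.3, 0.4, 0.5)]
disease = [(0.7, 0.2, 0.1), (0.4, 0.3, 0.4)]
score = cosine_similarity_svns(patient, disease)
assert 0.0 <= score <= 1.0  # identical sets score exactly 1.0
```

In a medical-diagnosis application of this kind, the patient profile is compared against each candidate disease profile and the highest-scoring disease is selected.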

  1. Measuring similarity between business process models

    NARCIS (Netherlands)

    Dongen, van B.F.; Dijkman, R.M.; Mendling, J.

    2007-01-01

    Quality aspects become increasingly important when business process modeling is used in a large-scale enterprise setting. In order to facilitate a storage without redundancy and an efficient retrieval of relevant process models in model databases it is required to develop a theoretical understanding

  2. QSAR models based on quantum topological molecular similarity.

    Science.gov (United States)

    Popelier, P L A; Smith, P J

    2006-07-01

A new method called quantum topological molecular similarity (QTMS) was fairly recently proposed [J. Chem. Inf. Comp. Sc., 41, 2001, 764] to construct a variety of medicinal, ecological and physical organic QSAR/QSPRs. The QTMS method uses quantum chemical topology (QCT) to define electronic descriptors drawn from modern ab initio wave functions of geometry-optimised molecules. It was shown that the current abundance of computing power can be utilised to inject realistic descriptors into QSAR/QSPRs. In this article we study seven datasets of medicinal interest: the dissociation constants (pKa) for a set of substituted imidazolines, the pKa of imidazoles, the ability of a set of indole derivatives to displace [3H]flunitrazepam from binding to bovine cortical membranes, the influenza inhibition constants for a set of benzimidazoles, the interaction constants for a set of amides and the enzyme liver alcohol dehydrogenase, the natriuretic activity of sulphonamide carbonic anhydrase inhibitors, and the toxicity of a series of benzyl alcohols. A partial least squares analysis in conjunction with a genetic algorithm delivered excellent models. They are also able to highlight the active site of the ligand, i.e. the part of the molecule whose structure determines the activity. The advantages and limitations of QTMS are discussed.

  3. Bilateral Trade Flows and Income Distribution Similarity

    Science.gov (United States)

    2016-01-01

    Current models of bilateral trade neglect the effects of income distribution. This paper addresses the issue by accounting for non-homothetic consumer preferences and hence investigating the role of income distribution in the context of the gravity model of trade. A theoretically justified gravity model is estimated for disaggregated trade data (Dollar volume is used as dependent variable) using a sample of 104 exporters and 108 importers for 1980–2003 to achieve two main goals. We define and calculate new measures of income distribution similarity and empirically confirm that greater similarity of income distribution between countries implies more trade. Using distribution-based measures as a proxy for demand similarities in gravity models, we find consistent and robust support for the hypothesis that countries with more similar income-distributions trade more with each other. The hypothesis is also confirmed at disaggregated level for differentiated product categories. PMID:27137462
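The gravity specification described above can be sketched in its standard log-linear form with an added income-distribution-similarity term; all coefficient values below are invented for illustration, not estimates from the paper.

```python
import math

def gravity_trade(gdp_i, gdp_j, distance, similarity,
                  b0=-10.0, b1=1.0, b2=1.0, b3=-1.0, b4=0.5):
    """Log-linear gravity model with an income-distribution-similarity term.

    ln(T_ij) = b0 + b1*ln(GDP_i) + b2*ln(GDP_j) + b3*ln(dist_ij) + b4*sim_ij
    The coefficients are illustrative assumptions, not estimates from the paper.
    """
    ln_t = (b0 + b1 * math.log(gdp_i) + b2 * math.log(gdp_j)
            + b3 * math.log(distance) + b4 * similarity)
    return math.exp(ln_t)

# With b4 > 0, greater income-distribution similarity predicts more trade,
# holding GDPs and distance fixed -- the paper's central hypothesis.
low = gravity_trade(1e12, 5e11, 4000, similarity=0.2)
high = gravity_trade(1e12, 5e11, 4000, similarity=0.9)
assert high > low
```

In an empirical implementation the coefficients would be estimated by regressing log bilateral trade flows on these regressors across country pairs and years.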

  4. Visual reconciliation of alternative similarity spaces in climate modeling

    Science.gov (United States)

    J Poco; A Dasgupta; Y Wei; William Hargrove; C.R. Schwalm; D.N. Huntzinger; R Cook; E Bertini; C.T. Silva

    2015-01-01

    Visual data analysis often requires grouping of data objects based on their similarity. In many application domains researchers use algorithms and techniques like clustering and multidimensional scaling to extract groupings from data. While extracting these groups using a single similarity criteria is relatively straightforward, comparing alternative criteria poses...

  5. Inspiration or deflation? Feeling similar or dissimilar to slim and plus-size models affects self-evaluation of restrained eaters.

    Science.gov (United States)

    Papies, Esther K; Nicolaije, Kim A H

    2012-01-01

    The present studies examined the effect of perceiving images of slim and plus-size models on restrained eaters' self-evaluation. While previous research has found that such images can lead to either inspiration or deflation, we argue that these inconsistencies can be explained by differences in perceived similarity with the presented model. The results of two studies (ns=52 and 99) confirmed this and revealed that restrained eaters with high (low) perceived similarity to the model showed more positive (negative) self-evaluations when they viewed a slim model, compared to a plus-size model. In addition, Study 2 showed that inducing in participants a similarities mindset led to more positive self-evaluations after viewing a slim compared to a plus-size model, but only among restrained eaters with a relatively high BMI. These results are discussed in the context of research on social comparison processes and with regard to interventions for protection against the possible detrimental effects of media images. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. A touch-probe path generation method through similarity analysis between the feature vectors in new and old models

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Hye Sung; Lee, Jin Won; Yang, Jeong Sam [Dept. of Industrial Engineering, Ajou University, Suwon (Korea, Republic of)

    2016-10-15

    The On-machine measurement (OMM), which measures a work piece during or after the machining process in the machining center, has the advantage of measuring the work piece directly within the work space without moving it. However, the path generation procedure used to determine the measuring sequence and variables for the complex features of a target work piece has the limitation of requiring time-consuming tasks to generate the measuring points and mostly relies on the proficiency of the on-site engineer. In this study, we propose a touch-probe path generation method using similarity analysis between the feature vectors of three-dimensional (3-D) shapes for the OMM. For the similarity analysis between a new 3-D model and existing 3-D models, we extracted the feature vectors from models that can describe the characteristics of a geometric shape model; then, we applied those feature vectors to a geometric histogram that displays a probability distribution obtained by the similarity analysis algorithm. In addition, we developed a computer-aided inspection planning system that corrects non-applied measuring points that are caused by minute geometry differences between the two models and generates the final touch-probe path.
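As a rough illustration of ranking archived models by feature similarity, the sketch below compares shape-feature histograms with cosine similarity; the histograms, model names and the choice of similarity are all invented for the example, not the paper's geometric-histogram algorithm:

```python
import math

# Hypothetical sketch: the paper builds geometric histograms from 3-D
# feature vectors; here we only compare two already-built histograms
# with cosine similarity to pick the most similar old model.

def cosine_similarity(h1, h2):
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return dot / (n1 * n2)

new_model_hist = [4, 9, 7, 1, 0, 3]          # histogram of the new 3-D model
library = {
    "bracket_v1": [4, 8, 7, 2, 0, 3],        # nearly identical geometry
    "housing_v2": [0, 1, 2, 9, 8, 5],        # very different geometry
}

# the most similar old model would donate its touch-probe path
best = max(library, key=lambda k: cosine_similarity(new_model_hist, library[k]))
```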

  7. [Establishment of the mathematic model of total quantum statistical moment standard similarity for application to medical theoretical research].

    Science.gov (United States)

    He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian

    2013-09-01

    The paper aims to elucidate and establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate its application to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, and was then validated and illustrated using the pharmacokinetics of three ingredients in Buyanghuanwu decoction (analysed with three data-analysis methods) and the chromatographic fingerprints of extracts obtained by dissolving the Buyanghuanwu-decoction extract in solvents with different solubility parameters. The established model consists of five main parameters: (1) the total quantum statistical moment similarity S_T, the overlapped area of the two normal distribution probability density curves obtained by converting the two TQSM parameter sets; (2) the total variability D_T, a confidence limit of the standard normal cumulative probability, equal to the absolute difference between the two normal cumulative probabilities integrated up to the intersection point of the curves; (3) the total variable probability 1-S_s, the standard normal distribution probability within the interval D_T; (4) the total variable probability (1-β)α; and (5) the stable confidence probability β(1-α), the correct probability for drawing positive and negative conclusions under the confidence coefficient α.
With the model, we analyzed the TQSMSS similarities of the pharmacokinetics of the three ingredients in Buyanghuanwu decoction under the three data-analysis methods; they ranged from 0.3852 to 0.9875, illuminating their different pharmacokinetic behaviours. The similarities (S_T) of the chromatographic fingerprints of extracts obtained with solvents of different solubility parameters ranged from 0.6842 to 0.9992, showing different constituents
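The overlap parameter S_T described above can be illustrated numerically: the sketch below approximates the overlapped area of two normal probability density curves by integrating the pointwise minimum. The (mean, SD) pairs are made up for illustration and are not real TQSM parameters:

```python
import math

# Sketch under assumptions: S_T is described as the overlapped area of two
# normal probability-density curves; here we approximate that overlap with
# a trapezoidal integral of min(pdf1, pdf2) for hypothetical (mean, sd) pairs.

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def overlap_area(mu1, s1, mu2, s2, lo=-20.0, hi=20.0, n=20000):
    """Trapezoidal integral of min(pdf1, pdf2) over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * min(normal_pdf(x, mu1, s1), normal_pdf(x, mu2, s2))
    return total * h

s_identical = overlap_area(0.0, 1.0, 0.0, 1.0)   # identical curves: area ~ 1
s_shifted   = overlap_area(0.0, 1.0, 3.0, 1.0)   # separated curves: area < 1
```

For two unit-variance normals separated by three standard deviations, the overlap equals 2Φ(−1.5) ≈ 0.134, so the similarity drops sharply as the two moment-derived curves separate.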

  8. Improvements of evaporation drag model

    International Nuclear Information System (INIS)

    Li Xiaoyan; Yang Yanhua; Xu Jijun

    2004-01-01

    A special observable experiment facility has been established, and a series of experiments have been carried out on this facility by pouring one or several high-temperature particles into a water pool. The experiments have verified the evaporation drag model, which holds that the non-symmetric profiles of the local evaporation rate and the local vapor density produce a net force on the hot particle that resists its motion. However, in Yang's evaporation drag model, radiation heat transfer is taken as the only way to transfer heat from the hot particle to the vapor-liquid interface, and all of the radiation energy is assumed to be deposited on the vapor-liquid interface, contributing to the vaporization rate and the mass balance of the vapor film. Therefore, heat conduction and heat convection are taken into account in the improved model. At the same time, the improved model presented in this paper includes calculations of the effect of the hot particle's temperature on the radiation absorption behavior of water

  9. Explicit Modeling of Ancestry Improves Polygenic Risk Scores and BLUP Prediction.

    Science.gov (United States)

    Chen, Chia-Yen; Han, Jiali; Hunter, David J; Kraft, Peter; Price, Alkes L

    2015-09-01

    Polygenic prediction using genome-wide SNPs can provide high prediction accuracy for complex traits. Here, we investigate the question of how to account for genetic ancestry when conducting polygenic prediction. We show that the accuracy of polygenic prediction in structured populations may be partly due to genetic ancestry. However, we hypothesized that explicitly modeling ancestry could improve polygenic prediction accuracy. We analyzed three GWAS of hair color (HC), tanning ability (TA), and basal cell carcinoma (BCC) in European Americans (sample size from 7,440 to 9,822) and considered two widely used polygenic prediction approaches: polygenic risk scores (PRSs) and best linear unbiased prediction (BLUP). We compared polygenic prediction without correction for ancestry to polygenic prediction with ancestry as a separate component in the model. In 10-fold cross-validation using the PRS approach, the R(2) for HC increased by 66% (0.0456-0.0755; P ancestry, which prevents ancestry effects from entering into each SNP effect and being overweighted. Surprisingly, explicitly modeling ancestry produces a similar improvement when using the BLUP approach, which fits all SNPs simultaneously in a single variance component and causes ancestry to be underweighted. We validate our findings via simulations, which show that the differences in prediction accuracy will increase in magnitude as sample sizes increase. In summary, our results show that explicitly modeling ancestry can be important in both PRS and BLUP prediction. © 2015 WILEY PERIODICALS, INC.
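A minimal sketch of the "explicit ancestry component" idea, in a toy linear setting that is not the authors' PRS/BLUP pipeline: when the trait partly depends on ancestry and the score only partly tags it, adding an ancestry covariate to an ordinary least-squares fit improves prediction R². All effect sizes are invented:

```python
import random

# Hedged toy example (not the authors' method): compare prediction R^2 with
# and without an explicit ancestry covariate, fitting OLS by hand.
random.seed(0)

n = 2000
ancestry = [random.gauss(0, 1) for _ in range(n)]             # e.g. first ancestry PC
prs      = [0.5 * a + random.gauss(0, 1) for a in ancestry]   # score partly tags ancestry
trait    = [0.8 * a + 0.3 * p + random.gauss(0, 1)
            for a, p in zip(ancestry, prs)]

def ols2(x1, x2, y):
    """Solve y ~ b1*x1 + b2*x2 (no intercept) via 2x2 normal equations."""
    s11 = sum(a * a for a in x1); s12 = sum(a * b for a, b in zip(x1, x2))
    s22 = sum(b * b for b in x2)
    sy1 = sum(a * c for a, c in zip(x1, y)); sy2 = sum(b * c for b, c in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (sy1 * s22 - sy2 * s12) / det, (s11 * sy2 - s12 * sy1) / det

def r2(pred, y):
    my = sum(y) / len(y)
    ss_res = sum((p - t) ** 2 for p, t in zip(pred, y))
    ss_tot = sum((t - my) ** 2 for t in y)
    return 1 - ss_res / ss_tot

# score only
b = sum(p * t for p, t in zip(prs, trait)) / sum(p * p for p in prs)
r2_prs = r2([b * p for p in prs], trait)

# score plus explicit ancestry component
b1, b2 = ols2(prs, ancestry, trait)
r2_joint = r2([b1 * p + b2 * a for p, a in zip(prs, ancestry)], trait)
```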

  10. A solvable self-similar model of the sausage instability in a resistive Z pinch

    International Nuclear Information System (INIS)

    Lampe, M.

    1991-01-01

    A solvable model is developed for the linearized sausage mode within the context of resistive magnetohydrodynamics. The model is based on the assumption that the fluid motion of the plasma is self-similar, as well as several assumptions pertinent to the limit of wavelength long compared to the pinch radius. The perturbations to the magnetic field are not assumed to be self-similar, but rather are calculated. Effects arising from time dependences of the z-independent unperturbed state, e.g., current rising as t^α, Ohmic heating, and time variation of the pinch radius, are included in the analysis. The formalism appears to provide a good representation of ''global'' modes that involve coherent sausage distortion of the entire cross section of the pinch, but excludes modes that are localized radially, and higher radial eigenmodes. For this and other reasons, it is expected that the model underestimates the maximum instability growth rates, but is reasonable for global sausage modes. The net effect of resistivity and time variation of the unperturbed state is to decrease the growth rate if α ≲ 1, but never by more than a factor of about 2. The effect is to increase the growth rate if α ≳ 1

  11. Personalized Mortality Prediction Driven by Electronic Medical Data and a Patient Similarity Metric

    Science.gov (United States)

    Lee, Joon; Maslove, David M.; Dubin, Joel A.

    2015-01-01

    Background Clinical outcome prediction normally employs static, one-size-fits-all models that perform well for the average patient but are sub-optimal for individual patients with unique characteristics. In the era of digital healthcare, it is feasible to dynamically personalize decision support by identifying and analyzing similar past patients, in a way that is analogous to personalized product recommendation in e-commerce. Our objectives were: 1) to prove that analyzing only similar patients leads to better outcome prediction performance than analyzing all available patients, and 2) to characterize the trade-off between training data size and the degree of similarity between the training data and the index patient for whom prediction is to be made. Methods and Findings We deployed a cosine-similarity-based patient similarity metric (PSM) to an intensive care unit (ICU) database to identify patients that are most similar to each patient and subsequently to custom-build 30-day mortality prediction models. Rich clinical and administrative data from the first day in the ICU from 17,152 adult ICU admissions were analyzed. The results confirmed that using data from only a small subset of most similar patients for training improves predictive performance in comparison with using data from all available patients. The results also showed that when too few similar patients are used for training, predictive performance degrades due to the effects of small sample sizes. Our PSM-based approach outperformed well-known ICU severity of illness scores. Although the improved prediction performance is achieved at the cost of increased computational burden, Big Data technologies can help realize personalized data-driven decision support at the point of care. Conclusions The present study provides crucial empirical evidence for the promising potential of personalized data-driven decision support systems. With the increasing adoption of electronic medical record (EMR) systems, our
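The PSM idea can be sketched as follows, with an invented toy cohort and features; cosine similarity on z-scored features is an illustrative choice, not necessarily the paper's exact preprocessing:

```python
import math, random, statistics

# Illustrative sketch (cohort and features invented): rank past ICU patients
# by cosine similarity to an index patient, then estimate 30-day mortality
# from only the most similar ones rather than from the whole cohort.
random.seed(1)

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)); nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# toy cohort: (age, heart rate, lactate) plus a 30-day mortality label
raw, died = [], []
for _ in range(500):
    severe = random.random() < 0.5
    raw.append([random.gauss(75, 5), random.gauss(110, 8), random.gauss(4.0, 1.0)]
               if severe else
               [random.gauss(50, 5), random.gauss(80, 8), random.gauss(1.0, 0.5)])
    died.append(random.random() < (0.6 if severe else 0.05))

# z-score each feature so no single unit dominates the angle
means = [statistics.mean(col) for col in zip(*raw)]
sds   = [statistics.stdev(col) for col in zip(*raw)]
cohort = [[(x - m) / s for x, m, s in zip(row, means, sds)] for row in raw]

index_patient = [(x - m) / s for x, m, s in zip([78, 115, 4.5], means, sds)]

order = sorted(range(len(cohort)),
               key=lambda i: cosine(index_patient, cohort[i]), reverse=True)
k = 50
risk_similar = sum(died[i] for i in order[:k]) / k   # neighbours only
risk_all = sum(died) / len(died)                     # one-size-fits-all baseline
```

For this severe-looking index patient, the neighbour-based estimate is driven by similar (severe) patients, while the cohort-wide rate dilutes the signal, which is the trade-off the paper quantifies.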

  12. QUALITY IMPROVEMENT MODEL AT THE MANUFACTURING PROCESS PREPARATION LEVEL

    Directory of Open Access Journals (Sweden)

    Dusko Pavletic

    2009-12-01

    Full Text Available The paper presents the basis for an operational quality improvement model at the manufacturing process preparation level. Numerous relevant quality assurance and improvement methods and tools are identified. Main manufacturing process principles are investigated in order to scrutinize one general model of the manufacturing process and to define the manufacturing process preparation level. Development and introduction of the operational quality improvement model are based on research conducted, and on the results of applying the methods and tools, in real manufacturing processes in the shipbuilding and automotive industries. The basic model structure is described and presented by an appropriate general algorithm. The operational quality improvement model developed lays down the main guidelines for practical and systematic application of quality improvement methods and tools.

  13. From epidemics to information propagation: Striking differences in structurally similar adaptive network models

    Science.gov (United States)

    Trajanovski, Stojan; Guo, Dongchao; Van Mieghem, Piet

    2015-09-01

    The continuous-time adaptive susceptible-infected-susceptible (ASIS) epidemic model and the adaptive information diffusion (AID) model are two adaptive spreading processes on networks, in which a link in the network changes depending on the infectious state of its end nodes, but in opposite ways: (i) in the ASIS model a link is removed between two nodes if exactly one of the nodes is infected, to suppress the epidemic, while a link is created in the AID model to speed up the information diffusion; (ii) a link is created between two susceptible nodes in the ASIS model to strengthen the healthy part of the network, while a link is broken in the AID model due to the lack of interest in informationless nodes. The ASIS and AID models may be considered as first-order models for cascades in real-world networks. While the ASIS model has been exploited in the literature, we show that the AID model is realistic by obtaining a good fit with Facebook data. Contrary to the common belief and intuition for such similar models, we show that the ASIS and AID models exhibit different but not opposite properties. Most remarkably, a unique metastable state always exists in the ASIS model, while there is an hourglass-shaped region of instability in the AID model. Moreover, the epidemic threshold is a linear function of the effective link-breaking rate in the ASIS model, while it is almost constant, but noisy, in the AID model.

  14. The positive group affect spiral : a dynamic model of the emergence of positive affective similarity in work groups

    NARCIS (Netherlands)

    Walter, F.; Bruch, H.

    This conceptual paper seeks to clarify the process of the emergence of positive collective affect. Specifically, it develops a dynamic model of the emergence of positive affective similarity in work groups. It is suggested that positive group affective similarity and within-group relationship

  15. An improved model for whole genome phylogenetic analysis by Fourier transform.

    Science.gov (United States)

    Yin, Changchuan; Yau, Stephen S-T

    2015-10-07

    DNA sequence similarity comparison is one of the major steps in computational phylogenetic studies. The sequence comparison of closely related DNA sequences and genomes is usually performed by multiple sequence alignment (MSA). While the MSA method is accurate for some types of sequences, it may produce incorrect results when DNA sequences have undergone rearrangements, as in many bacterial and viral genomes, and it is also limited by its computational complexity when comparing large volumes of data. Previously, we proposed an alignment-free method that exploits the full information content of DNA sequences by Discrete Fourier Transform (DFT), but still with some limitations. Here, we present a significantly improved method for the similarity comparison of DNA sequences by DFT. In this method, we map DNA sequences into 2-dimensional (2D) numerical sequences and then apply DFT to transform the 2D numerical sequences into the frequency domain. In the 2D mapping, the nucleotide composition of a DNA sequence is a determinant factor, and the 2D mapping reduces the nucleotide composition bias in the distance measure, thus improving the similarity measure of DNA sequences. To compare the DFT power spectra of DNA sequences with different lengths, we propose an improved even-scaling algorithm that extends shorter DFT power spectra to the length of the longest underlying sequence. After the DFT power spectra are evenly scaled, the spectra have the same dimensionality in the Fourier frequency space, and the Euclidean distances between the full Fourier power spectra of the DNA sequences are used as the dissimilarity metrics. The improved DFT method, with computational performance increased by the 2D numerical representation, is applicable to DNA sequences of any length range. We assess the accuracy of the improved DFT similarity measure in hierarchical clustering of different DNA sequences, including simulated and real datasets.
The method yields accurate and reliable phylogenetic trees
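A hedged sketch of the pipeline described above: 2-D mapping, DFT power spectrum, even scaling, and Euclidean distance. The specific 2-D mapping (purine/pyrimidine and amino/keto axes) and the linear-interpolation scaling used here are illustrative assumptions, not the authors' exact algorithms:

```python
import cmath, math

# Illustrative 2-D mapping: each base becomes a point in the plane,
# treated as one complex signal for the DFT.
MAP = {"A": (1, 1), "G": (1, -1), "C": (-1, 1), "T": (-1, -1)}

def power_spectrum(seq):
    """Naive O(n^2) DFT power spectrum of the mapped sequence."""
    xs = [complex(*MAP[b]) for b in seq]
    n = len(xs)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(xs))) ** 2
            for k in range(n)]

def scale_to(spec, m):
    """Linearly interpolate a power spectrum onto m points (toy even scaling)."""
    n = len(spec)
    if n == m:
        return list(spec)
    out = []
    for j in range(m):
        t = j * (n - 1) / (m - 1)
        i = min(int(t), n - 2)
        f = t - i
        out.append(spec[i] * (1 - f) + spec[i + 1] * f)
    return out

def dft_distance(s1, s2):
    m = max(len(s1), len(s2))
    p1 = scale_to(power_spectrum(s1), m)
    p2 = scale_to(power_spectrum(s2), m)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

d_same = dft_distance("ATGCATGCAT", "ATGCATGCAT")   # identical: distance 0
d_diff = dft_distance("ATGCATGCAT", "GGGGGCCCCC")   # different composition
```

The pairwise `dft_distance` values would then feed a hierarchical clustering step to build the tree, as the abstract describes.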

  16. Model improvements to simulate charging in SEM

    Science.gov (United States)

    Arat, K. T.; Klimpel, T.; Hagen, C. W.

    2018-03-01

    Charging of insulators is a complex phenomenon to simulate since the accuracy of the simulations is very sensitive to the interaction of electrons with matter and electric fields. In this study, we report model improvements for a previously developed Monte Carlo simulator to more accurately simulate samples that charge. The improvements include both the modelling of low-energy electron scattering and the charging of insulators. The new first-principles scattering models provide a more realistic charge distribution cloud in the material, and a better match between non-charging simulations and experimental results. Improvements to the charging models mainly focus on the redistribution of charge carriers in the material through induced conductivity (EBIC) and a breakdown model, leading to a smoother distribution of the charges. Combined with more accurate tracing of low-energy electrons in the electric field, we managed to reproduce the dynamically changing charging contrast due to an induced positive surface potential.

  17. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection.

    Science.gov (United States)

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing

    2015-01-01

    Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used to reduce the dimensionality of the features and speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measures of document frequency and term frequency, and one potential drawback of these methods is that they treat features separately. Hence, we first design a similarity measure between context information that takes word co-occurrences and phrase chunks around the features into account. We then introduce this context-information similarity into the importance measure of the features, replacing document and term frequency, and thus propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.
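One way to picture context-aware feature selection is the greedy sketch below: features are scored by frequency but discounted when their co-occurrence context resembles that of already-selected features. The scoring rule and the toy corpus are invented for illustration and are not the paper's exact method:

```python
import math
from collections import Counter

# Toy corpus: two "interaction" documents and two off-topic documents.
docs = [
    "protein binds protein in yeast",
    "protein interaction detected between kinase and substrate",
    "weather is sunny today",
    "sunny weather expected tomorrow",
]
tokenized = [d.split() for d in docs]

def context_vector(word):
    """Bag of words co-occurring with `word` in the same document."""
    ctx = Counter()
    for toks in tokenized:
        if word in toks:
            ctx.update(t for t in toks if t != word)
    return ctx

def cosine(c1, c2):
    keys = set(c1) | set(c2)
    dot = sum(c1[k] * c2[k] for k in keys)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

freq = Counter(t for toks in tokenized for t in toks)
candidates = ["protein", "interaction", "weather", "sunny"]

# Greedy selection: frequency, discounted by context redundancy.
selected = []
while len(selected) < 2:
    def score(w):
        redundancy = max((cosine(context_vector(w), context_vector(s))
                          for s in selected), default=0.0)
        return freq[w] * (1.0 - redundancy)
    selected.append(max((w for w in candidates if w not in selected), key=score))
```

After "protein" is chosen, "interaction" is discounted because its context overlaps protein's, so the second pick comes from the dissimilar weather context instead.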

  18. Wrong, but useful: regional species distribution models may not be improved by range-wide data under biased sampling.

    Science.gov (United States)

    El-Gabbas, Ahmed; Dormann, Carsten F

    2018-02-01

    Species distribution modeling (SDM) is an essential method in ecology and conservation. SDMs are often calibrated within one country's borders, typically along a limited environmental gradient with biased and incomplete data, making the quality of these models questionable. In this study, we evaluated how adequate national presence-only data are for calibrating regional SDMs. We trained SDMs for Egyptian bat species at two different scales: only within Egypt and at a species-specific global extent. We used two modeling algorithms: Maxent and elastic net, both under the point-process modeling framework. For each modeling algorithm, we measured the congruence of the predictions of global and regional models for Egypt, assuming that the lower the congruence, the lower the appropriateness of the Egyptian dataset to describe the species' niche. We inspected the effect of incorporating predictions from global models as an additional predictor ("prior") to regional models, and quantified the improvement in terms of AUC and the congruence between regional models run with and without priors. Moreover, we analyzed predictive performance improvements after correction for sampling bias at both scales. On average, predictions from global and regional models in Egypt only weakly concur. Collectively, the use of priors did not lead to much improvement: similar AUC and high congruence between regional models calibrated with and without priors. Correction for sampling bias led to higher model performance whichever prior was used, making the effect of priors less pronounced. Under biased and incomplete sampling, the use of global bat data did not improve regional model performance. Without enough bias-free regional data, we cannot objectively identify the actual improvement of regional models after incorporating information from the global niche. However, we still believe in great potential for global model predictions to guide future surveys and improve regional sampling in data

  19. Feasibility of similarity coefficient map for improving morphological evaluation of T2* weighted MRI for renal cancer

    International Nuclear Information System (INIS)

    Wang Hao-Yu; Bao Shang-Lian; Jiani Hu; Meng Li; Haacke, E. M.; Xie Yao-Qin; Chen Jie; Amy Yu; Wei Xin-Hua; Dai Yong-Ming

    2013-01-01

    The purpose of this paper is to investigate the feasibility of using a similarity coefficient map (SCM) to improve the morphological evaluation of T2*-weighted (T2*W) magnetic resonance imaging (MRI) for renal cancer. Simulation studies and in vivo 12-echo T2*W experiments for renal cancers were performed for this purpose. The results of the first simulation study suggest that an SCM can reveal small structures which are hard to distinguish from the background tissue in T2*W images and the corresponding T2* map. The capability of improving the morphological evaluation is likely due to the improvement in the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR) by using the SCM technique. Compared with T2*W images, an SCM can improve the SNR by a factor ranging from 1.87 to 2.47. Compared with T2* maps, an SCM can improve the SNR by a factor ranging from 3.85 to 33.31. Compared with T2*W images, an SCM can improve the CNR by a factor ranging from 2.09 to 2.43. Compared with T2* maps, an SCM can improve the CNR by a factor ranging from 1.94 to 8.14. For a given noise level, the improvements of the SNR and the CNR depend mainly on the original SNRs and CNRs in the T2*W images, respectively. In vivo experiments confirmed the results of the first simulation study. The results of the second simulation study suggest that the more echoes are used to generate the SCM, the higher the SNRs and CNRs achieved in the SCM. In conclusion, an SCM can provide improved morphological evaluation of T2*W MR images for renal cancer by unveiling fine structures which are ambiguous or invisible in the corresponding T2*W MR images and T2* maps. Furthermore, in practical applications, for a fixed total sampling time, one should increase the number of echoes as much as possible to achieve SCMs with better SNRs and CNRs
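One common way to build a similarity coefficient map is to correlate each pixel's multi-echo signal with a reference decay curve; the sketch below follows that idea with a mono-exponential T2* model and Pearson correlation, both illustrative assumptions rather than the paper's exact formula:

```python
import math, statistics

# Hedged sketch: SCM value of a pixel = Pearson correlation between its
# 12-echo signal and a reference tissue decay curve. The decay model and
# correlation choice are assumptions for illustration.

echo_times = [4.0 * (i + 1) for i in range(12)]     # 12 echoes (ms, invented)

def t2star_signal(t2star):
    """Mono-exponential T2* decay sampled at the echo times."""
    return [math.exp(-t / t2star) for t in echo_times]

def pearson(u, v):
    mu, mv = statistics.mean(u), statistics.mean(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v))
    return num / den

reference = t2star_signal(30.0)                      # reference tissue curve

scm_tissue = pearson(t2star_signal(30.0), reference) # same tissue: r = 1
scm_lesion = pearson(t2star_signal(8.0), reference)  # faster decay: r < 1
```

A pixel whose decay matches the reference scores near 1, while a lesion pixel with a different T2* scores lower, which is what makes structures stand out in the map.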

  20. Process correlation analysis model for process improvement identification.

    Science.gov (United States)

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by process assessment identifying strengths and weaknesses, and based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the base. CMMI defines a set of process areas involved in software development and what is to be carried out in each process area in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort required and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  1. Soil hydraulic properties near saturation, an improved conductivity model

    DEFF Research Database (Denmark)

    Børgesen, Christen Duus; Jacobsen, Ole Hørbye; Hansen, Søren

    2006-01-01

    of commonly used hydraulic conductivity models and give suggestions for improved models. Water retention and near saturated and saturated hydraulic conductivity were measured for a variety of 81 top and subsoils. The hydraulic conductivity models by van Genuchten [van Genuchten, 1980. A closed-form equation...... for predicting the hydraulic conductivity of unsaturated soils. Soil Sci. Soc. Am. J. 44, 892–898.] (vGM) and Brooks and Corey, modified by Jarvis [Jarvis, 1991. MACRO—A Model of Water Movement and Solute Transport in Macroporous Soils. Swedish University of Agricultural Sciences. Department of Soil Sciences....... Optimising a matching factor (k0) improved the fit considerably whereas optimising the l-parameter in the vGM model improved the fit only slightly. The vGM was improved with an empirical scaling function to account for the rapid increase in conductivity near saturation. Using the improved models...
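For reference, the standard van Genuchten–Mualem (vGM) conductivity curve that this abstract builds on can be written in a few lines; the parameter values below are made up for illustration and are not fitted to the study's soils:

```python
import math

# Illustrative implementation of the textbook van Genuchten-Mualem (vGM)
# hydraulic conductivity function K(h); all parameter values are invented.

def vgm_conductivity(h, ks=10.0, alpha=0.05, n=1.8, l=0.5):
    """K(h) [cm/d] for pressure head h [cm]; h >= 0 means saturated."""
    if h >= 0:
        return ks                                   # saturated conductivity
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m)      # effective saturation
    return ks * se ** l * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

k_sat = vgm_conductivity(0.0)       # equals ks at saturation
k_dry = vgm_conductivity(-1000.0)   # many orders of magnitude smaller
```

The study's improvements (optimising a matching factor k0 and adding a scaling function near saturation) modify exactly this kind of curve close to h = 0, where the classical vGM form underestimates the rapid conductivity rise.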

  2. Similarity-based multi-model ensemble approach for 1-15-day advance prediction of monsoon rainfall over India

    Science.gov (United States)

    Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati

    2018-04-01

    The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on a similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. This approach is based on the assumption that training with similar conditions may provide better forecasts than the sequential training used in conventional MME approaches. In this approach, the training dataset is selected by matching the present-day condition to the archived dataset: the days with the most similar conditions are identified and used for training the model, and the coefficients thus generated are used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. European Centre for Medium-Range Weather Forecasts (ECMWF), United Kingdom Meteorological Office (UKMO), National Centre for Environment Prediction (NCEP) and China Meteorological Administration (CMA), have been used for developing the SMME forecasts. The forecasts of 1-5, 6-10 and 11-15 days were generated using the newly developed approach for each pentad of June-September during the years 2008-2013, and the skill of the model was analysed using verification scores, viz. equitable threat score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and the Nash-Sutcliffe model efficiency index. Statistical analysis of the SMME forecasts shows superior forecast skill compared to the conventional MME and the individual models for all the forecast ranges, viz. 1-5, 6-10 and 11-15 days.
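The similarity-based training idea can be sketched with a toy two-regime example: ensemble weights fitted only on archived days whose model forecasts resemble today's outperform weights fitted on the whole archive. The two-model setup and all numbers are invented, not the paper's GCMs or data:

```python
import math, random

# Hypothetical sketch of the similarity criterion (not the full SMME):
# pick analog days from the archive, fit least-squares ensemble weights
# on those analogs only, and compare with weights fitted on all days.
random.seed(3)

def make_day(regime):
    if regime == "active":
        f1, f2 = random.gauss(10, 1), random.gauss(4, 1)
        obs = 0.9 * f1 + 0.1 * f2          # model 1 dominates in this regime
    else:
        f1, f2 = random.gauss(2, 1), random.gauss(8, 1)
        obs = 0.1 * f1 + 0.9 * f2          # model 2 dominates here
    return (f1, f2), obs

archive = [make_day("active") for _ in range(60)] + \
          [make_day("break") for _ in range(60)]

def fit_weights(days):
    """Least-squares weights for obs ~ w1*f1 + w2*f2 (normal equations)."""
    s11 = sum(f[0] ** 2 for f, _ in days); s12 = sum(f[0] * f[1] for f, _ in days)
    s22 = sum(f[1] ** 2 for f, _ in days)
    b1 = sum(f[0] * o for f, o in days); b2 = sum(f[1] * o for f, o in days)
    det = s11 * s22 - s12 ** 2
    return (b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det

today_fc = (9.5, 4.2)                                   # looks like an active day
today_obs = 0.9 * today_fc[0] + 0.1 * today_fc[1]

analogs = sorted(archive, key=lambda d: math.dist(d[0], today_fc))[:30]

w1, w2 = fit_weights(analogs)      # similarity-selected training set
v1, v2 = fit_weights(archive)      # conventional: all archived days

err_smme = abs(w1 * today_fc[0] + w2 * today_fc[1] - today_obs)
err_all  = abs(v1 * today_fc[0] + v2 * today_fc[1] - today_obs)
```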

  3. Renewing the Respect for Similarity

    Directory of Open Access Journals (Sweden)

    Shimon eEdelman

    2012-07-01

    Full Text Available In psychology, the concept of similarity has traditionally evoked a mixture of respect, stemming from its ubiquity and intuitive appeal, and concern, due to its dependence on the framing of the problem at hand and on its context. We argue for a renewed focus on similarity as an explanatory concept, by surveying established results and new developments in the theory and methods of similarity-preserving associative lookup and dimensionality reduction — critical components of many cognitive functions, as well as of intelligent data management in computer vision. We focus in particular on the growing family of algorithms that support associative memory by performing hashing that respects local similarity, and on the uses of similarity in representing structured objects and scenes. Insofar as these similarity-based ideas and methods are useful in cognitive modeling and in AI applications, they should be included in the core conceptual toolkit of computational neuroscience.
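The "hashing that respects local similarity" mentioned above can be illustrated with random-hyperplane (SimHash-style) signatures: similar vectors receive bit signatures that differ in few positions, so Hamming distance approximates angular similarity. Dimensions and perturbation sizes are invented:

```python
import random

# Locality-sensitive hashing with random hyperplanes: each hyperplane
# contributes one signature bit (which side of the plane the vector is on).
random.seed(4)

DIM, BITS = 20, 64
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def signature(vec):
    return [1 if sum(p * x for p, x in zip(plane, vec)) >= 0 else 0
            for plane in planes]

def hamming(s1, s2):
    return sum(a != b for a, b in zip(s1, s2))

base = [random.gauss(0, 1) for _ in range(DIM)]
near = [x + random.gauss(0, 0.05) for x in base]   # small perturbation of base
far  = [random.gauss(0, 1) for _ in range(DIM)]    # unrelated vector

d_near = hamming(signature(base), signature(near)) # few bits flip
d_far  = hamming(signature(base), signature(far))  # roughly half the bits flip
```

Because the probability that one bit differs is proportional to the angle between the vectors, nearby items land in nearby buckets, which is what makes such signatures useful for fast associative lookup.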

  4. Discovering Music Structure via Similarity Fusion

    DEFF Research Database (Denmark)

    Automatic methods for music navigation and music recommendation exploit the structure in the music to carry out a meaningful exploration of the “song space”. To get a satisfactory performance from such systems, one should incorporate as much information about songs similarity as possible; however... semantics”, in such a way that all observed similarities can be satisfactorily explained using the latent semantics. Therefore, one can think of these semantics as the real structure in music, in the sense that they can explain the observed similarities among songs. The suitability of the PLSA model for representing music structure is studied in a simplified scenario consisting of 4412 songs and two similarity measures among them. The results suggest that the PLSA model is a useful framework to combine different sources of information, and provides a reasonable space for song representation.

  5. Discovering Music Structure via Similarity Fusion

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Parrado-Hernandez, Emilio; Meng, Anders

    Automatic methods for music navigation and music recommendation exploit the structure in the music to carry out a meaningful exploration of the “song space”. To get a satisfactory performance from such systems, one should incorporate as much information about songs similarity as possible; however...... semantics”, in such a way that all observed similarities can be satisfactorily explained using the latent semantics. Therefore, one can think of these semantics as the real structure in music, in the sense that they can explain the observed similarities among songs. The suitability of the PLSA model...... for representing music structure is studied in a simplified scenario consisting of 4412 songs and two similarity measures among them. The results suggest that the PLSA model is a useful framework to combine different sources of information, and provides a reasonable space for song representation....

  6. Study on Software Quality Improvement based on Rayleigh Model and PDCA Model

    OpenAIRE

    Ning Jingfeng; Hu Ming

    2013-01-01

    As the software industry gradually matures, software quality is regarded as the life of a software enterprise. This article discusses how to improve the quality of software: it applies the Rayleigh model and the PDCA model to software quality management, combines them with the defect removal effectiveness index, and uses the PDCA model to solve the problem of quality management objectives when applying the Rayleigh model in bidirectional quality improvement strategies of software quality management, a...

  7. An improved anisotropy-resolving subgrid-scale model for flows in laminar–turbulent transition region

    International Nuclear Information System (INIS)

    Inagaki, Masahide; Abe, Ken-ichi

    2017-01-01

    Highlights: • An anisotropy-resolving subgrid-scale model, covering a wide range of grid resolutions, is improved. • The new model enhances its applicability to flows in the laminar-turbulent transition region. • A mixed-timescale subgrid-scale model is used as the eddy viscosity model. • The proposed model successfully predicts the channel flows at transitional Reynolds numbers. • The influence of the definition of the grid-filter width is also investigated. - Abstract: Some types of mixed subgrid-scale (SGS) models combining an isotropic eddy-viscosity model and a scale-similarity model can be used to effectively improve the accuracy of large eddy simulation (LES) in predicting wall turbulence. Abe (2013) has recently proposed a stabilized mixed model that maintains its computational stability through a unique procedure that prevents the energy transfer between the grid-scale (GS) and SGS components induced by the scale-similarity term. At the same time, since this model can successfully predict the anisotropy of the SGS stress, the predictive performance, particularly at coarse grid resolutions, is remarkably improved in comparison with other mixed models. However, since the stabilized anisotropy-resolving SGS model includes a transport equation of the SGS turbulence energy, k_SGS, containing a production term proportional to the square root of k_SGS, its applicability to flows with both laminar and turbulent regions is not so high. This is because such a production term causes k_SGS to self-reproduce. Consequently, the laminar–turbulent transition region predicted by this model depends on the inflow or initial condition of k_SGS. To resolve these issues, in the present study, the mixed-timescale (MTS) SGS model proposed by Inagaki et al. (2005) is introduced into the stabilized mixed model as the isotropic eddy-viscosity part and the production term in the k_SGS transport equation. In the MTS model, the SGS turbulence energy, k_es, estimated by

  8. The perfectionism model of binge eating: testing unique contributions, mediating mechanisms, and cross-cultural similarities using a daily diary methodology.

    Science.gov (United States)

    Sherry, Simon B; Sabourin, Brigitte C; Hall, Peter A; Hewitt, Paul L; Flett, Gordon L; Gralnick, Tara M

    2014-12-01

    The perfectionism model of binge eating (PMOBE) is an integrative model explaining the link between perfectionism and binge eating. This model proposes socially prescribed perfectionism confers risk for binge eating by generating exposure to 4 putative binge triggers: interpersonal discrepancies, low interpersonal esteem, depressive affect, and dietary restraint. The present study addresses important gaps in knowledge by testing if these 4 binge triggers uniquely predict changes in binge eating on a daily basis and if daily variations in each binge trigger mediate the link between socially prescribed perfectionism and daily binge eating. Analyses also tested if proposed mediational models generalized across Asian and European Canadians. The PMOBE was tested in 566 undergraduate women using a 7-day daily diary methodology. Depressive affect predicted binge eating, whereas anxious affect did not. Each binge trigger uniquely contributed to binge eating on a daily basis. All binge triggers except for dietary restraint mediated the relationship between socially prescribed perfectionism and change in daily binge eating. Results suggested cross-cultural similarities, with the PMOBE applying to both Asian and European Canadian women. The present study advances understanding of the personality traits and the contextual conditions accompanying binge eating and provides an important step toward improving treatments for people suffering from eating binges and associated negative consequences.

  9. Two-Dimensional Magnetotelluric Modelling of Ore Deposits: Improvements in Model Constraints by Inclusion of Borehole Measurements

    Science.gov (United States)

    Kalscheuer, Thomas; Juhojuntti, Niklas; Vaittinen, Katri

    2017-12-01

    functions is used as the initial model for the inversion of the surface impedances, skin-effect transfer functions and vertical magnetic and electric transfer functions. For both synthetic examples, the inversion models resulting from surface and borehole measurements have higher similarity to the true models than models computed exclusively from surface measurements. However, the most prominent improvements were obtained for the first example, in which a deep small-sized ore body is more easily distinguished from a shallow main ore body penetrated by a borehole and the extent of the shadow zone (a conductive artefact) underneath the main conductor is strongly reduced. Formal model error and resolution analysis demonstrated that predominantly the skin-effect transfer functions improve model resolution at depth below the sensors and at distances of ~300-1000 m laterally off a borehole, whereas the vertical electric and magnetic transfer functions improve resolution along the borehole and in its immediate vicinity. Furthermore, we studied the signal levels at depth and provided specifications of borehole magnetic and electric field sensors to be developed in a future project. Our results suggest that three-component SQUID and fluxgate magnetometers should be developed to facilitate borehole MT measurements at signal frequencies above and below 1 Hz, respectively.

  10. Investigation of psychophysical similarity measures for selection of similar images in the diagnosis of clustered microcalcifications on mammograms

    International Nuclear Information System (INIS)

    Muramatsu, Chisako; Li Qiang; Schmidt, Robert; Shiraishi, Junji; Doi, Kunio

    2008-01-01

    The presentation of images with lesions of known pathology that are similar to an unknown lesion may be helpful to radiologists in the diagnosis of challenging cases for improving the diagnostic accuracy and also for reducing variation among different radiologists. The authors have been developing a computerized scheme for automatically selecting similar images with clustered microcalcifications on mammograms from a large database. For similar images to be useful, they must be similar from the point of view of the diagnosing radiologists. In order to select such images, subjective similarity ratings were obtained for a number of pairs of clustered microcalcifications by breast radiologists for establishment of a "gold standard" of image similarity, and the gold standard was employed for determination and evaluation of the selection of similar images. The images used in this study were obtained from the Digital Database for Screening Mammography developed by the University of South Florida. The subjective similarity ratings for 300 pairs of images with clustered microcalcifications were determined by ten breast radiologists. The authors determined a number of image features which represent the characteristics of clustered microcalcifications that radiologists would use in their diagnosis. For determination of objective similarity measures, an artificial neural network (ANN) was employed. The ANN was trained with the average subjective similarity ratings as teacher and selected image features as input data. The ANN was trained to learn the relationship between the image features and the radiologists' similarity ratings; therefore, once the training was completed, the ANN was able to determine the similarity, called a psychophysical similarity measure, which was expected to be close to radiologists' impressions, for an unknown pair of clustered microcalcifications. By use of a leave-one-out test method, the best combination of features was selected. The correlation
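The feature-to-rating regression described in this record can be sketched as a small network trained by gradient descent. The feature vectors, ratings, and network size below are invented for illustration; the study's actual ANN inputs were image features of clustered microcalcifications and its targets were averaged radiologist similarity ratings.

```python
import math
import random

random.seed(0)

# Hypothetical training data: each feature vector loosely stands for measured
# differences between two clusters of microcalcifications, and the target is
# an averaged radiologist similarity rating in [0, 1]. All values are made up.
data = [([0.1, 0.2, 0.1], 0.9),
        ([0.8, 0.7, 0.9], 0.1),
        ([0.4, 0.5, 0.3], 0.5),
        ([0.2, 0.1, 0.2], 0.8),
        ([0.9, 0.8, 0.7], 0.2)]

H = 4                                   # hidden units
n_in = len(data[0][0])
w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    # One sigmoid hidden layer, linear output (the psychophysical similarity).
    h = [sigmoid(sum(w * xj for w, xj in zip(w1[i], x)) + b1[i]) for i in range(H)]
    return h, sum(w2[i] * h[i] for i in range(H)) + b2

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

lr = 0.2
loss_before = mse()
for _ in range(2000):                   # plain per-sample gradient descent
    for x, t in data:
        h, y = forward(x)
        err = y - t                     # gradient of squared error w.r.t. y (up to a factor 2)
        for i in range(H):
            dh = err * w2[i] * h[i] * (1.0 - h[i])
            w2[i] -= lr * err * h[i]
            b1[i] -= lr * dh
            for j in range(n_in):
                w1[i][j] -= lr * dh * x[j]
        b2 -= lr * err
loss_after = mse()
print(loss_before, "->", loss_after)    # fit error shrinks during training
```

Once trained, `forward(x)[1]` plays the role of the psychophysical similarity measure for a new feature vector; the study additionally used leave-one-out testing to pick the feature combination.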

  11. Investigation of psychophysical similarity measures for selection of similar images in the diagnosis of clustered microcalcifications on mammograms

    Energy Technology Data Exchange (ETDEWEB)

    Muramatsu, Chisako; Li Qiang; Schmidt, Robert; Shiraishi, Junji; Doi, Kunio [Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States) and Department of Intelligent Image Information, Gifu University, 1-1 Yanagido, Gifu (Japan); Department of Radiology, Duke Advanced Imaging Labs, Duke University, 2424 Erwin Road, Suite 302, Durham, North Carolina 27705 (United States); Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)

    2008-12-15

    The presentation of images with lesions of known pathology that are similar to an unknown lesion may be helpful to radiologists in the diagnosis of challenging cases for improving the diagnostic accuracy and also for reducing variation among different radiologists. The authors have been developing a computerized scheme for automatically selecting similar images with clustered microcalcifications on mammograms from a large database. For similar images to be useful, they must be similar from the point of view of the diagnosing radiologists. In order to select such images, subjective similarity ratings were obtained for a number of pairs of clustered microcalcifications by breast radiologists for establishment of a "gold standard" of image similarity, and the gold standard was employed for determination and evaluation of the selection of similar images. The images used in this study were obtained from the Digital Database for Screening Mammography developed by the University of South Florida. The subjective similarity ratings for 300 pairs of images with clustered microcalcifications were determined by ten breast radiologists. The authors determined a number of image features which represent the characteristics of clustered microcalcifications that radiologists would use in their diagnosis. For determination of objective similarity measures, an artificial neural network (ANN) was employed. The ANN was trained with the average subjective similarity ratings as teacher and selected image features as input data. The ANN was trained to learn the relationship between the image features and the radiologists' similarity ratings; therefore, once the training was completed, the ANN was able to determine the similarity, called a psychophysical similarity measure, which was expected to be close to radiologists' impressions, for an unknown pair of clustered microcalcifications. By use of a leave-one-out test method, the best combination of features

  12. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle and assumes productivity to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through a change of equipment, and the model can easily be applied to both manufacturing and service industries.
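The selection problem described above, binary decisions over improvement techniques with a linear productivity objective, can be sketched as a tiny 0/1 program. The technique names, costs, gains, and budget below are hypothetical; the paper's model covers fifty-four techniques and would be handed to a MIP solver rather than enumerated exhaustively.

```python
from itertools import product

# Hypothetical data (not from the paper): each improvement technique has an
# implementation cost and an estimated linear contribution to productivity.
techniques = {
    "preventive_maintenance": (4, 7.0),
    "operator_training":      (3, 5.5),
    "equipment_upgrade":      (6, 9.0),
    "layout_redesign":        (5, 6.0),
    "quality_circles":        (2, 3.0),
    "setup_time_reduction":   (3, 4.5),
}
budget = 10

names = list(techniques)
best_gain, best_choice = -1.0, None
# Exhaustive search over binary decision variables x_i in {0, 1}; this is the
# same feasible set a mixed integer program would describe, just brute-forced.
for x in product([0, 1], repeat=len(names)):
    cost = sum(techniques[n][0] for n, xi in zip(names, x) if xi)
    gain = sum(techniques[n][1] for n, xi in zip(names, x) if xi)
    if cost <= budget and gain > best_gain:
        best_gain, best_choice = gain, [n for n, xi in zip(names, x) if xi]

print(best_choice, best_gain)
```

With these made-up numbers the optimum combines the three cheaper techniques rather than the single highest-gain one, which is exactly the trade-off an optimal selection model is meant to capture.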

  13. Self-similarities of periodic structures for a discrete model of a two-gene system

    International Nuclear Information System (INIS)

    Souza, S.L.T. de; Lima, A.A.; Caldas, I.L.; Medrano-T, R.O.; Guimarães-Filho, Z.O.

    2012-01-01

    We report self-similar properties of periodic structures remarkably organized in the two-parameter space for a two-gene system, described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system by using Lyapunov exponents and isoperiodic diagrams identifying periodic windows, denominated Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both periodic windows. We also identify Fibonacci-type series and the Golden ratio for Arnold tongues, and period multiple-of-three windows for shrimps. -- Highlights: ► The existence of noticeable periodic windows has been reported recently for several nonlinear systems. ► The periodic window distributions appear highly organized in two-parameter space. ► We characterize self-similar properties of Arnold tongues and shrimps for a two-gene model. ► We determine the period of the Arnold tongues recognizing a Fibonacci-type sequence. ► We explore self-similar features of the shrimps identifying multiple period-three structures.

  14. Self-similarities of periodic structures for a discrete model of a two-gene system

    Energy Technology Data Exchange (ETDEWEB)

    Souza, S.L.T. de, E-mail: thomaz@ufsj.edu.br [Departamento de Física e Matemática, Universidade Federal de São João del-Rei, Ouro Branco, MG (Brazil); Lima, A.A. [Escola de Farmácia, Universidade Federal de Ouro Preto, Ouro Preto, MG (Brazil); Caldas, I.L. [Instituto de Física, Universidade de São Paulo, São Paulo, SP (Brazil); Medrano-T, R.O. [Departamento de Ciências Exatas e da Terra, Universidade Federal de São Paulo, Diadema, SP (Brazil); Guimarães-Filho, Z.O. [Aix-Marseille Univ., CNRS PIIM UMR6633, International Institute for Fusion Science, Marseille (France)

    2012-03-12

    We report self-similar properties of periodic structures remarkably organized in the two-parameter space for a two-gene system, described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system by using Lyapunov exponents and isoperiodic diagrams identifying periodic windows, denominated Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both periodic windows. We also identify Fibonacci-type series and the Golden ratio for Arnold tongues, and period multiple-of-three windows for shrimps. -- Highlights: ► The existence of noticeable periodic windows has been reported recently for several nonlinear systems. ► The periodic window distributions appear highly organized in two-parameter space. ► We characterize self-similar properties of Arnold tongues and shrimps for a two-gene model. ► We determine the period of the Arnold tongues recognizing a Fibonacci-type sequence. ► We explore self-similar features of the shrimps identifying multiple period-three structures.

  15. Improving access in gastroenterology: The single point of entry model for referrals

    Science.gov (United States)

    Novak, Kerri L; Van Zanten, Sander Veldhuyzen; Pendharkar, Sachin R

    2013-01-01

    In 2005, a group of academic gastroenterologists in Calgary (Alberta) adopted a centralized referral intake system known as central triage. This system provided a single point of entry model (SEM) for referrals rather than the traditional system of individual practitioners managing their own referrals and queues. The goal of central triage was to improve wait times and referral management. In 2008, a similar system was developed in Edmonton at the University of Alberta Hospital (Edmonton, Alberta). SEMs have subsequently been adopted by numerous subspecialties throughout Alberta. There are many benefits of SEMs including improved access and reduced wait times. Understanding and measuring complex patient flow systems is key to improving access, and centralized intake systems provide an opportunity to better understand total demand and system bottlenecks. This knowledge is particularly important for specialties such as gastroenterology (GI), in which demand exceeds supply. While it is anticipated that SEMs will reduce wait times for GI care in Canada, the lack of sufficient resources to meet the demand for GI care necessitates additional strategies. PMID:24040629

  16. Improving Access in Gastroenterology: The Single Point of Entry Model for Referrals

    Directory of Open Access Journals (Sweden)

    Kerri L Novak

    2013-01-01

    Full Text Available In 2005, a group of academic gastroenterologists in Calgary (Alberta) adopted a centralized referral intake system known as central triage. This system provided a single point of entry model (SEM) for referrals rather than the traditional system of individual practitioners managing their own referrals and queues. The goal of central triage was to improve wait times and referral management. In 2008, a similar system was developed in Edmonton at the University of Alberta Hospital (Edmonton, Alberta). SEMs have subsequently been adopted by numerous subspecialties throughout Alberta. There are many benefits of SEMs including improved access and reduced wait times. Understanding and measuring complex patient flow systems is key to improving access, and centralized intake systems provide an opportunity to better understand total demand and system bottlenecks. This knowledge is particularly important for specialties such as gastroenterology (GI), in which demand exceeds supply. While it is anticipated that SEMs will reduce wait times for GI care in Canada, the lack of sufficient resources to meet the demand for GI care necessitates additional strategies.

  17. Improving access in gastroenterology: the single point of entry model for referrals.

    Science.gov (United States)

    Novak, Kerri; Veldhuyzen Van Zanten, Sander; Pendharkar, Sachin R

    2013-11-01

    In 2005, a group of academic gastroenterologists in Calgary (Alberta) adopted a centralized referral intake system known as central triage. This system provided a single point of entry model (SEM) for referrals rather than the traditional system of individual practitioners managing their own referrals and queues. The goal of central triage was to improve wait times and referral management. In 2008, a similar system was developed in Edmonton at the University of Alberta Hospital (Edmonton, Alberta). SEMs have subsequently been adopted by numerous subspecialties throughout Alberta. There are many benefits of SEMs including improved access and reduced wait times. Understanding and measuring complex patient flow systems is key to improving access, and centralized intake systems provide an opportunity to better understand total demand and system bottlenecks. This knowledge is particularly important for specialties such as gastroenterology (GI), in which demand exceeds supply. While it is anticipated that SEMs will reduce wait times for GI care in Canada, the lack of sufficient resources to meet the demand for GI care necessitates additional strategies.

  18. A Continuous Improvement Capital Funding Model.

    Science.gov (United States)

    Adams, Matt

    2001-01-01

    Describes a capital funding model that helps assess facility renewal needs in a way that minimizes resources while maximizing results. The article explains the sub-components of a continuous improvement capital funding model, including budgeting processes for finish renewal, building performance renewal, and critical outcome. (GR)

  19. Common neighbour structure and similarity intensity in complex networks

    Science.gov (United States)

    Hou, Lei; Liu, Kecheng

    2017-10-01

    Complex systems as networks always exhibit strong regularities, implying underlying mechanisms governing their evolution. In addition to the degree preference, similarity has been argued to be another driver for networks. Assuming a network is randomly organised without similarity preference, the present paper studies the expected number of common neighbours between vertices. A symmetrical similarity index is accordingly developed by removing such expected number from the observed common neighbours. The developed index can describe not only the similarities between vertices, but also the dissimilarities. We further apply the proposed index to measure the influence of similarity on the wiring patterns of networks. Fifteen empirical networks as well as artificial networks are examined in terms of similarity intensity and degree heterogeneity. Results on real networks indicate that social networks are strongly governed by the similarity as well as the degree preference, while the biological networks and infrastructure networks show no apparent similarity governance. Particularly, classical network models, such as the Barabási-Albert model, the Erdös-Rényi model and the Ring Lattice, cannot well describe the social networks in terms of the degree heterogeneity and similarity intensity. The findings may shed some light on the modelling and link prediction of different classes of networks.
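The idea of subtracting the randomly expected number of common neighbours from the observed count can be sketched as follows. The null model used here (a configuration-model style estimate of the expectation) and the toy network are assumptions for illustration, not the authors' exact formulation.

```python
from itertools import combinations

# Toy undirected network as an adjacency-set dict (made-up example).
adj = {
    0: {1, 2, 3},
    1: {0, 2, 4},
    2: {0, 1, 3, 4},
    3: {0, 2},
    4: {1, 2},
}
M = sum(len(nbrs) for nbrs in adj.values()) // 2   # number of edges

def expected_cn(i, j):
    # Expected common neighbours if edges were wired randomly while preserving
    # degrees: node v links to i with probability ~ k_i k_v / 2M and to j with
    # probability ~ k_j k_v / 2M (an illustrative configuration-model estimate).
    ki, kj = len(adj[i]), len(adj[j])
    return sum((ki * len(adj[v]) / (2 * M)) * (kj * len(adj[v]) / (2 * M))
               for v in adj if v not in (i, j))

def similarity(i, j):
    # Positive: more shared neighbours than chance suggests (similar);
    # negative: fewer than chance (dissimilar) -- the symmetric index idea.
    observed = len(adj[i] & adj[j])
    return observed - expected_cn(i, j)

for i, j in combinations(adj, 2):
    print(i, j, round(similarity(i, j), 3))
```

In this toy graph, low-degree vertices 3 and 4 sharing a neighbour score positively, while high-degree vertices sharing only one neighbour can score negatively, which is the dissimilarity side of the index.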

  20. Similarity and self-similarity in high energy density physics: application to laboratory astrophysics

    International Nuclear Information System (INIS)

    Falize, E.

    2008-10-01

    The spectacular recent development of powerful facilities allows the astrophysical community to explore, in the laboratory, astrophysical phenomena where radiation and matter are strongly coupled. The titles of the nine chapters of the thesis are: from high energy density physics to laboratory astrophysics; Lie groups, invariance and self-similarity; scaling laws and similarity properties in High-Energy-Density physics; the Burgan-Feix-Munier transformation; dynamics of polytropic gases; stationary radiating shocks and the POLAR project; structure, dynamics and stability of optically thin fluids; from young star jets to laboratory jets; modelling and experiments for laboratory jets

  1. Can better modelling improve tokamak control?

    International Nuclear Information System (INIS)

    Lister, J.B.; Vyas, P.; Ward, D.J.; Albanese, R.; Ambrosino, G.; Ariola, M.; Villone, F.; Coutlis, A.; Limebeer, D.J.N.; Wainwright, J.P.

    1997-01-01

    The control of present-day tokamaks usually relies upon primitive modelling, and TCV is used to illustrate this. A counter-example is provided by the successful implementation of high-order SISO controllers on COMPASS-D. Suitable models of tokamaks are required to exploit the potential of modern control techniques. A physics-based MIMO model of TCV is presented and validated with experimental closed-loop responses. A system-identified open-loop model is also presented. An enhanced controller based on these models is designed and the performance improvements discussed. (author) 5 figs., 9 refs

  2. Modeling of similar economies

    Directory of Open Access Journals (Sweden)

    Sergey B. Kuznetsov

    2017-06-01

    Full Text Available Objective: to obtain dimensionless criteria – economic indices characterizing the national economy and not depending on its size. Methods: mathematical modeling, theory of dimensions, processing of statistical data. Results: basing on differential equations describing the national economy with account of the economic environment resistance, two dimensionless criteria are obtained which allow comparing economies regardless of their sizes. With the theory of dimensions we show that the obtained indices are not accidental. We demonstrate the implementation of the obtained dimensionless criteria for the analysis of the behavior of certain countries' economies. Scientific novelty: dimensionless criteria are obtained – economic indices which allow comparing economies regardless of their sizes and analyzing the dynamic changes in the economies with time. Practical significance: the obtained results can be used for dynamic and comparative analysis of different countries' economies regardless of their sizes.

  3. Propriedades termofísicas de soluções modelo similares a sucos - Parte I Thermophysical properties of model solutions similar to juice - Part I

    Directory of Open Access Journals (Sweden)

    Silvia Cristina Sobottka Rolim de Moura

    2003-04-01

    Full Text Available Propriedades termofísicas, difusividade térmica e calor específico, de soluções modelo similares a sucos, foram determinadas experimentalmente e ajustadas a modelos matemáticos (STATISTICA 6.0), em função da sua composição química. Para definição das soluções modelo foi realizado um planejamento estrela mantendo-se fixa a quantidade de ácido (1,5%) e variando-se a água (82-98,5%), o carboidrato (0-15%) e a gordura (0-1,5%). A determinação do calor específico foi realizada através do método de Hwang & Hayakawa e a difusividade térmica com base no método de Dickerson. Os resultados de cada propriedade foram analisados através de superfícies de respostas. Foram encontrados resultados significativos para as propriedades, mostrando que os modelos encontrados representam significativamente as mudanças das propriedades térmicas dos sucos, com alterações na composição e na temperatura. Thermophysical properties, thermal diffusivity and specific heat of model solutions similar to juices were experimentally determined and the values obtained compared to those predicted by mathematical models (STATISTICA 6.0) and to values mentioned in the literature, according to the chemical composition. A star design was adopted to define the composition of the model solutions, fixing the acid amount at 1.5% and varying water (82-98.5%), carbohydrate (0-15%) and fat (0-1.5%). The specific heat was determined by the Hwang & Hayakawa method and the thermal diffusivity was determined by the Dickerson method. The results of each property were analysed by the response surface method. The results were significant, indicating that the models represented considerably the changes of thermal properties of juices according to their composition and temperature variations.

  4. Improved Algorithm of SCS-CN Model Parameters in Typical Inland River Basin in Central Asia

    Science.gov (United States)

    Wang, Jin J.; Ding, Jian L.; Zhang, Zhe; Chen, Wen Q.

    2017-02-01

    Rainfall-runoff relationships are the most important factor for hydrological structures and for social and economic development against the background of global warming, especially in arid regions. The aim of this paper is to find a suitable method to simulate runoff in arid areas. The Soil Conservation Service Curve Number (SCS-CN) model is the most popular and widely applied model for direct runoff estimation. In this paper, we focus on the Wen-quan Basin in the source region of the Boertala River, a typical inland valley in Central Asia. For the first time, 16 m resolution imagery from the high-definition Earth observation satellite “Gaofen-1” is used to provide highly accurate data for the land-use classification that determines the curve number. A surface temperature/vegetation index (TS/VI) 2D scatter plot, combined with the soil moisture absorption balance principle, is used to calculate the moisture-holding capacity of the soil. The original SCS-CN model and a version with an improved parameter algorithm are each used to simulate the runoff. The simulation results show that the improved model is better than the original: the Nash-Sutcliffe efficiencies in the calibration and validation periods were 0.79 and 0.71 for the improved model against 0.66 and 0.38 for the original, and the relative errors were 3% and 12% against 17% and 27%. The simulation accuracy should be further improved, and using remote sensing information technology to improve the basic geographic data for the hydrological model has the following advantages: 1) remote sensing data have a planar characteristic and are comprehensive and representative; 2) it gets around the bottleneck of lacking data and provides a reference for simulating runoff in basins with similar conditions and in data-scarce regions.
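The SCS-CN relationship underlying the record above can be sketched with the standard curve-number equations. The sample rainfall and CN values are illustrative; the paper's contribution is recalibrating the parameters from remote-sensing data, not the base equations.

```python
def scs_cn_runoff(P, CN, ia_ratio=0.2):
    """Direct runoff depth Q (mm) from rainfall P (mm) via the standard SCS-CN
    equations. The 0.2 initial-abstraction ratio is the textbook default; an
    improved parameterisation would recalibrate CN (and possibly this ratio)."""
    S = 25400.0 / CN - 254.0        # potential maximum retention (mm)
    Ia = ia_ratio * S               # initial abstraction (mm)
    if P <= Ia:
        return 0.0                  # all rainfall absorbed before runoff starts
    return (P - Ia) ** 2 / (P - Ia + S)

print(scs_cn_runoff(50.0, 80.0))    # ~13.8 mm of direct runoff
print(scs_cn_runoff(5.0, 80.0))     # below the initial abstraction: 0.0
```

Changing the curve number, which is what a better land-use classification feeds into, shifts S and therefore the whole rainfall-runoff curve.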

  5. Propriedades termofísicas de soluções-modelo similares a sucos: parte II Thermophysical properties of model solutions similar to juice: part II

    Directory of Open Access Journals (Sweden)

    Sílvia Cristina Sobottka Rolim de Moura

    2005-09-01

    Full Text Available Propriedades termofísicas, densidade e viscosidade de soluções-modelo similares a sucos foram determinadas experimentalmente. Os resultados foram comparados aos preditos por modelos matemáticos (STATISTICA 6.0) e obtidos da literatura em função da sua composição química. Para definição das soluções-modelo, foi realizado um planejamento estrela, mantendo-se fixa a quantidade de ácido (1,5%) e variando-se a água (82-98,5%), o carboidrato (0-15%) e a gordura (0-1,5%). A densidade foi determinada em picnômetro. A viscosidade foi determinada em viscosímetro Brookfield modelo LVF. A condutividade térmica foi calculada com o conhecimento das propriedades difusividade térmica e calor específico (apresentados na Parte I deste trabalho, MOURA [7]) e da densidade. Os resultados de cada propriedade foram analisados através de superfícies de respostas. Foram encontrados resultados significativos para as propriedades, mostrando que os modelos encontrados representam as mudanças das propriedades térmicas e físicas dos sucos, com alterações na composição e na temperatura. Thermophysical properties, density and viscosity of model solutions similar to juices were experimentally determined. The results were compared to those predicted by mathematical models (STATISTICA 6.0) and to values mentioned in the literature, according to the chemical composition. A star design was adopted to define the model solutions' composition, fixing the acid amount at 1.5% and varying water (82-98.5%), carbohydrate (0-15%) and fat (0-1.5%). The density was determined by pycnometer. The viscosity was determined by a Brookfield LVF model viscosimeter. The thermal conductivity was calculated based on the thermal diffusivity and specific heat values (presented in Part I of this paper, MOURA [7]) and on density. The results of each property were analyzed by the response surface method. The results were significant, indicating that the models represent the changes of
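Response-surface fitting of a property against composition and temperature, as described in this record, can be sketched as an ordinary least-squares fit. All data values and coefficients below are invented (the paper fits its measured values with STATISTICA); the sketch solves the normal equations directly with Gaussian elimination.

```python
# Hypothetical measurements: density as a linear function of carbohydrate %,
# fat % and temperature (a real response surface would add quadratic terms).
rows = [
    # (carbohydrate %, fat %, temperature C, density g/cm3)
    (0.0,  0.0, 20.0, 0.998),
    (15.0, 0.0, 20.0, 1.058),
    (0.0,  1.5, 20.0, 1.001),
    (15.0, 1.5, 20.0, 1.061),
    (0.0,  0.0, 40.0, 0.990),
    (10.0, 1.0, 40.0, 1.032),
    (5.0,  0.5, 30.0, 1.015),
    (10.0, 0.5, 25.0, 1.037),
]

X = [(1.0, c, f, t) for c, f, t, _ in rows]   # design matrix with intercept
y = [d for *_, d in rows]
n = len(X[0])

# Normal equations: (X^T X) beta = X^T y
A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(n)] for i in range(n)]
b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(n)]

# Gaussian elimination with partial pivoting, then back-substitution.
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, n):
        m = A[r][col] / A[col][col]
        for j in range(col, n):
            A[r][j] -= m * A[col][j]
        b[r] -= m * b[col]
beta = [0.0] * n
for i in reversed(range(n)):
    beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n))) / A[i][i]

def predict(c, f, t):
    return beta[0] + beta[1] * c + beta[2] * f + beta[3] * t

print([round(v, 6) for v in beta])     # fitted coefficients
print(predict(7.5, 0.75, 30.0))        # density at an intermediate composition
```

The fitted coefficients then let the property be predicted at any composition and temperature inside the experimental design, which is exactly how the response-surface models in Parts I and II are used.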

  6. Environmental niche models for riverine desert fishes and their similarity according to phylogeny and functionality

    Science.gov (United States)

    Whitney, James E.; Whittier, Joanna B.; Paukert, Craig P.

    2017-01-01

    Environmental filtering and competitive exclusion are hypotheses frequently invoked in explaining species' environmental niches (i.e., geographic distributions). A key assumption in both hypotheses is that the functional niche (i.e., species traits) governs the environmental niche, but few studies have rigorously evaluated this assumption. Furthermore, phylogeny could be associated with these hypotheses if it is predictive of functional niche similarity via phylogenetic signal or convergent evolution, or of environmental niche similarity through phylogenetic attraction or repulsion. The objectives of this study were to investigate relationships between environmental niches, functional niches, and phylogenies of fishes of the Upper (UCRB) and Lower (LCRB) Colorado River Basins of southwestern North America. We predicted that functionally similar species would have similar environmental niches (i.e., environmental filtering) and that closely related species would be functionally similar (i.e., phylogenetic signal) and possess similar environmental niches (i.e., phylogenetic attraction). Environmental niches were quantified using environmental niche modeling, and functional similarity was determined using functional trait data. Nonnatives in the UCRB provided the only support for environmental filtering, which resulted from several warmwater nonnatives having dam number as a common predictor of their distributions, whereas several cool- and coldwater nonnatives shared mean annual air temperature as an important distributional predictor. Phylogenetic signal was supported for both natives and nonnatives in both basins. Lastly, phylogenetic attraction was only supported for native fishes in the LCRB and for nonnative fishes in the UCRB. Our results indicated that functional similarity was heavily influenced by evolutionary history, but that phylogenetic relationships and functional traits may not always predict the environmental distribution of species. However, the

  7. Protein structural similarity search by Ramachandran codes

    Directory of Open Access Journals (Sweden)

    Chang Chih-Hung

    2007-08-01

    Full Text Available Abstract Background Protein structural data have increased exponentially, so fast and accurate tools are needed for structural similarity search. To improve the search speed, several methods have been designed to reduce three-dimensional protein structures to one-dimensional text strings that are then analyzed by traditional sequence alignment methods; however, the accuracy is usually sacrificed and the speed is still unable to match sequence similarity search tools. Here, we aimed to improve the linear encoding methodology and develop efficient search tools that can rapidly retrieve structural homologs from large protein databases. Results We propose a new linear encoding method, SARST (Structural similarity search Aided by Ramachandran Sequential Transformation). SARST transforms protein structures into text strings through a Ramachandran map organized by nearest-neighbor clustering and uses a regenerative approach to produce substitution matrices. Then, classical sequence similarity search methods can be applied to the structural similarity search. Its accuracy is similar to Combinatorial Extension (CE) and it works over 243,000 times faster, searching 34,000 proteins in 0.34 sec with a 3.2-GHz CPU. SARST provides statistically meaningful expectation values to assess the retrieved information. It has been implemented into a web service and a stand-alone Java program that is able to run on many different platforms. Conclusion As a database search method, SARST can rapidly distinguish high from low similarities and efficiently retrieve homologous structures. It demonstrates that the easily accessible linear encoding methodology has the potential to serve as a foundation for efficient protein structural similarity search tools. These search tools should be applicable to automated and high-throughput functional annotations or predictions for the ever-increasing number of published protein structures in this post-genomic era.
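
    The core idea, reducing each residue's (phi, psi) backbone dihedral pair to the letter of its nearest Ramachandran-map cluster, can be sketched as below. The cluster centers and letters here are illustrative placeholders, not the actual SARST codebook (which is derived by nearest-neighbor clustering of observed angles):

```python
import math

# Hypothetical codebook: cluster centers on the Ramachandran map (phi, psi in degrees).
# The real SARST codebook and alphabet differ; these four regions are illustrative.
CENTERS = {
    "H": (-60.0, -45.0),   # alpha-helical region
    "E": (-120.0, 130.0),  # beta-sheet region
    "L": (60.0, 45.0),     # left-handed helix region
    "P": (-75.0, 150.0),   # polyproline-like region
}

def angular_dist(a, b):
    """Distance between two angles in degrees, respecting 360-degree periodicity."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def encode(dihedrals):
    """Map a list of (phi, psi) pairs to a one-letter-per-residue string."""
    out = []
    for phi, psi in dihedrals:
        letter = min(CENTERS, key=lambda k: math.hypot(
            angular_dist(phi, CENTERS[k][0]), angular_dist(psi, CENTERS[k][1])))
        out.append(letter)
    return "".join(out)

# Once structures are strings, classical sequence-similarity tools
# (BLAST, FASTA, Smith-Waterman) can be reused for structural search.
helix = encode([(-58, -47), (-62, -41), (-60, -45)])
sheet = encode([(-119, 128), (-125, 135)])
```

A real implementation would also need the regenerated substitution matrix mentioned in the abstract to score alignments of these strings.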

  8. An improved gravity model for Mars: Goddard Mars Model-1 (GMM-1)

    Science.gov (United States)

    Smith, D. E.; Lerch, F. J.; Nerem, R. S.; Zuber, M. T.; Patel, G. B.; Fricke, S. K.; Lemoine, F. G.

    1993-01-01

    Doppler tracking data of three orbiting spacecraft have been reanalyzed to develop a new gravitational field model for the planet Mars, GMM-1 (Goddard Mars Model-1). This model employs nearly all available data, consisting of approximately 1100 days of S-band tracking data collected by NASA's Deep Space Network from the Mariner 9, and Viking 1 and Viking 2 spacecraft, in seven different orbits, between 1971 and 1979. GMM-1 is complete to spherical harmonic degree and order 50, which corresponds to a half-wavelength spatial resolution of 200-300 km where the data permit. GMM-1 represents satellite orbits with considerably better accuracy than previous Mars gravity models and shows greater resolution of identifiable geological structures. The notable improvement in GMM-1 over previous models is a consequence of several factors: improved computational capabilities, the use of optimum weighting and least-squares collocation solution techniques which stabilized the behavior of the solution at high degree and order, and the use of longer satellite arcs than employed in previous solutions that were made possible by improved force and measurement models. The inclusion of X-band tracking data from the 379-km altitude, near-polar orbiting Mars Observer spacecraft should provide a significant improvement over GMM-1, particularly at high latitudes where current data poorly resolve the gravitational signature of the planet.
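
    The quoted 200-300 km resolution follows directly from the degree-50 truncation: a spherical harmonic of degree l has wavelength roughly 2*pi*R/l on a sphere of radius R, so the half-wavelength resolution is pi*R/l. A quick check (the Mars radius value is an assumption here):

```python
import math

R_MARS_KM = 3390.0  # approximate mean radius of Mars

def half_wavelength_resolution(degree, radius_km=R_MARS_KM):
    """Half-wavelength spatial resolution of a spherical harmonic expansion
    truncated at the given degree: wavelength ~ 2*pi*R/l, so half is pi*R/l."""
    return math.pi * radius_km / degree

res_km = half_wavelength_resolution(50)  # falls inside the 200-300 km quoted above
```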

  9. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation

    Science.gov (United States)

    Quan, Lulin; Yang, Zhixin

    2010-05-01

    To address issues in the area of design customization, this paper presents the specification and application of constrained surface deformation, and reports an experimental performance comparison of three prevailing similarity assessment algorithms in the constrained surface deformation domain. Constrained surface deformation is a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design: it measures the level of difference between the deformed new design and the initial sample model and indicates whether that difference is within the allowed limit. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape histogram based method, the skeleton based method, and the U-system moment based method. We analyze their basic functions and implementation methodologies in detail, and conduct a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as an industrial example for the experiments. The results show that the shape histogram based method achieved the best performance in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, which emphasizes the role of the constraints during the assessment. Limited initial experimental results demonstrate that this approach outperforms the other three algorithms. A clear direction for future development is drawn at the end of the paper.
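
    A minimal sketch of the shape histogram idea, assuming the common radial variant: bin vertex distances from the model's centroid, then compare two models by a histogram distance. The binning scheme and L1 metric are illustrative choices, not the paper's exact formulation:

```python
import math

def shape_histogram(points, bins=8):
    """Radial shape histogram: normalized counts of vertex distances from the
    centroid, binned after scaling by the maximum distance (scale-invariant)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    dists = [math.dist(p, (cx, cy, cz)) for p in points]
    dmax = max(dists) or 1.0
    hist = [0.0] * bins
    for d in dists:
        hist[min(int(d / dmax * bins), bins - 1)] += 1.0 / n
    return hist

def histogram_distance(h1, h2):
    """L1 distance between two shape histograms; 0 means identical."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

# A cube and a uniformly scaled copy yield identical normalized histograms.
unit_cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
scaled_cube = [(2 * x, 2 * y, 2 * z) for x, y, z in unit_cube]
dist_cubes = histogram_distance(shape_histogram(unit_cube), shape_histogram(scaled_cube))
```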

  10. Improving Agent Based Modeling of Critical Incidents

    Directory of Open Access Journals (Sweden)

    Robert Till

    2010-04-01

    Full Text Available Agent Based Modeling (ABM) is a powerful method that has been used to simulate potential critical incidents in infrastructure and built environments. This paper discusses the modeling of some critical incidents currently simulated using ABM and how they may be expanded and improved by using better physiological modeling, psychological modeling, modeling the actions of interveners, introducing Geographic Information Systems (GIS), and open-source models.
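
    The flavor of such a simulation can be sketched with a toy evacuation model, where each agent's walking speed stands in for the physiological modeling discussed above. Everything here (class names, parameter ranges) is illustrative, not taken from the paper:

```python
import random

class Agent:
    """Minimal evacuee agent; walking speed is a crude stand-in for
    physiological modeling (illustrative values only)."""
    def __init__(self, position, speed):
        self.position = position  # metres from the exit
        self.speed = speed        # metres per timestep

def evacuate(agents, max_steps=1000):
    """Step all agents toward the exit at position 0; return steps taken
    until everyone has reached it (or max_steps)."""
    for step in range(1, max_steps + 1):
        for a in agents:
            a.position = max(0.0, a.position - a.speed)
        if all(a.position == 0.0 for a in agents):
            return step
    return max_steps

random.seed(42)
crowd = [Agent(random.uniform(5, 50), random.uniform(0.8, 1.6)) for _ in range(20)]
steps = evacuate(crowd)
```

Richer models layer congestion, psychology, intervener actions, and GIS-derived geometry onto this same step-the-agents loop.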

  11. INTEGRATED COST MODEL FOR IMPROVING THE PRODUCTION IN COMPANIES

    Directory of Open Access Journals (Sweden)

    Zuzana Hajduova

    2014-12-01

    Full Text Available Purpose: All processes in the company play an important role in ensuring a functional integrated management system. We point out the need for a systematic approach to the use of quantitative, and especially statistical, methods for modelling the costs of the improvement activities that are part of an integrated management system. The development of integrated management systems worldwide leads towards building systematic procedures for the implementation, maintenance, and improvement of all systems according to the requirements of all the parties involved. Methodology: Statistical evaluation of the economic indicators of improvement costs, and the need for a systematic approach to their management in terms of integrated management systems, also plays a key role in the management of processes in the company Cu Drôt, a.s. The aim of this publication is to highlight the importance of proper implementation of statistical methods in improvement-cost management in an integrated management system under current market conditions, and to document the legitimacy of a systematic approach to monitoring and analysing indicators of improvement with the aim of efficient process management of the company. We provide a specific example of the implementation of appropriate statistical methods in the production of copper wire in the company Cu Drôt, a.s. This publication also aims to create a model for the estimation of integrated improvement costs, which, through the use of statistical methods, is used in the company Cu Drôt, a.s. to support decision-making on improving efficiency. Findings: In the present publication, a method for modelling the improvement process in an integrated manner is proposed, in which the basic attributes of improvement in quality, safety and environment are considered and synergistically combined in the same improvement project. The work examines the use of sophisticated quantitative, especially statistical, methods.

  12. Vertex labeling and routing in self-similar outerplanar unclustered graphs modeling complex networks

    International Nuclear Information System (INIS)

    Comellas, Francesc; Miralles, Alicia

    2009-01-01

    This paper introduces a labeling and optimal routing algorithm for a family of modular, self-similar, small-world graphs with clustering zero. Many properties of this family are comparable to those of networks associated with technological and biological systems with low clustering, such as the power grid, some electronic circuits and protein networks. For these systems, the existence of models with an efficient routing protocol is of interest to design practical communication algorithms in relation to dynamical processes (including synchronization) and also to understand the underlying mechanisms that have shaped their particular structure.

  13. Ionosonde-based indices for improved representation of solar cycle variation in the International Reference Ionosphere model

    Science.gov (United States)

    Brown, Steven; Bilitza, Dieter; Yiǧit, Erdal

    2018-06-01

    A new monthly ionospheric index, IGNS, is presented to improve the representation of the solar cycle variation of the ionospheric F2 peak plasma frequency, foF2. IGNS is calculated using a methodology similar to the construction of the "global effective sunspot number", IG, given by Liu et al. (1983) but selects ionosonde observations based on hemispheres. We incorporated the updated index into the International Reference Ionosphere (IRI) model and compared the foF2 model predictions with global ionospheric observations. We also investigated the influence of the underlying foF2 model on the IG index. IRI has two options for foF2 specification, the CCIR-66 and URSI-88 foF2 models. For the first time, we have calculated IG using URSI-88 and assessed the impact on model predictions. Through a retrospective model-data comparison, results show that the inclusion of the new monthly IGNS index in place of the current 12-month smoothed IG index reduces the foF2 model prediction errors by nearly a factor of two. These results apply to both daytime and nighttime predictions. This is due to an overall improved prediction of foF2 seasonal and solar cycle variations in the different hemispheres.
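
    An effective-index construction of this kind can be sketched as a one-dimensional search: pick the driver value whose model foF2 best fits the month's ionosonde observations. The station model below is a toy linear stand-in, not the CCIR/URSI formulation:

```python
def effective_index(observed_fof2, model_fof2, candidates):
    """Grid-search the solar-index value whose modeled foF2 best matches the
    monthly ionosonde observations (the essence of an 'effective sunspot
    number' construction). model_fof2(r) returns predicted foF2 per station."""
    def misfit(r):
        pred = model_fof2(r)
        return sum((o - p) ** 2 for o, p in zip(observed_fof2, pred))
    return min(candidates, key=misfit)

# Toy model: foF2 at each station grows linearly with the index (illustrative).
rates = [0.05, 0.04, 0.06]   # MHz per index unit, per station
base = [6.0, 5.5, 6.5]       # MHz at index 0, per station
model = lambda r: [b + k * r for b, k in zip(base, rates)]

obs = model(80)  # synthetic observations whose true effective index is 80
ig = effective_index(obs, model, candidates=range(0, 201))
```

Restricting `observed_fof2` to stations in one hemisphere, as the abstract describes, yields the hemispheric IGNS variant.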

  14. Improved Formulations for Air-Surface Exchanges Related to National Security Needs: Dry Deposition Models

    Energy Technology Data Exchange (ETDEWEB)

    Droppo, James G.

    2006-07-01

    The Department of Homeland Security and others rely on results from atmospheric dispersion models for threat evaluation, event management, and post-event analyses. The ability to simulate dry deposition rates is a crucial part of our emergency preparedness capabilities. Deposited materials pose potential hazards from radioactive shine, inhalation, and ingestion pathways. A reliable characterization of these potential exposures is critical for management and mitigation of these hazards. A review of the current status of dry deposition formulations used in these atmospheric dispersion models was conducted. The formulations for dry deposition of particulate materials from an event such as a radiological attack involving a Radiological Dispersal Device (RDD) are considered. The results of this effort are applicable to current emergency preparedness capabilities such as are deployed in the Interagency Modeling and Atmospheric Assessment Center (IMAAC), other similar national/regional emergency response systems, and standalone emergency response models. The review concludes that dry deposition formulations need to consider the full range of particle sizes including: 1) the accumulation mode range (0.1 to 1 micron diameter) and its minimum in deposition velocity, 2) smaller particles (less than 0.01 micron diameter) deposited mainly by molecular diffusion, 3) 10 to 50 micron diameter particles deposited mainly by impaction and gravitational settling, and 4) larger particles (greater than 100 micron diameter) deposited mainly by gravitational settling. The effects of the local turbulence intensity, particle characteristics, and surface element properties must also be addressed in the formulations. Specific areas for improvement in the dry deposition formulations are 1) the capability of simulating near-field dry deposition patterns, 2) the capability of addressing the full range of potential particle properties, and 3) the incorporation of particle surface retention/rebound processes.
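
    The gravitational-settling contribution that dominates for the larger size classes can be sketched with Stokes' law. This covers only one term of the full deposition-velocity curve; the diffusion and impaction terms that produce the accumulation-mode minimum are omitted, and the unit particle density is an assumption:

```python
def stokes_settling_velocity(d_m, rho_p=1000.0, mu=1.8e-5, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in air by Stokes'
    law: v = rho_p * g * d^2 / (18 * mu). Valid only at low particle Reynolds
    number (roughly the 1-50 micron range); larger particles need a drag
    correction, and sub-micron particles are dominated by diffusion instead."""
    return rho_p * g * d_m ** 2 / (18.0 * mu)

v_1um = stokes_settling_velocity(1e-6)    # ~3e-5 m/s: settling is negligible
v_50um = stokes_settling_velocity(50e-6)  # ~8e-2 m/s: settling dominates
```

The quadratic dependence on diameter is why the review separates the size classes: a 50x change in diameter changes the settling term by a factor of 2500.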

  15. Similarity-based pattern analysis and recognition

    CERN Document Server

    Pelillo, Marcello

    2013-01-01

    This accessible text/reference presents a coherent overview of the emerging field of non-Euclidean similarity learning. The book presents a broad range of perspectives on similarity-based pattern analysis and recognition methods, from purely theoretical challenges to practical, real-world applications. The coverage includes both supervised and unsupervised learning paradigms, as well as generative and discriminative models. Topics and features: explores the origination and causes of non-Euclidean (dis)similarity measures, and how they influence the performance of traditional classification algorithms.

  16. Crop Model Improvement Reduces the Uncertainty of the Response to Temperature of Multi-Model Ensembles

    Science.gov (United States)

    Maiorano, Andrea; Martre, Pierre; Asseng, Senthold; Ewert, Frank; Mueller, Christoph; Roetter, Reimund P.; Ruane, Alex C.; Semenov, Mikhail A.; Wallach, Daniel; Wang, Enli

    2016-01-01

    To improve climate change impact estimates and to quantify their uncertainty, multi-model ensembles (MMEs) have been suggested. Model improvements can improve the accuracy of simulations and reduce the uncertainty of climate change impact assessments. Furthermore, they can reduce the number of models needed in an MME. Herein, 15 wheat growth models of a larger MME were improved through re-parameterization and/or incorporating or modifying heat stress effects on phenology, leaf growth and senescence, biomass growth, and grain number and size using detailed field experimental data from the USDA Hot Serial Cereal experiment (calibration data set). Simulation results from before and after model improvement were then evaluated with independent field experiments from a CIMMYT worldwide field trial network (evaluation data set). Model improvements decreased the variation (10th to 90th model ensemble percentile range) of grain yields simulated by the MME on average by 39% in the calibration data set and by 26% in the independent evaluation data set for crops grown in mean seasonal temperatures greater than 24 °C. MME mean squared error in simulating grain yield decreased by 37%. A reduction in MME uncertainty range by 27% increased MME prediction skills by 47%. Results suggest that the mean level of variation observed in field experiments and used as a benchmark can be reached with half the number of models in the MME. Improving crop models is therefore important to increase the certainty of model-based impact assessments and allow more practical, i.e., smaller, MMEs to be used effectively.
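
    The uncertainty measure used here, the 10th-90th percentile range of the ensemble's simulated yields, is straightforward to compute. The yield values below are made up purely to illustrate the "range shrinks after improvement" calculation:

```python
def percentile(sorted_vals, p):
    """Linear-interpolated percentile (p in [0, 100]) of a sorted list."""
    if len(sorted_vals) == 1:
        return sorted_vals[0]
    k = (len(sorted_vals) - 1) * p / 100.0
    i = int(k)
    frac = k - i
    if i + 1 < len(sorted_vals):
        return sorted_vals[i] * (1 - frac) + sorted_vals[i + 1] * frac
    return sorted_vals[i]

def ensemble_range(yields):
    """10th-90th percentile range of the models' simulated yields,
    the MME uncertainty measure described above."""
    s = sorted(yields)
    return percentile(s, 90) - percentile(s, 10)

# Synthetic grain yields (t/ha) from a 10-model ensemble, before/after improvement.
before = [3.1, 4.5, 2.2, 5.0, 3.8, 4.1, 2.9, 4.8, 3.3, 4.0]
after = [3.6, 4.2, 3.4, 4.4, 3.9, 4.0, 3.7, 4.3, 3.8, 4.1]
reduction = 1 - ensemble_range(after) / ensemble_range(before)
```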

  17. Improved Dynamic Modeling of the Cascade Distillation Subsystem and Integration with Models of Other Water Recovery Subsystems

    Science.gov (United States)

    Perry, Bruce; Anderson, Molly

    2015-01-01

    The Cascade Distillation Subsystem (CDS) is a rotary multistage distiller being developed to serve as the primary processor for wastewater recovery during long-duration space missions. The CDS could be integrated with a system similar to the International Space Station (ISS) Water Processor Assembly (WPA) to form a complete Water Recovery System (WRS) for future missions. Independent chemical process simulations with varying levels of detail have previously been developed using Aspen Custom Modeler (ACM) to aid in the analysis of the CDS and several WPA components. The existing CDS simulation could not model behavior during thermal startup and lacked detailed analysis of several key internal processes, including heat transfer between stages. The first part of this paper describes modifications to the ACM model of the CDS that improve its capabilities and the accuracy of its predictions. Notably, the modified version of the model can accurately predict behavior during thermal startup for both NaCl solution and pretreated urine feeds. The model is used to predict how changing operating parameters and design features of the CDS affects its performance, and conclusions from these predictions are discussed. The second part of this paper describes the integration of the modified CDS model and the existing WPA component models into a single WRS model. The integrated model is used to demonstrate the effects that changes to one component can have on the dynamic behavior of the system as a whole.

  18. BLAST and FASTA similarity searching for multiple sequence alignment.

    Science.gov (United States)

    Pearson, William R

    2014-01-01

    BLAST, FASTA, and other similarity searching programs seek to identify homologous proteins and DNA sequences based on excess sequence similarity. If two sequences share much more similarity than expected by chance, the simplest explanation for the excess similarity is common ancestry-homology. The most effective similarity searches compare protein sequences, rather than DNA sequences, for sequences that encode proteins, and use expectation values, rather than percent identity, to infer homology. The BLAST and FASTA packages of sequence comparison programs provide programs for comparing protein and DNA sequences to protein databases (the most sensitive searches). Protein and translated-DNA comparisons to protein databases routinely allow evolutionary look-back times from 1 to 2 billion years; DNA:DNA searches are 5-10-fold less sensitive. BLAST and FASTA can be run on popular web sites, but can also be downloaded and installed on local computers. With local installation, target databases can be customized for the sequence data being characterized. With today's very large protein databases, search sensitivity can also be improved by searching smaller comprehensive databases, for example, a complete protein set from an evolutionarily neighboring model organism. By default, BLAST and FASTA use scoring strategies targeted at distant evolutionary relationships; for comparisons involving short domains or queries, or searches that seek relatively close homologs (e.g., mouse-human), shallower scoring matrices will be more effective. Both BLAST and FASTA provide very accurate statistical estimates, which can be used to reliably identify protein sequences that diverged more than 2 billion years ago.
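
    The expectation values at the heart of this inference follow the Karlin-Altschul form E = K·m·n·exp(-lambda·S). The lambda and K values below are representative constants for illustration; real programs estimate them from the actual scoring matrix and sequence composition:

```python
import math

def expect_value(score, query_len, db_len, lam=0.267, K=0.041):
    """Karlin-Altschul expectation for a local alignment score S:
    E = K * m * n * exp(-lambda * S), the number of alignments with score
    >= S expected by chance. lam and K here are representative values only;
    BLAST/FASTA derive them from the scoring system in use."""
    return K * query_len * db_len * math.exp(-lam * score)

# A 300-residue query against a 10^8-residue database:
e_high = expect_value(100, 300, 1e8)  # strong score -> E << 1, likely homology
e_low = expect_value(30, 300, 1e8)    # weak score -> E >> 1, expected by chance
```

This is why E-values, unlike percent identity, account for query and database size: doubling the database doubles E for the same score.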

  19. An electrophysiological signature of summed similarity in visual working memory.

    Science.gov (United States)

    van Vugt, Marieke K; Sekuler, Robert; Wilson, Hugh R; Kahana, Michael J

    2013-05-01

    Summed-similarity models of short-term item recognition posit that participants base their judgments of an item's prior occurrence on that item's summed similarity to the ensemble of items on the remembered list. We examined the neural predictions of these models in 3 short-term recognition memory experiments using electrocorticographic/depth electrode recordings and scalp electroencephalography. On each experimental trial, participants judged whether a test face had been among a small set of recently studied faces. Consistent with summed-similarity theory, participants' tendency to endorse a test item increased as a function of its summed similarity to the items on the just-studied list. To characterize this behavioral effect of summed similarity, we successfully fit a summed-similarity model to individual participant data from each experiment. Using the parameters determined from fitting the summed-similarity model to the behavioral data, we examined the relation between summed similarity and brain activity. We found that 4-9 Hz theta activity in the medial temporal lobe and 2-4 Hz delta activity recorded from frontal and parietal cortices increased with summed similarity. These findings demonstrate direct neural correlates of the similarity computations that form the foundation of several major cognitive theories of human recognition memory.
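
    The decision rule such models posit can be sketched in a one-dimensional feature space, with similarity falling off exponentially with distance. The 1-D space, exponential kernel, and parameter values are illustrative stand-ins for the multidimensional face space fit in the study:

```python
import math

def summed_similarity(probe, study_list, tau=1.0):
    """Summed similarity of a probe to all studied items, with similarity
    decaying exponentially with distance in a toy 1-D feature space."""
    return sum(math.exp(-tau * abs(probe - item)) for item in study_list)

def endorse(probe, study_list, criterion=1.2):
    """Respond 'old' if summed similarity exceeds the decision criterion."""
    return summed_similarity(probe, study_list) > criterion

studied = [0.2, 0.5, 0.9]
old_response = endorse(0.5, studied)  # probe matches a studied item
new_response = endorse(5.0, studied)  # probe far from every studied item
```

Fitting `tau` and `criterion` to each participant's responses, as the authors did, is what lets trial-by-trial summed similarity be correlated with theta/delta power.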

  20. Improved SPICE electrical model of silicon photomultipliers

    Energy Technology Data Exchange (ETDEWEB)

    Marano, D., E-mail: davide.marano@oact.inaf.it [INAF, Osservatorio Astrofisico di Catania, Via S. Sofia 78, I-95123 Catania (Italy); Bonanno, G.; Belluso, M.; Billotta, S.; Grillo, A.; Garozzo, S.; Romeo, G. [INAF, Osservatorio Astrofisico di Catania, Via S. Sofia 78, I-95123 Catania (Italy); Catalano, O.; La Rosa, G.; Sottile, G.; Impiombato, D.; Giarrusso, S. [INAF, Istituto di Astrofisica Spaziale e Fisica Cosmica di Palermo, Via U. La Malfa 153, I-90146 Palermo (Italy)

    2013-10-21

    The present work introduces an improved SPICE equivalent electrical model of silicon photomultiplier (SiPM) detectors, in order to simulate and predict their transient response to avalanche triggering events. In particular, the developed circuit model provides a careful investigation of the magnitude and timing of the read-out signals and can therefore be exploited to perform reliable circuit-level simulations. The adopted modeling approach is strictly related to the physics of each basic microcell constituting the SiPM device, and allows the avalanche timing as well as the photodiode current and voltage to be accurately simulated. Predictive capabilities of the proposed model are demonstrated by means of experimental measurements on a real SiPM detector. Simulated and measured pulses are found to be in good agreement with the expected results. -- Highlights: • An improved SPICE electrical model of silicon photomultipliers is proposed. • The developed model provides a truthful representation of the physics of the device. • An accurate charge collection as a function of the overvoltage is achieved. • The adopted electrical model allows reliable circuit-level simulations to be performed. • Predictive capabilities of the adopted model are experimentally demonstrated.

  1. Improvements in ECN Wake Model

    Energy Technology Data Exchange (ETDEWEB)

    Versteeg, M.C. [University of Twente, Enschede (Netherlands); Ozdemir, H.; Brand, A.J. [ECN Wind Energy, Petten (Netherlands)

    2013-08-15

    Wind turbines extract energy from the flow field, so the flow in the wake of a wind turbine contains less energy and more turbulence than the undisturbed flow, leading to less energy extraction by the downstream turbines. In large wind farms, most turbines are located in the wake of one or more turbines, causing the flow characteristics experienced by these turbines to differ considerably from the free-stream flow conditions. The most important wake effect is generally considered to be the lower wind speed behind the turbine(s), since this decreases the energy production and thus the economic performance of a wind farm. The overall loss of a wind farm depends strongly on the conditions and the layout of the farm, but it can be on the order of 5-10%. Apart from the loss in energy production, an additional wake effect is the increase in turbulence intensity, which leads to higher fatigue loads. In this sense it becomes important to understand the details of wake behavior to improve and/or optimize a wind farm layout. Within this study improvements are presented for the existing ECN wake model, which forms the basis of ECN's FarmFlow wind farm wake simulation tool. The outline of this paper is as follows: first, the governing equations of the ECN wake farm model are presented. Then the near-wake modeling is discussed; the results are compared with the original near-wake modeling and with EWTW (ECN Wind Turbine Test Site Wieringermeer) data, and the results obtained for various near-wake implementation cases are shown. The details of the atmospheric stability model are given, and the comparison with the solution obtained for the original surface-layer model and with the available EWTW measurement data is presented. Finally, the conclusions are summarized.
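
    The wake wind-speed deficit being discussed can be illustrated with the classic Jensen (Park) model, a much simpler formulation than ECN's FarmFlow, used here only to show how the downstream speed loss is quantified:

```python
import math

def jensen_wake_speed(u0, ct, d_rotor, x, k=0.05):
    """Wind speed at distance x downstream per the Jensen (Park) wake model:
    the wake expands linearly with decay constant k, and
    u(x) = u0 * (1 - (1 - sqrt(1 - Ct)) * (D / (D + 2*k*x))**2).
    This is NOT the ECN wake model; it is a standard textbook wake model
    shown for illustration."""
    a = 1.0 - math.sqrt(1.0 - ct)  # initial velocity deficit fraction
    return u0 * (1.0 - a * (d_rotor / (d_rotor + 2.0 * k * x)) ** 2)

# 10 m/s free stream, thrust coefficient 0.8, 80 m rotor, 5 diameters downstream:
u_5d = jensen_wake_speed(u0=10.0, ct=0.8, d_rotor=80.0, x=400.0)
u_10d = jensen_wake_speed(u0=10.0, ct=0.8, d_rotor=80.0, x=800.0)
```

The deficit decays with distance (u_10d > u_5d), which is why turbine spacing dominates the 5-10% farm-level loss quoted above.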

  2. Improving Catastrophe Modeling for Business Interruption Insurance Needs.

    Science.gov (United States)

    Rose, Adam; Huyck, Charles K

    2016-10-01

    While catastrophe (CAT) modeling of property damage is well developed, modeling of business interruption (BI) lags far behind. One reason is the crude nature of functional relationships in CAT models that translate property damage into BI. Another is that estimating BI losses is more complicated because it depends greatly on public and private decisions during recovery with respect to resilience tactics that dampen losses by using remaining resources more efficiently to maintain business function and to recover more quickly. This article proposes a framework for improving hazard loss estimation for BI insurance needs. Improved data collection that allows for analysis at the level of individual facilities within a company can improve matching the facilities with the effectiveness of individual forms of resilience, such as accessing inventories, relocating operations, and accelerating repair, and can therefore improve estimation accuracy. We then illustrate the difference this can make in a case study example of losses from a hurricane. © 2016 Society for Risk Analysis.

  3. Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling

    Science.gov (United States)

    Saksena, S.; Dey, S.; Merwade, V.

    2016-12-01

    Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs are complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large-scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and a 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Similarly, the impact of river bathymetry incorporation is more significant in the 2D model as compared to the 1D model.
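
    A minimal sketch of conceptual bathymetry estimation: assign each cross-section a parabolic bed with the deepest point (thalweg) at the centre. This is a crude stand-in for the RCMM-style channel geometry, not the paper's actual algorithm, and the dimensions are made up:

```python
def parabolic_bed(width, thalweg_depth, n_points=11):
    """Bed elevations (relative to the bank, in metres) across a channel
    cross-section, assuming a parabolic shape deepest at the centre."""
    half = width / 2.0
    xs = [i * width / (n_points - 1) for i in range(n_points)]
    return [-thalweg_depth * (1 - ((x - half) / half) ** 2) for x in xs]

# A 100 m wide channel with a 4 m thalweg depth:
bed = parabolic_bed(width=100.0, thalweg_depth=4.0)
```

Burning such estimated cross-sections into the DEM along the stream network is what turns a bare-earth DEM into the "3D stream network" used by the 1D and 2D models.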

  4. Improved hydrogen combustion model for multi-compartment analysis

    International Nuclear Information System (INIS)

    Ogino, Masao; Hashimoto, Takashi

    2000-01-01

    NUPEC has been improving a hydrogen combustion model in the MELCOR code for severe accident analysis. In the proposed combustion model, the flame velocity in a node was predicted using six different flame-front shapes: fireball, prism, bubble, spherical jet, plane jet, and parallelepiped. A verification study of the proposed model was carried out using the NUPEC large-scale combustion test results, following the previous work in which the GRS/Battelle multi-compartment combustion test results had been used. The selected test cases for the study were the premixed test and the scenario-oriented test, which simulated the severe accident sequences of an actual plant. The MELCOR code incorporating the proposed model adequately predicted the results of both the premixed and scenario-oriented NUPEC large-scale tests. The improved MELCOR code was confirmed to simulate the combustion behavior in the multi-compartment containment vessel during a severe accident with an acceptable degree of accuracy. Application of the new model to LWR severe accident analysis will continue. (author)

  5. Molecular structure based property modeling: Development/ improvement of property models through a systematic property-data-model analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol Shivajirao; Sarup, Bent; Sin, Gürkan

    2013-01-01

    To make the property-data-model analysis fast and efficient, an approach based on "molecular structure similarity criteria" is employed to identify molecules (mono-functional, bi-functional, etc.) containing a specified set of structural parameters (that is, groups).

  6. Improvements on Semi-Classical Distorted-Wave model

    Energy Technology Data Exchange (ETDEWEB)

    Sun Weili; Watanabe, Y.; Kuwata, R. [Kyushu Univ., Fukuoka (Japan); Kohno, M.; Ogata, K.; Kawai, M.

    1998-03-01

    A method of improving the Semi-Classical Distorted Wave (SCDW) model in terms of the Wigner transform of the one-body density matrix is presented. The finite-size effect of atomic nuclei can be taken into account by using single-particle wave functions for a harmonic oscillator or Woods-Saxon potential, instead of those based on the local Fermi-gas model that were incorporated into the previous SCDW model. We carried out a preliminary SCDW calculation of the 160 MeV (p,p'x) reaction on {sup 90}Zr with the Wigner transform of harmonic-oscillator wave functions. The present calculated angular distributions increase markedly at backward angles relative to the previous ones, and the agreement with the experimental data is improved. (author)

  7. Application of data assimilation to improve the forecasting capability of an atmospheric dispersion model for a radioactive plume

    International Nuclear Information System (INIS)

    Jeong, H.J.; Han, M.H.; Hwang, W.T.; Kim, E.H.

    2008-01-01

    Modeling the atmospheric dispersion of a radioactive plume plays an influential role in assessing the environmental impacts caused by nuclear accidents. The performance of data assimilation techniques that combine Gaussian model outputs with measurements to improve forecasting ability is investigated in this study. Tracer dispersion experiments were performed to produce field data by assuming a radiological emergency. An adaptive neuro-fuzzy inference system (ANFIS) and a linear regression filter are considered for assimilating the Gaussian model outputs with measurements. The ANFIS is trained so that the model outputs become more accurate for the experimental data; the linear regression filter is designed to assimilate measurements in a similar manner. It is confirmed that ANFIS could be an appropriate method for improving the forecasting capability of an atmospheric dispersion model in a radiological emergency, judging from its higher correlation coefficients between measured and assimilated values compared with the linear regression filter. This kind of data assimilation method could support a decision-making system when choosing the best available countermeasures for public health from among emergency preparedness alternatives.
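
    The linear-regression baseline can be sketched directly: fit measured = a·model + b on past (model, measurement) pairs, then use the fitted line to correct fresh dispersion-model predictions. The toy data below are invented to illustrate correcting a systematic underprediction:

```python
def fit_linear(model_vals, measured_vals):
    """Least-squares fit of measured = a * model + b: a simple linear
    regression filter for assimilating model output with measurements."""
    n = len(model_vals)
    mx = sum(model_vals) / n
    my = sum(measured_vals) / n
    sxx = sum((x - mx) ** 2 for x in model_vals)
    sxy = sum((x - mx) * (y - my) for x, y in zip(model_vals, measured_vals))
    a = sxy / sxx
    return a, my - a * mx

def assimilate(model_val, a, b):
    """Correct a fresh Gaussian-model prediction with the fitted filter."""
    return a * model_val + b

# Toy tracer data: the dispersion model underpredicts by roughly a factor of 2.
model_out = [1.0, 2.0, 3.0, 4.0]
measured = [2.1, 3.9, 6.2, 7.8]
a, b = fit_linear(model_out, measured)
corrected = assimilate(2.5, a, b)
```

An ANFIS plays the same role but learns a nonlinear correction, which is why it can outperform this linear filter.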

  8. Initial virtual flight test for a dynamically similar aircraft model with control augmentation system

    Directory of Open Access Journals (Sweden)

    Linliang Guo

    2017-04-01

    Full Text Available To satisfy the validation requirements of flight control laws for advanced aircraft, wind-tunnel-based virtual flight testing has been implemented in a low-speed wind tunnel. A 3-degree-of-freedom gimbal, ventrally installed in the model, was used in conjunction with an actively controlled, dynamically similar model of the aircraft, which was equipped with an inertial measurement unit, an attitude and heading reference system, an embedded computer and servo-actuators. The model, which could be rotated around its center of gravity freely by the aerodynamic moments, together with the flow field, the operator and the real-time control system made up the closed-loop testing circuit. The model is statically unstable in the longitudinal direction, but it can fly stably in the wind tunnel with the control augmentation function of the flight control laws. The experimental results indicate that the model responds well to the operator's instructions. The response of the model in the tests shows reasonable agreement with the simulation results; the difference in the response of the angle of attack is less than 0.5°. The effect of the stability augmentation and attitude control laws was validated in the test; meanwhile, the feasibility of the virtual flight test technique, treated as a preliminary evaluation tool for advanced flight vehicle configuration research, was also verified.

  9. Fuzzy Similarity and Fuzzy Inclusion Measures in Polyline Matching: A Case Study of Potential Streams Identification for Archaeological Modelling in GIS

    Science.gov (United States)

    Ďuračiová, Renata; Rášová, Alexandra; Lieskovský, Tibor

    2017-12-01

    When combining spatial data from various sources, it is often important to determine the similarity or identity of spatial objects. Besides the differences in geometry, representations of spatial objects are inevitably more or less uncertain. Fuzzy set theory can be used both to model the uncertainty of spatial objects and to determine the identity, similarity, and inclusion of two sets as fuzzy identity, fuzzy similarity, and fuzzy inclusion. In this paper, we propose to use fuzzy measures to determine the similarity or identity of two uncertain spatial object representations in geographic information systems. Labelling the spatial objects by the degree of their similarity or inclusion measure makes the process of their identification more efficient. It reduces the need for manual control and leads to a simpler process of updating spatial datasets from external data sources. We use this approach to obtain an accurate and correct representation of historical streams, which is derived from a contemporary digital elevation model, i.e. we identify the segments that are similar to the streams depicted on historical maps.
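
    As a minimal illustration of the fuzzy measures named in this record, the sketch below computes fuzzy similarity and fuzzy inclusion for two small fuzzy sets using the standard min/max forms of intersection and union; the membership grades are hypothetical, not taken from the paper's GIS data.

    ```python
    import numpy as np

    # Hypothetical membership grades of two uncertain spatial objects
    # (in the paper these would come from buffered polyline geometries).
    A = np.array([0.9, 0.7, 0.4, 0.0, 0.8])
    B = np.array([0.8, 0.6, 0.5, 0.1, 0.7])

    def fuzzy_similarity(a, b):
        # |A ∩ B| / |A ∪ B|, with min as intersection and max as union.
        return np.minimum(a, b).sum() / np.maximum(a, b).sum()

    def fuzzy_inclusion(a, b):
        # Degree to which A is contained in B: |A ∩ B| / |A|.
        return np.minimum(a, b).sum() / a.sum()

    print(round(fuzzy_similarity(A, B), 3))  # 2.5 / 3.0
    print(round(fuzzy_inclusion(A, B), 3))   # 2.5 / 2.8
    ```

    Objects whose similarity or inclusion degree exceeds a chosen threshold can then be labelled as matches, which is the labelling step the abstract describes.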

  10. Fuzzy Similarity and Fuzzy Inclusion Measures in Polyline Matching: A Case Study of Potential Streams Identification for Archaeological Modelling in GIS

    Directory of Open Access Journals (Sweden)

    Ďuračiová Renata

    2017-12-01

    Full Text Available When combining spatial data from various sources, it is often important to determine the similarity or identity of spatial objects. Besides the differences in geometry, representations of spatial objects are inevitably more or less uncertain. Fuzzy set theory can be used both to model the uncertainty of spatial objects and to determine the identity, similarity, and inclusion of two sets as fuzzy identity, fuzzy similarity, and fuzzy inclusion. In this paper, we propose to use fuzzy measures to determine the similarity or identity of two uncertain spatial object representations in geographic information systems. Labelling the spatial objects by the degree of their similarity or inclusion measure makes the process of their identification more efficient. It reduces the need for manual control and leads to a simpler process of updating spatial datasets from external data sources. We use this approach to obtain an accurate and correct representation of historical streams, which is derived from a contemporary digital elevation model, i.e. we identify the segments that are similar to the streams depicted on historical maps.

  11. Improved double Q2 rescaling model

    International Nuclear Information System (INIS)

    Gao Yonghua

    2001-01-01

    The authors present an improved double Q2 rescaling model. Based on the condition of nuclear momentum conservation, the authors have found a formula for the Q2 rescaling parameters of the model, which establishes the connection between the Q2 rescaling parameter ζi (i = v, s, g) and the mean binding energy in the nucleus. By using this model, the authors could explain the experimental data of the EMC effect in the whole x region, the nuclear Drell-Yan process and the J/Ψ photoproduction process.

  12. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement.

    Science.gov (United States)

    Wu, Alex; Song, Youhong; van Oosterom, Erik J; Hammer, Graeme L

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation.

  13. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement

    Science.gov (United States)

    Wu, Alex; Song, Youhong; van Oosterom, Erik J.; Hammer, Graeme L.

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation. PMID:27790232

  14. Improvement and Validation of Weld Residual Stress Modelling Procedure

    International Nuclear Information System (INIS)

    Zang, Weilin; Gunnars, Jens; Dong, Pingsha; Hong, Jeong K.

    2009-06-01

    The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, by taking advantage of the recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions by validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available, then a mixed hardening model should be used.

  15. Improvement and Validation of Weld Residual Stress Modelling Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Weilin; Gunnars, Jens (Inspecta Technology AB, Stockholm (Sweden)); Dong, Pingsha; Hong, Jeong K. (Center for Welded Structures Research, Battelle, Columbus, OH (United States))

    2009-06-15

    The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, by taking advantage of the recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions by validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available, then a mixed hardening model should be used.

  16. Improving practical atmospheric dispersion models

    International Nuclear Information System (INIS)

    Hunt, J.C.R.; Hudson, B.; Thomson, D.J.

    1992-01-01

    The new generation of practical atmospheric dispersion models (for short range, ≤ 30 km) is based on dispersion science and boundary layer meteorology which have widespread international acceptance. In addition, recent improvements in computing and the widespread availability of small powerful computers make it possible to have new regulatory models which are more complex than the previous generation, which was based on charts and simple formulae. This paper describes the basis of these models and how they have developed. Such models are needed to satisfy the urgent public demand for sound, justifiable and consistent environmental decisions. For example, it is preferable that the same models are used to simulate dispersion in different industries; in many countries at present different models are used for emissions from nuclear and fossil fuel power stations. The models should not be so simple as to be suspect, but neither should they be too complex for widespread use; for example, at public inquiries in Germany, where simple models are mandatory, it is becoming usual to cite the results from highly complex computational models because the simple models are not credible. This paper is written in a schematic style with an emphasis on tables and diagrams. (au) (22 refs.)

  17. An Improved Model for FE Modeling and Simulation of Closed Cell Al-Alloy Foams

    OpenAIRE

    Hasan, MD. Anwarul

    2010-01-01

    Cell wall material properties of Al-alloy foams have been derived by a combination of nanoindentation experiment and numerical simulation. Using the derived material properties in FE (finite element) modeling of foams, the existing constitutive models of closed-cell Al-alloy foams have been evaluated against experimental results. An improved representative model has been proposed for FE analysis of closed-cell Al-alloy foams. The improved model consists of a combination of spherical and cruci...

  18. Drought, Fire and Insects in Western US Forests: Observations to Improve Regional Land System Modeling

    Science.gov (United States)

    Law, B. E.; Yang, Z.; Berner, L. T.; Hicke, J. A.; Buotte, P.; Hudiburg, T. W.

    2015-12-01

    Drought, fire and insects are major disturbances in the western US, and conditions are expected to get warmer and drier in the future. We combine multi-scale observations and modeling with CLM4.5 to examine the effects of these disturbances on forests in the western US. We modified the Community Land Model, CLM4.5, to improve simulated drought-related mortality in forests, and prediction of insect outbreaks under future climate conditions. We examined differences in plant traits that represent species variation in sensitivity to drought, and redefined plant groupings in PFTs. Plant traits, including sapwood area: leaf area ratio and stemwood density were strongly correlated with water availability during the ecohydrologic year. Our database of co-located observations of traits for 30 tree species was used to produce parameterization of the model by species groupings according to similar traits. Burn area predicted by the new fire model in CLM4.5 compares well with recent years of GFED data, but has a positive bias compared with Landsat-based MTBS. Biomass mortality over recent decades increased, and was captured well by the model in general, but missed mortality trends of some species. Comparisons with AmeriFlux data showed that the model with dynamic tree mortality only (no species trait improvements) overestimated GPP in dry years compared with flux data at semi-arid sites, and underestimated GPP at more mesic sites that experience dry summers. Simulations with both dynamic tree mortality and species trait parameters improved estimates of GPP by 17-22%; differences between predicted and observed NEE were larger. Future projections show higher productivity from increased atmospheric CO2 and warming that somewhat offsets drought and fire effects over the next few decades. Challenges include representation of hydraulic failure in models, and availability of species trait and carbon/water process data in disturbance- and drought-impacted regions.

  19. Testing Self-Similarity Through Lamperti Transformations

    KAUST Repository

    Lee, Myoungji

    2016-07-14

    Self-similar processes have been widely used in modeling real-world phenomena occurring in environmetrics, network traffic, image processing, and stock pricing, to name but a few. The estimation of the degree of self-similarity has been studied extensively, while statistical tests for self-similarity are scarce and limited to processes indexed in one dimension. This paper proposes a statistical hypothesis test procedure for self-similarity of a stochastic process indexed in one dimension and multi-self-similarity for a random field indexed in higher dimensions. If self-similarity is not rejected, our test provides a set of estimated self-similarity indexes. The key is to test stationarity of the inverse Lamperti transformations of the process. The inverse Lamperti transformation of a self-similar process is a strongly stationary process, revealing a theoretical connection between the two processes. To demonstrate the capability of our test, we test self-similarity of fractional Brownian motions and sheets, their time deformations and mixtures with Gaussian white noise, and the generalized Cauchy family. We also apply the self-similarity test to real data: annual minimum water levels of the Nile River, network traffic records, and surface heights of food wrappings. © 2016, International Biometric Society.
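
    The key idea of this record, that the inverse Lamperti transformation of an H-self-similar process is stationary, can be sketched numerically. The snippet below is an illustration under simplifying assumptions (ordinary Brownian motion as the H = 1/2 self-similar process, and variance flatness as a crude stationarity check), not the paper's actual test procedure:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Simulate Brownian motion paths X(s); BM is self-similar with H = 1/2.
    n_paths, n_steps, s_max = 5000, 2048, 20.0
    dt = s_max / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    X = np.cumsum(increments, axis=1)              # X(s_j), s_j = (j+1) * dt
    s_grid = dt * np.arange(1, n_steps + 1)

    # Inverse Lamperti transform: Y(t) = exp(-H t) * X(exp(t)).
    H = 0.5
    t_grid = np.linspace(0.0, np.log(s_max), 8)
    idx = np.clip(np.searchsorted(s_grid, np.exp(t_grid)), 0, n_steps - 1)
    Y = np.exp(-H * t_grid) * X[:, idx]

    # If X is H-self-similar, Y is stationary: its variance is flat in t
    # (equal to 1 for standard BM). A rigorous test would check
    # stationarity of Y more carefully, as the paper does.
    variances = Y.var(axis=0)
    print(np.round(variances, 2))
    ```

    Repeating the check over a grid of candidate H values, and rejecting those for which Y fails a stationarity test, recovers the set of estimated self-similarity indexes the abstract mentions.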

  20. Understanding catchment behaviour through model concept improvement

    NARCIS (Netherlands)

    Fenicia, F.

    2008-01-01

    This thesis describes an approach to model development based on the concept of iterative model improvement, which is a process where by trial and error different hypotheses of catchment behaviour are progressively tested, and the understanding of the system proceeds through a combined process of

  1. Natural texture retrieval based on perceptual similarity measurement

    Science.gov (United States)

    Gao, Ying; Dong, Junyu; Lou, Jianwen; Qi, Lin; Liu, Jun

    2018-04-01

    A typical texture retrieval system performs feature comparison and might not be able to make human-like judgments of image similarity. Meanwhile, it is commonly known that perceptual texture similarity is difficult to describe with traditional image features. In this paper, we propose a new texture retrieval scheme based on perceptual texture similarity. The key of the proposed scheme is that the prediction of perceptual similarity is performed by learning a non-linear mapping from image feature space to perceptual texture space using Random Forest. We test the method on a natural texture dataset and apply it to a new wallpapers dataset. Experimental results demonstrate that the proposed texture retrieval scheme with perceptual similarity improves the retrieval performance over traditional image features.
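
    The learned mapping from image feature space to perceptual space can be sketched as follows. This is a toy illustration with synthetic features and ratings; a k-nearest-neighbour regressor stands in for the paper's Random Forest so the sketch stays dependency-free:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical data: 5-D image-feature vectors with perceptual
    # similarity ratings generated by an assumed non-linear function.
    features = rng.uniform(0.0, 1.0, size=(100, 5))
    ratings = np.sin(features[:, 0] * 3.0) + features[:, 1] ** 2

    def knn_predict(train_x, train_y, query, k=5):
        # Predict a perceptual score as the mean rating of the k nearest
        # training feature vectors (Euclidean distance); a stand-in for
        # the Random Forest regression used in the paper.
        d = np.linalg.norm(train_x - query, axis=1)
        return train_y[np.argsort(d)[:k]].mean()

    pred = knn_predict(features[1:], ratings[1:], features[0])
    print(round(float(pred), 3))
    ```

    Retrieval then ranks database textures by their predicted perceptual score against the query rather than by raw feature distance.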

  2. Improved Mathematical Models for Particle-Size Distribution Data

    African Journals Online (AJOL)

    BirukEdimon

    School of Civil & Environmental Engineering, Addis Ababa Institute of Technology,. 3. Murray Rix ... two improved mathematical models to describe ... demand further improvement to handle the PSD ... statistics and the range of the optimized.

  3. Improving the Ni I atomic model for solar and stellar atmospheric models

    International Nuclear Information System (INIS)

    Vieytes, M. C.; Fontenla, J. M.

    2013-01-01

    Neutral nickel (Ni I) is abundant in the solar atmosphere and is one of the important elements that contribute to the emission and absorption of radiation in the spectral range between 1900 and 3900 Å. Previously, the Solar Radiation Physical Modeling (SRPM) models of the solar atmosphere only considered a few levels of this species. Here, we improve the Ni I atomic model by taking into account 61 levels and 490 spectral lines. We compute the populations of these levels in full NLTE using the SRPM code and compare the resulting emerging spectrum with observations. The present atomic model significantly improves the calculation of the solar spectral irradiance at near-UV wavelengths, which is important for Earth atmospheric studies, and particularly for ozone chemistry.

  4. Improving the Ni I atomic model for solar and stellar atmospheric models

    Energy Technology Data Exchange (ETDEWEB)

    Vieytes, M. C. [Instituto de de Astronomía y Física del Espacio, CONICET and UNTREF, Buenos Aires (Argentina); Fontenla, J. M., E-mail: mariela@iafe.uba.ar, E-mail: johnf@digidyna.com [North West Research Associates, 3380 Mitchell Lane, Boulder, CO 80301 (United States)

    2013-06-01

    Neutral nickel (Ni I) is abundant in the solar atmosphere and is one of the important elements that contribute to the emission and absorption of radiation in the spectral range between 1900 and 3900 Å. Previously, the Solar Radiation Physical Modeling (SRPM) models of the solar atmosphere only considered a few levels of this species. Here, we improve the Ni I atomic model by taking into account 61 levels and 490 spectral lines. We compute the populations of these levels in full NLTE using the SRPM code and compare the resulting emerging spectrum with observations. The present atomic model significantly improves the calculation of the solar spectral irradiance at near-UV wavelengths, which is important for Earth atmospheric studies, and particularly for ozone chemistry.

  5. Improved ionic model of liquid uranium dioxide

    NARCIS (Netherlands)

    Gryaznov, [No Value; Iosilevski, [No Value; Yakub, E; Fortov, [No Value; Hyland, GJ; Ronchi, C

    The paper presents a model for liquid uranium dioxide, obtained by improving a simplified ionic model, previously adopted to describe the equation of state of this substance [1]. A "chemical picture" is used for liquid UO2 of stoichiometric and non-stoichiometric composition. Several ionic species

  6. Improved gap conductance model for the TRAC code

    International Nuclear Information System (INIS)

    Hatch, S.W.; Mandell, D.A.

    1980-01-01

    The purpose of the present work, as indicated earlier, is to improve the present constant fuel-clad spacing in TRAC-P1A without significantly increasing the computer costs. It is realized that the simple model proposed may not be accurate enough for some cases, but for the initial calculations made, the DELTAR model improves the predictions over the constant-Δr results of TRAC-P1A, and the additional computing costs are negligible.

  7. State impulsive control strategies for a two-languages competitive model with bilingualism and interlinguistic similarity

    Science.gov (United States)

    Nie, Lin-Fei; Teng, Zhi-Dong; Nieto, Juan J.; Jung, Il Hyo

    2015-07-01

    For reasons of preserving endangered languages, we propose, in this paper, a novel two-languages competitive model with bilingualism and interlinguistic similarity, where state-dependent impulsive control strategies are introduced. The novel control model includes two control threshold values, which differ from previous state-dependent impulsive differential equations. By using a qualitative analysis method, we obtain that the control model exhibits two stable positive order-1 periodic solutions under some general conditions. Moreover, numerical simulations clearly illustrate the main theoretical results and the feasibility of the state-dependent impulsive control strategies. Meanwhile, the numerical simulations also show that the state-dependent impulsive control strategy can be applied to other general two-languages competitive models and obtains the desired result. The results indicate that the fractions of the two competing languages can be kept within a reasonable level under almost any circumstances. A theoretical basis for finding a new control measure to protect an endangered language is offered.
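
    A state-dependent impulse can be illustrated on a deliberately simplified one-dimensional surrogate (the paper's model, with bilingualism and interlinguistic-similarity terms, is richer). In the hypothetical dynamics below, whenever the endangered-language fraction x falls to a threshold, an impulsive boost is applied, producing a sustained cycle above the threshold instead of extinction:

    ```python
    import numpy as np

    def simulate(x0=0.3, dt=0.01, steps=4000, threshold=0.2, boost=0.1):
        """Euler integration of a toy competition ODE with impulsive control."""
        x, xs, impulses = x0, [x0], 0
        for _ in range(steps):
            # Hypothetical drift: below the unstable point 0.5, the
            # endangered-language fraction x decays toward extinction.
            x += dt * x * (x - 0.5) * (1.0 - x)
            if x <= threshold:          # state-dependent impulsive control
                x += boost
                impulses += 1
            xs.append(x)
        return np.array(xs), impulses

    xs, impulses = simulate()
    # The trajectory settles into an order-1 periodic-like cycle above 0.2.
    print(impulses, round(float(xs.min()), 3), round(float(xs.max()), 3))
    ```

    The repeated threshold-boost cycle is the discrete analogue of the stable positive order-1 periodic solutions the abstract describes.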

  8. Using self-similarity compensation for improving inter-layer prediction in scalable 3D holoscopic video coding

    Science.gov (United States)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2013-09-01

    Holoscopic imaging, also known as integral imaging, has been recently attracting the attention of the research community, as a promising glassless 3D technology due to its ability to create a more realistic depth illusion than the current stereoscopic or multiview solutions. However, in order to gradually introduce this technology into the consumer market and to efficiently deliver 3D holoscopic content to end-users, backward compatibility with legacy displays is essential. Consequently, to enable 3D holoscopic content to be delivered and presented on legacy displays, a display scalable 3D holoscopic coding approach is required. Hence, this paper presents a display scalable architecture for 3D holoscopic video coding with a three-layer approach, where each layer represents a different level of display scalability: Layer 0 - a single 2D view; Layer 1 - 3D stereo or multiview; and Layer 2 - the full 3D holoscopic content. In this context, a prediction method is proposed, which combines inter-layer prediction, aiming to exploit the existing redundancy between the multiview and the 3D holoscopic layers, with self-similarity compensated prediction (previously proposed by the authors for non-scalable 3D holoscopic video coding), aiming to exploit the spatial redundancy inherent to the 3D holoscopic enhancement layer. Experimental results show that the proposed combined prediction can improve significantly the rate-distortion performance of scalable 3D holoscopic video coding with respect to the authors' previously proposed solutions, where only inter-layer or only self-similarity prediction is used.

  9. Automated pattern analysis in gesture research : similarity measuring in 3D motion capture models of communicative action

    NARCIS (Netherlands)

    Schueller, D.; Beecks, C.; Hassani, M.; Hinnell, J.; Brenger, B.; Seidl, T.; Mittelberg, I.

    2017-01-01

    The question of how to model similarity between gestures plays an important role in current studies in the domain of human communication. Most research into recurrent patterns in co-verbal gestures – manual communicative movements emerging spontaneously during conversation – is driven by qualitative

  10. A model for ageing-home-care service process improvement

    OpenAIRE

    Yu, Shu-Yan; Shie, An-Jin

    2017-01-01

    The purpose of this study was to develop an integrated model to improve service processes in ageing-home-care. According to the literature, existing service processes have potential service failures that affect service quality and efficacy. However, most previous studies have only focused on conceptual model development using New Service Development (NSD) and fail to provide a systematic model to analyse potential service failures and facilitate managers developing solutions to improve the se...

  11. Improved Modeling and Prediction of Surface Wave Amplitudes

    Science.gov (United States)

    2017-05-31

    AFRL-RV-PS-TR-2017-0162, "Improved Modeling and Prediction of Surface Wave Amplitudes," Jeffry L. Stevens, et al., Leidos. Contract FA9453-14-C-0225.

  12. A Similarity Analysis of Audio Signal to Develop a Human Activity Recognition Using Similarity Networks

    Directory of Open Access Journals (Sweden)

    Alejandra García-Hernández

    2017-11-01

    Full Text Available Human Activity Recognition (HAR) is one of the main subjects of study in the areas of computer vision and machine learning due to the great benefits that can be achieved. Examples of the study areas are: health prevention, security and surveillance, automotive research, and many others. The proposed approaches are carried out using machine learning techniques and present good results. However, it is difficult to observe how the descriptors of human activities are grouped. A better understanding of the behavior of descriptors is therefore important for improving the ability to recognize human activities. This paper proposes a novel approach for HAR based on acoustic data and similarity networks. In this approach, we were able to characterize the sound of the activities and to identify those activities by looking for similarity in the sound pattern. We evaluated the similarity of the sounds considering mainly two features: the sound location and the materials that were used. As a result, the materials are a better reference for classifying the human activities than the location.
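
    The similarity-network construction can be sketched with synthetic acoustic features. The feature vectors, labels, and threshold below are hypothetical stand-ins for the paper's audio descriptors; recordings are linked when their cosine similarity exceeds the threshold:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical acoustic feature vectors (e.g. averaged spectral
    # descriptors) for three activities, two recordings each.
    labels = ["cook", "cook", "shower", "shower", "vacuum", "vacuum"]
    base = rng.normal(0.0, 1.0, size=(3, 8))
    feats = np.repeat(base, 2, axis=0) + rng.normal(0.0, 0.05, size=(6, 8))

    # Pairwise cosine similarity between all recordings.
    unit = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = unit @ unit.T

    # Similarity network: connect recordings whose similarity exceeds
    # a threshold; same-activity recordings should cluster together.
    adj = (sim > 0.9) & ~np.eye(6, dtype=bool)
    print(adj.astype(int))
    ```

    Connected components of the resulting network then group recordings of the same activity, which is the grouping behaviour of descriptors the abstract sets out to expose.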

  13. A Unified Framework for Systematic Model Improvement

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2003-01-01

    A unified framework for improving the quality of continuous time models of dynamic systems based on experimental data is presented. The framework is based on an interplay between stochastic differential equation (SDE) modelling, statistical tests and multivariate nonparametric regression. This co......-batch bioreactor, where it is illustrated how an incorrectly modelled biomass growth rate can be pinpointed and an estimate provided of the functional relation needed to properly describe it....

  14. Mechanics of ultra-stretchable self-similar serpentine interconnects

    International Nuclear Information System (INIS)

    Zhang, Yihui; Fu, Haoran; Su, Yewang; Xu, Sheng

    2013-01-01

    Graphical abstract: We developed analytical models of flexibility and elastic-stretchability for self-similar interconnect. The analytic solutions agree very well with the finite element analyses, both demonstrating that the elastic-stretchability more than doubles when the order of self-similar structure increases by one. Design optimization yields 90% and 50% elastic stretchability for systems with surface filling ratios of 50% and 70% of active devices, respectively. The analytic models are useful for the development of stretchable electronics that simultaneously demand large coverage of active devices, such as stretchable photovoltaics and electronic eye-ball cameras. -- Abstract: Electrical interconnects that adopt self-similar, serpentine layouts offer exceptional levels of stretchability in systems that consist of collections of small, non-stretchable active devices in the so-called island–bridge design. This paper develops analytical models of flexibility and elastic stretchability for such structures, and establishes recursive formulae at different orders of self-similarity. The analytic solutions agree well with finite element analysis, with both demonstrating that the elastic stretchability more than doubles when the order of the self-similar structure increases by one. Design optimization yields 90% and 50% elastic stretchability for systems with surface filling ratios of 50% and 70% of active devices, respectively

  15. The role of visual similarity and memory in body model distortions.

    Science.gov (United States)

    Saulton, Aurelie; Longo, Matthew R; Wong, Hong Yu; Bülthoff, Heinrich H; de la Rosa, Stephan

    2016-02-01

    Several studies have shown that the perception of one's own hand size is distorted in proprioceptive localization tasks. It has been suggested that those distortions mirror somatosensory anisotropies. Recent research suggests that non-corporeal items also show some spatial distortions. In order to investigate the psychological processes underlying the localization task, we investigated the influences of visual similarity and memory on distortions observed on corporeal and non-corporeal items. In experiment 1, participants indicated the location of landmarks on: their own hand, a rubber hand (rated as most similar to the real hand), and a rake (rated as least similar to the real hand). Results show no significant differences between rake and rubber hand distortions, but both items were significantly less distorted than the hand. Experiments 2 and 3 explored the role of memory in spatial distance judgments of the hand, the rake and the rubber hand. Spatial representations of items measured in experiments 2 and 3 were also distorted but tended to be smaller than in localization tasks. While memory and visual similarity help explain qualitative similarities in distortions between the hand and non-corporeal items, these factors cannot explain the larger magnitude of hand distortions. Copyright © 2015. Published by Elsevier B.V.

  16. Motivation to Improve Work through Learning: A Conceptual Model

    Directory of Open Access Journals (Sweden)

    Kueh Hua Ng

    2014-12-01

    Full Text Available This study aims to enhance our current understanding of the transfer of training by proposing a conceptual model that supports the mediating role of motivation to improve work through learning in the relationship between social support and the transfer of training. Examining the motivation-to-improve-work-through-learning construct offers a holistic view of a learner's profile in a workplace setting, one which emphasizes learning for the improvement of work performance. The proposed conceptual model is expected to benefit human resource development theory building, as well as field practitioners, by emphasizing the motivational aspects crucial for successful transfer of training.

  17. Sex similarities and differences in risk factors for recurrence of major depression.

    Science.gov (United States)

    van Loo, Hanna M; Aggen, Steven H; Gardner, Charles O; Kendler, Kenneth S

    2017-11-27

    Major depression (MD) occurs about twice as often in women as in men, but it is unclear whether sex differences persist after disease onset. This study aims to elucidate potential sex differences in rates and risk factors for MD recurrence, in order to improve prediction of the course of illness and understanding of its underlying mechanisms. We used prospective data from a general population sample (n = 653) that experienced a recent episode of MD. A diverse set of potential risk factors for recurrence of MD was analyzed using Cox models subject to elastic net regularization for males and females separately. Accuracy of the prediction models was tested in same-sex and opposite-sex test data. Additionally, interactions between sex and each of the risk factors were investigated to identify potential sex differences. Recurrence rates and the impact of most risk factors were similar for men and women. For both sexes, prediction models were highly multifactorial, including risk factors such as comorbid anxiety, early traumas, and family history. Some subtle sex differences were detected: for men, prediction models included more risk factors concerning characteristics of the depressive episode and family history of MD and generalized anxiety, whereas for women, models included more risk factors concerning early and recent adverse life events and socioeconomic problems. No prominent sex differences in risk factors for recurrence of MD were found, potentially indicating similar disease-maintaining mechanisms for both sexes. The course of MD is a multifactorial phenomenon for both males and females.

  18. Improved transition models for cepstral trajectories

    CSIR Research Space (South Africa)

    Badenhorst, J

    2012-11-01

    Full Text Available We improve on a piece-wise linear model of the trajectories of Mel Frequency Cepstral Coefficients, which are commonly used as features in Automatic Speech Recognition. For this purpose, we have created a very clean single-speaker corpus, which...

  19. Bayesian Proteoform Modeling Improves Protein Quantification of Global Proteomic Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Datta, Susmita; Payne, Samuel H.; Kang, Jiyun; Bramer, Lisa M.; Nicora, Carrie D.; Shukla, Anil K.; Metz, Thomas O.; Rodland, Karin D.; Smith, Richard D.; Tardiff, Mark F.; McDermott, Jason E.; Pounds, Joel G.; Waters, Katrina M.

    2014-12-01

    As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that with an increase in throughput, protein quantification estimation from the native measured peptides has become a computational task. A limitation to existing computationally-driven protein quantification methods is that most ignore protein variation, such as alternate splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which will affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses, such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptide signatures to identify peptides that are outside the dominant pattern, or the existence of multiple over-expressed patterns, to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study from mouse plasma samples and demonstrate that BP-Quant achieves accuracy similar to the current state-of-the-art methods at proteoform identification with significantly better specificity. BP-Quant is available as MATLAB® and R packages at https://github.com/PNNL-Comp-Mass-Spec/BP-Quant.

  20. An improved large signal model of InP HEMTs

    Science.gov (United States)

    Li, Tianhao; Li, Wenjun; Liu, Jun

    2018-05-01

    An improved large-signal model for InP HEMTs is proposed in this paper. The channel current and charge model equations are constructed based on the Angelov model equations. Both the channel current and gate charge model equations are continuous and differentiable to high order, and the proposed gate charge model satisfies charge conservation. To account for the strong leakage-induced barrier reduction effect of InP HEMTs, the Angelov current model equations are improved so that the channel current model fits the DC performance of the devices. A 2 × 25 μm × 70 nm InP HEMT device is used to demonstrate the extraction and validation of the model, and the model predicts the DC I–V, C–V and bias-dependent S-parameters accurately. Project supported by the National Natural Science Foundation of China (No. 61331006).

  1. Spherically symmetric self-similar universe

    Energy Technology Data Exchange (ETDEWEB)

    Dyer, C C [Toronto Univ., Ontario (Canada)

    1979-10-01

    A spherically symmetric self-similar dust-filled universe is considered as a simple model of a hierarchical universe. Observable differences between the model in parabolic expansion and the corresponding homogeneous Einstein-de Sitter model are considered in detail. It is found that an observer at the centre of the distribution has a maximum observable redshift and can in principle see arbitrarily large blueshifts. It is found to yield an observed density-distance law different from that suggested by the observations of de Vaucouleurs. The use of these solutions as central objects for Swiss-cheese vacuoles is discussed.

  2. Investigating the effect of non-similar fins in thermoeconomic optimization of plate fin heat exchanger

    International Nuclear Information System (INIS)

    Hajabdollahi, Hassan

    2015-01-01

    Thermoeconomic optimization of a plate fin heat exchanger with similar fins (SF) and different, or non-similar, fins (DF) on each side is presented in this work. For this purpose, both heat exchanger effectiveness and total annual cost (TAC) are optimized simultaneously using a multi-objective particle swarm optimization algorithm. The above procedure is performed for various mass flow rates on each side. The optimum results reveal that no thermoeconomic improvement is observed in the case of the same mass flow rate on each side, while both effectiveness and TAC are improved in the case of different mass flow rates. For example, effectiveness and TAC are improved by 0.95% and 10.17%, respectively, for DF compared with SF. In fact, from the thermoeconomic viewpoint, the fin configuration should be selected to be more compact on the side with the lower mass flow rate. Furthermore, from the thermodynamic optimization viewpoint both SF and DF give the same optimum result, while from the economic (or thermoeconomic) optimization viewpoint a significant decrease in TAC is achievable with DF compared with SF. - Highlights: • Thermoeconomic modeling of compact heat exchangers. • Selection of fin and heat exchanger geometries as nine decision variables. • Applying the MOPSO algorithm for multi-objective optimization. • Considering similar and different fin specifications on each side. • Investigation of optimum design parameters for various mass flow rates

  3. Intelligent Models Performance Improvement Based on Wavelet Algorithm and Logarithmic Transformations in Suspended Sediment Estimation

    Directory of Open Access Journals (Sweden)

    R. Hajiabadi

    2016-10-01

    data are applied to model training and one year is estimated by each model. The accuracy of the models is evaluated by three indexes: mean absolute error (MAE), root mean squared error (RMSE) and the Nash-Sutcliffe coefficient (NS). Results and Discussion In order to estimate suspended sediment load with the intelligent models, different input combinations for model training were evaluated. The best input combination for each intelligent model was then determined, and preprocessing was done only for the best combination. Two logarithmic transforms, LN and LOG, were considered for data transformation. The Daubechies wavelet family was used for the wavelet transforms. Results indicate that denoising increases the Nash-Sutcliffe criterion by 0.15 in ANN and 0.14 in GEP. Furthermore, the RMSE value was reduced from 199.24 to 141.17 (mg/l) in ANN and from 234.84 to 193.89 (mg/l) in GEP. The impact of the logarithmic transformation approach on the ANN result improvement is similar to that of the denoising approach, while the logarithmic transformation has an adverse impact on GEP: the Nash-Sutcliffe criterion, after LN and LOG transformations as preprocessing in the GEP model, is reduced from 0.57 to 0.31 and 0.21, respectively, and the RMSE value increases from 234.84 to 298.41 (mg/l) and 318.72 (mg/l), respectively. Results show that data denoising by wavelet transform is effective in improving the accuracy of both intelligent models, while data transformation by logarithmic transformation causes improvement only in the artificial neural network. Results of the ANN model reveal that data transformation by the LN transfer is better than the LOG transfer, though both transfer functions improve the ANN results. Denoising by different wavelet transforms (Daubechies family) indicates that in ANN models the wavelet function Db2 is more effective and causes more improvement, while in GEP models the wavelet function Db1 (Haar) is better. Conclusions: In the present study, two different
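    The three accuracy indexes named above have standard closed forms. A minimal plain-Python sketch (the observed/estimated series below are hypothetical, not the study's data):

```python
import math

def mae(obs, est):
    """Mean absolute error."""
    return sum(abs(o - e) for o, e in zip(obs, est)) / len(obs)

def rmse(obs, est):
    """Root mean squared error."""
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / len(obs))

def nash_sutcliffe(obs, est):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean
    the model predicts no better than the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - e) ** 2 for o, e in zip(obs, est))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

observed = [120.0, 150.0, 200.0, 90.0]    # hypothetical sediment loads (mg/l)
estimated = [110.0, 160.0, 190.0, 100.0]  # hypothetical model output
print(mae(observed, estimated))                      # → 10.0
print(rmse(observed, estimated))                     # → 10.0
print(round(nash_sutcliffe(observed, estimated), 3)) # → 0.939
```

A higher NS and lower MAE/RMSE after preprocessing is exactly the pattern the study reports for wavelet denoising.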

  4. Improvement of the design model for SMART fuel assembly

    International Nuclear Information System (INIS)

    Zee, Sung Kyun; Yim, Jeong Sik

    2001-04-01

    A study on the design improvement of the TEP, BEP and holddown spring of a fuel assembly for SMART was performed. The Cut Boundary Interpolation Method was applied to obtain more accurate stress and strain distributions from the results of the coarse-model calculation. The improved results were compared with those of the coarse model. The finer model predicted slightly higher stress and strain distributions than the coarse model, which meant the results of the coarse model had not converged. Considering that the test results always showed much less stress than the FEM, and given the location of the peak stress in the refined model, the pressure stress at the loading point seemed to contribute significantly to the stresses. Judging from the fact that the peak stress appeared only in a local area, the results of the refined model were considered a sufficiently conservative prediction of the stress levels. The slot of the guide thimble screw was ignored in order to determine how much the thickness of the flow plate can be reduced when optimizing the thickness, and the cut-off of the screw dent hole was included for the actual geometry. For the BEP, the leg and web were also included in the model, and the results with and without the leg alignment support were compared. Finally, the holddown spring, which is important for the in-reactor behavior of the FA, was modeled more realistically and improved to include the effects of friction between the leaves and the loading surface. Using this improved model, the spring characteristics were predicted more accurately with respect to the test results. From the analysis of the spring characteristics, the local plastic area dominantly controlled the characteristics of the spring, which implied that the leaf design needs to be optimized to improve the plastic behavior of the leaf spring

  5. General relativistic self-similar waves that induce an anomalous acceleration into the standard model of cosmology

    CERN Document Server

    Smoller, Joel

    2012-01-01

    We prove that the Einstein equations in Standard Schwarzschild Coordinates close to form a system of three ordinary differential equations for a family of spherically symmetric, self-similar expansion waves, and the critical ($k=0$) Friedmann universe associated with the pure radiation phase of the Standard Model of Cosmology (FRW), is embedded as a single point in this family. Removing a scaling law and imposing regularity at the center, we prove that the family reduces to an implicitly defined one parameter family of distinct spacetimes determined by the value of a new {\\it acceleration parameter} $a$, such that $a=1$ corresponds to FRW. We prove that all self-similar spacetimes in the family are distinct from the non-critical $k\

  6. Highly Adoptable Improvement: A Practical Model and Toolkit to Address Adoptability and Sustainability of Quality Improvement Initiatives.

    Science.gov (United States)

    Hayes, Christopher William; Goldmann, Don

    2018-03-01

    Failure to consider the impact of change on health care providers is a barrier to success. Initiatives that increase workload and have low perceived value are less likely to be adopted. A practical model and supporting tools were developed on the basis of existing theories to help quality improvement (QI) programs design more adoptable approaches. Models and theories from the diffusion of innovation and work stress literature were reviewed, and key-informant interviews and site visits were conducted to develop a draft Highly Adoptable Improvement (HAI) Model. A list of candidate factors considered for inclusion in the draft model was presented to an expert panel. A modified Delphi process was used to narrow the list of factors into main themes and refine the model. The resulting model and supporting tools were pilot tested by 16 improvement advisors for face validity and usability. The HAI Model depicts how workload and perceived value influence adoptability of QI initiatives. The supporting tools include an assessment guide and suggested actions that QI programs can use to help design interventions that are likely to be adopted. Improvement advisors reported good face validity and usability and found that the model and the supporting tools helped address key issues related to adoption and reported that they would continue to use them. The HAI Model addresses important issues regarding workload and perceived value of improvement initiatives. Pilot testing suggests that the model and supporting tools are helpful and practical in guiding design and implementation of adoptable and sustainable QI interventions. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.

  7. A cluster-randomised quality improvement study to improve two inpatient stroke quality indicators.

    Science.gov (United States)

    Williams, Linda; Daggett, Virginia; Slaven, James E; Yu, Zhangsheng; Sager, Danielle; Myers, Jennifer; Plue, Laurie; Woodward-Hagg, Heather; Damush, Teresa M

    2016-04-01

    Quality indicator collection and feedback improves stroke care. We sought to determine whether quality improvement training plus indicator feedback was more effective than indicator feedback alone in improving inpatient stroke indicators. We conducted a cluster-randomised quality improvement trial, randomising hospitals to quality improvement training plus indicator feedback versus indicator feedback alone to improve deep vein thrombosis (DVT) prophylaxis and dysphagia screening. Intervention sites received collaborative-based quality improvement training, external facilitation and indicator feedback. Control sites received only indicator feedback. We compared indicators across the pre-implementation (pre-I), active implementation (active-I) and post-implementation (post-I) periods. We constructed mixed-effect logistic models of the two indicators with a random intercept for hospital effect, adjusting for patient, time, intervention and hospital variables. Patients at intervention sites (1147 admissions) had similar race, gender and National Institutes of Health Stroke Scale scores to those at control sites (1017 admissions). DVT prophylaxis improved more at intervention sites during the active-I period (ratio of ORs 4.90). Dysphagia screening improved similarly in both groups during active-I, but control sites improved more in the post-I period (ratio of ORs 0.67, p=0.04). In the logistic models, the intervention was independently positively associated with DVT performance during the active-I period, and negatively associated with dysphagia performance in the post-I period. Quality improvement training was associated with early DVT improvement, but the effect was not sustained over time and was not seen with dysphagia screening. External quality improvement programmes may quickly boost performance, but their effect may vary by indicator and may not be sustained over time. Published by the BMJ Publishing Group Limited.

  8. Improvement of a combustion model in MELCOR code

    International Nuclear Information System (INIS)

    Ogino, Masao; Hashimoto, Takashi

    1999-01-01

    NUPEC has been improving a hydrogen combustion model in MELCOR code for severe accident analysis. In the proposed combustion model, the flame velocity in a node was predicted using five different flame front shapes of fireball, prism, bubble, spherical jet, and plane jet. For validation of the proposed model, the results of the Battelle multi-compartment hydrogen combustion test were used. The selected test cases for the study were Hx-6, 13, 14, 20 and Ix-2 which had two, three or four compartments under homogeneous hydrogen concentration of 5 to 10 vol%. The proposed model could predict well the combustion behavior in multi-compartment containment geometry on the whole. MELCOR code, incorporating the present combustion model, can simulate combustion behavior during severe accident with acceptable computing time and some degree of accuracy. The applicability study of the improved MELCOR code to the actual reactor plants will be further continued. (author)

  9. Self-similar Langmuir collapse at critical dimension

    International Nuclear Information System (INIS)

    Berge, L.; Dousseau, Ph.; Pelletier, G.; Pesme, D.

    1991-01-01

    Two spherically symmetric versions of a self-similar collapse are investigated within the framework of the Zakharov equations, namely, one relative to a vectorial electric field and the other corresponding to a scalar modeling of the Langmuir field. Singular solutions of both depend on a linear time contraction rate ξ(t) = V(t* − t), where t* and V = −ξ̇ denote, respectively, the collapse time and the constant collapse velocity. It is shown that under certain conditions, only the scalar model admits self-similar solutions, varying regularly as a function of the control parameter V from the subsonic (V < 1) to the supersonic (V > 1) regime. (author)

  10. Improved Solar-Radiation-Pressure Models for GPS Satellites

    Science.gov (United States)

    Bar-Sever, Yoaz; Kuang, Da

    2006-01-01

    A report describes a series of computational models conceived as an improvement over prior models for determining the effects of solar-radiation pressure on the orbits of Global Positioning System (GPS) satellites. These models are based on fitting coefficients of Fourier functions of Sun-spacecraft-Earth angles to observed spacecraft orbital motions.
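    Fitting Fourier coefficients of an angle to observed motion reduces to a linear least-squares problem. A toy NumPy sketch with synthetic, hypothetical coefficients (not the report's actual model terms):

```python
import numpy as np

# Hypothetical acceleration samples as a function of the
# Sun-spacecraft-Earth angle eps: a(eps) = c0 + c1*sin(eps) + c2*cos(eps)
eps = np.linspace(0.0, 2.0 * np.pi, 50)
truth = 1.5 + 0.4 * np.sin(eps) - 0.2 * np.cos(eps)

# Design matrix of Fourier basis functions, then least-squares fit
A = np.column_stack([np.ones_like(eps), np.sin(eps), np.cos(eps)])
coeffs, *_ = np.linalg.lstsq(A, truth, rcond=None)
print(np.round(coeffs, 3))  # → [ 1.5  0.4 -0.2]
```

With real orbit data the "truth" vector would come from observed accelerations, and higher Fourier harmonics would typically be added to the basis.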

  11. Accuracy test for link prediction in terms of similarity index: The case of WS and BA models

    Science.gov (United States)

    Ahn, Min-Woo; Jung, Woo-Sung

    2015-07-01

    Link prediction is a technique that uses the topological information in a given network to infer the missing links in it. Since past research on link prediction has primarily focused on enhancing performance for given empirical systems, negligible attention has been devoted to link prediction with regard to network models. In this paper, we thus apply link prediction to two network models: The Watts-Strogatz (WS) model and Barabási-Albert (BA) model. We attempt to gain a better understanding of the relation between accuracy and each network parameter (mean degree, the number of nodes and the rewiring probability in the WS model) through network models. Six similarity indices are used, with precision and area under the ROC curve (AUC) value as the accuracy metrics. We observe a positive correlation between mean degree and accuracy, and size independence of the AUC value.
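    The AUC metric used above can be estimated by repeatedly comparing the similarity score of a removed (probe) link with that of a random non-existent link. A minimal sketch on a hypothetical toy graph, using the common-neighbours similarity index (for real WS/BA experiments one would generate graphs with e.g. networkx's `watts_strogatz_graph` and `barabasi_albert_graph`):

```python
import random

# Toy graph: a 4-clique {0,1,2,3} joined to a triangle {4,5,6};
# the probe edge (0,1) has already been removed from the adjacency sets.
adj = {0: {2, 3}, 1: {2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4},
       4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}

def common_neighbors(u, v):
    """Common-neighbours similarity index: |Γ(u) ∩ Γ(v)|."""
    return len(adj[u] & adj[v])

def auc(probe_edges, non_edges, trials=1000, seed=7):
    """AUC for link prediction: probability that a probe (missing) link
    scores higher than a random non-existent link; ties count 0.5."""
    rng = random.Random(seed)
    score = 0.0
    for _ in range(trials):
        s_probe = common_neighbors(*rng.choice(probe_edges))
        s_non = common_neighbors(*rng.choice(non_edges))
        score += 1.0 if s_probe > s_non else 0.5 if s_probe == s_non else 0.0
    return score / trials

print(auc([(0, 1)], [(0, 5)]))  # → 1.0: the probe link always outranks the non-edge
```

Here nodes 0 and 1 share two neighbours while 0 and 5 share none, so the index ranks the missing link perfectly and the AUC is 1.0.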

  12. Development of an equipment management model to improve effectiveness of processes

    International Nuclear Information System (INIS)

    Chang, H. S.; Ju, T. Y.; Song, T. Y.

    2012-01-01

    The nuclear industries have developed and are trying to create a performance model to improve effectiveness of the processes implemented at nuclear plants in order to enhance performance. Most high performing nuclear stations seek to continually improve the quality of their operations by identifying and closing important performance gaps. Thus, many utilities have implemented performance models adjusted to their plant's configuration and have instituted policies for such models. KHNP is developing a standard performance model to integrate the engineering processes and to improve the inter-relation among processes. The model, called the Standard Equipment Management Model (SEMM), is under development first by focusing on engineering processes and performance improvement processes related to plant equipment used at the site. This model includes performance indicators for each process that can allow evaluating and comparing the process performance among 21 operating units. The model will later be expanded to incorporate cost and management processes. (authors)

  13. Different relationships between temporal phylogenetic turnover and phylogenetic similarity in two forests were detected by a new null model.

    Science.gov (United States)

    Huang, Jian-Xiong; Zhang, Jian; Shen, Yong; Lian, Ju-yu; Cao, Hong-lin; Ye, Wan-hui; Wu, Lin-fang; Bin, Yue

    2014-01-01

    Ecologists have been monitoring community dynamics with the purpose of understanding the rates and causes of community change. However, there is a lack of monitoring of community dynamics from the perspective of phylogeny. We attempted to understand temporal phylogenetic turnover in a 50 ha tropical forest (Barro Colorado Island, BCI) and a 20 ha subtropical forest (Dinghushan in southern China, DHS). To obtain temporal phylogenetic turnover under random conditions, two null models were used. The first shuffled the names of species, as is widely done in community phylogenetic analyses. The second simulated demographic processes with careful consideration of the variation in dispersal ability among species and the variations in mortality both among species and among size classes. With the two models, we tested the relationships between temporal phylogenetic turnover and phylogenetic similarity at different spatial scales in the two forests. Results were more consistent with previous findings under the second null model, suggesting that it is more appropriate for our purposes. With the second null model, a significantly positive relationship was detected between phylogenetic turnover and phylogenetic similarity in BCI at a 10 m × 10 m scale, potentially indicating phylogenetic density dependence. This relationship in DHS was significantly negative at three of five spatial scales, which could indicate abiotic filtering processes for community assembly. Using variation partitioning, we found phylogenetic similarity contributed to variation in temporal phylogenetic turnover in the DHS plot but not in the BCI plot. The mechanisms for community assembly in BCI and DHS thus differ from a phylogenetic perspective, and only the second null model detected this difference, indicating the importance of choosing a proper null model.
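    The first null model above — shuffling species names — is simple to sketch. In this hypothetical example, abundance values are randomly re-assigned among species names, which breaks any link between a species' phylogenetic position and its dynamics while keeping the abundance distribution fixed:

```python
import random

def shuffle_species_names(abundances, seed=None):
    """Name-shuffling null model: permute species identities while
    keeping the multiset of abundance values unchanged."""
    rng = random.Random(seed)
    names = list(abundances)
    shuffled = names[:]
    rng.shuffle(shuffled)
    return {new: abundances[old] for new, old in zip(shuffled, names)}

# Hypothetical community: species name -> abundance
community = {"sp_a": 120, "sp_b": 45, "sp_c": 8, "sp_d": 3}
null = shuffle_species_names(community, seed=42)
print(sorted(null) == sorted(community))                    # → True (same species pool)
print(sorted(null.values()) == sorted(community.values()))  # → True (same abundances)
```

Repeating the shuffle many times and recomputing the turnover metric each time yields the null distribution against which the observed value is compared.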

  14. Using Unified Modelling Language (UML) as a process-modelling technique for clinical-research process improvement.

    Science.gov (United States)

    Kumarapeli, P; De Lusignan, S; Ellis, T; Jones, B

    2007-03-01

    The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding, systematically appraise, and identify areas for improvement in a business process. Unified Modelling Language (UML) is a general-purpose modelling technique used for this purpose. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier to efficiency gains. It also identified the uneven distribution of process controls, lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It also suggested that improved structured reporting at each stage - especially from the pilot phase - parallel processing of data, and correctly positioned process controls should improve the efficiency and predictability of research projects. Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential may be underutilized within health care.

  15. On self-similarity of crack layer

    Science.gov (United States)

    Botsis, J.; Kunin, B.

    1987-01-01

    The crack layer (CL) theory of Chudnovsky (1986), based on principles of thermodynamics of irreversible processes, employs a crucial hypothesis of self-similarity. The self-similarity hypothesis states that the value of the damage density at a point x of the active zone at a time t coincides with that at the corresponding point in the initial (t = 0) configuration of the active zone, the correspondence being given by a time-dependent affine transformation of the space variables. In this paper, the implications of the self-similarity hypothesis for quasi-static CL propagation are investigated using polystyrene as a model material and examining the evolution of damage distribution along the trailing edge, which is approximated by a straight segment perpendicular to the crack path. The results support the self-similarity hypothesis adopted by the CL theory.

  16. Improving Representational Competence with Concrete Models

    Science.gov (United States)

    Stieff, Mike; Scopelitis, Stephanie; Lira, Matthew E.; DeSutter, Dane

    2016-01-01

    Representational competence is a primary contributor to student learning in science, technology, engineering, and math (STEM) disciplines and an optimal target for instruction at all educational levels. We describe the design and implementation of a learning activity that uses concrete models to improve students' representational competence and…

  17. An Improved Walk Model for Train Movement on Railway Network

    International Nuclear Information System (INIS)

    Li Keping; Mao Bohua; Gao Ziyou

    2009-01-01

    In this paper, we propose an improved walk model for simulating train movement on a railway network, in which walkers represent trains. The improved walk model is a kind of network-based simulation analysis model. Using a set of management rules for walker movement, each walker can dynamically determine its departure and arrival times at stations. In order to test the proposed method, we simulate train movement on a part of a railway network. The numerical simulation and analytical results demonstrate that the improved model is an effective tool for simulating train movement on a railway network. Moreover, it can well capture the characteristic behaviors of train scheduling in railway traffic. (general)

  18. Improved modeling of clinical data with kernel methods.

    Science.gov (United States)

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each variable with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of each variable. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems.
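    A sketch of the core idea for continuous variables: each variable's contribution is normalised by its range r so that all variables weigh equally regardless of their natural scale. The patient vectors and ranges below are hypothetical, and categorical variables (which the full clinical kernel also handles) are omitted:

```python
def clinical_kernel(a, b, ranges):
    """Per-variable similarity (r - |a_i - b_i|) / r lies in [0, 1]
    whatever the variable's scale; the kernel value is the average
    over all variables."""
    return sum((r - abs(x - y)) / r
               for x, y, r in zip(a, b, ranges)) / len(ranges)

# Hypothetical patients: age (range 80 y), parity (range 1), systolic BP (range 100)
patient_a = (64, 1, 130)
patient_b = (58, 0, 120)
print(round(clinical_kernel(patient_a, patient_b, (80, 1, 100)), 3))  # → 0.608
```

With a plain linear kernel the blood-pressure variable would dominate simply because its values are numerically largest; the range normalisation removes that artefact.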

  19. New similarity of triangular fuzzy number and its application.

    Science.gov (United States)

    Zhang, Xixiang; Ma, Weimin; Chen, Liping

    2014-01-01

    The similarity of triangular fuzzy numbers is an important metric for their application. There exist several approaches to measuring the similarity of triangular fuzzy numbers. However, some of them tend to produce large values. To make the similarity well distributed, a new method, SIAM (Shape's Indifferent Area and Midpoint), for measuring the similarity of triangular fuzzy numbers is put forward, which takes the shape's indifferent area and the midpoints of two triangular fuzzy numbers into consideration. Comparison with other similarity measurements shows the effectiveness of the proposed method. Then, it is applied to collaborative filtering recommendation to measure users' similarity. A collaborative filtering case is used to illustrate users' similarity based on the cloud model and on triangular fuzzy numbers; the result indicates that users' similarity based on triangular fuzzy numbers provides better discrimination. Finally, a simulated collaborative filtering recommendation system is developed which uses the cloud model and triangular fuzzy numbers to express users' comprehensive evaluation of items, and the results show that the accuracy of collaborative filtering recommendation based on triangular fuzzy numbers is higher.
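
    A midpoint-plus-shape similarity in the spirit of SIAM can be sketched as below. The abstract does not give the exact formula, so the graded-mean midpoint and the area term here are illustrative choices, and the fuzzy numbers are assumed to lie in [0, 1].

```python
def tri_similarity(p, q):
    """Sketch of a midpoint/shape-based similarity for triangular fuzzy numbers
    p = (a, b, c) with a <= b <= c, all assumed to lie in [0, 1].

    Combines (i) closeness of the graded-mean midpoints and (ii) closeness of
    the triangle areas, as a stand-in for the paper's shape term.
    """
    a1, b1, c1 = p
    a2, b2, c2 = q
    m1 = (a1 + 2 * b1 + c1) / 4.0       # graded-mean midpoint
    m2 = (a2 + 2 * b2 + c2) / 4.0
    area1 = (c1 - a1) / 2.0             # area under a unit-height triangle
    area2 = (c2 - a2) / 2.0
    return (1 - abs(m1 - m2)) * (1 - abs(area1 - area2))
```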

  20. Model-driven approach to data collection and reporting for quality improvement.

    Science.gov (United States)

    Curcin, Vasa; Woodcock, Thomas; Poots, Alan J; Majeed, Azeem; Bell, Derek

    2014-12-01

    Continuous data collection and analysis have been shown essential to achieving improvement in healthcare. However, the data required for local improvement initiatives are often not readily available from hospital Electronic Health Record (EHR) systems or not routinely collected. Furthermore, improvement teams are often restricted in time and funding thus requiring inexpensive and rapid tools to support their work. Hence, the informatics challenge in healthcare local improvement initiatives consists of providing a mechanism for rapid modelling of the local domain by non-informatics experts, including performance metric definitions, and grounded in established improvement techniques. We investigate the feasibility of a model-driven software approach to address this challenge, whereby an improvement model designed by a team is used to automatically generate required electronic data collection instruments and reporting tools. To that goal, we have designed a generic Improvement Data Model (IDM) to capture the data items and quality measures relevant to the project, and constructed Web Improvement Support in Healthcare (WISH), a prototype tool that takes user-generated IDM models and creates a data schema, data collection web interfaces, and a set of live reports, based on Statistical Process Control (SPC) for use by improvement teams. The software has been successfully used in over 50 improvement projects, with more than 700 users. We present in detail the experiences of one of those initiatives, Chronic Obstructive Pulmonary Disease project in Northwest London hospitals. The specific challenges of improvement in healthcare are analysed and the benefits and limitations of the approach are discussed. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
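
    The SPC reports generated by WISH are typically built from control charts. A minimal individuals (XmR) chart limit computation, a standard SPC building block rather than WISH's actual implementation, looks like this:

```python
def xmr_limits(values):
    """Control limits for an individuals (XmR) chart, the SPC chart most
    improvement teams start with. 2.66 is the standard XmR constant (3/d2,
    with d2 = 1.128 for moving ranges of size 2).
    """
    n = len(values)
    mean = sum(values) / n
    mrs = [abs(values[i] - values[i - 1]) for i in range(1, n)]   # moving ranges
    mr_bar = sum(mrs) / len(mrs)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar      # LCL, CL, UCL
```

    Points falling outside the computed limits signal special-cause variation, which is exactly what the live reports surface to improvement teams.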

  1. Testing statistical self-similarity in the topology of river networks

    Science.gov (United States)

    Troutman, Brent M.; Mantilla, Ricardo; Gupta, Vijay K.

    2010-01-01

    Recent work has demonstrated that the topological properties of real river networks deviate significantly from predictions of Shreve's random model. At the same time the property of mean self-similarity postulated by Tokunaga's model is well supported by data. Recently, a new class of network model called random self-similar networks (RSN) that combines self-similarity and randomness has been introduced to replicate important topological features observed in real river networks. We investigate if the hypothesis of statistical self-similarity in the RSN model is supported by data on a set of 30 basins located across the continental United States that encompass a wide range of hydroclimatic variability. We demonstrate that the generators of the RSN model obey a geometric distribution, and self-similarity holds in a statistical sense in 26 of these 30 basins. The parameters describing the distribution of interior and exterior generators are tested to be statistically different and the difference is shown to produce the well-known Hack's law. The inter-basin variability of RSN parameters is found to be statistically significant. We also test generator dependence on two climatic indices, mean annual precipitation and radiative index of dryness. Some indication of climatic influence on the generators is detected, but this influence is not statistically significant with the sample size available. Finally, two key applications of the RSN model to hydrology and geomorphology are briefly discussed.
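
    The geometric-distribution test on the RSN generators can be illustrated with a minimal fitting step. The {0, 1, 2, ...} support convention and the sample counts below are assumptions for illustration, not the paper's data.

```python
def geometric_mle(counts):
    """Maximum-likelihood estimate of the geometric parameter p for observed
    generator counts k = 0, 1, 2, ... with P(k) = p * (1 - p)**k.
    Since E[k] = (1 - p) / p, the MLE is p = 1 / (1 + mean).
    """
    mean = sum(counts) / len(counts)
    return 1.0 / (1.0 + mean)
```

    Separate fits to the interior and exterior generators would then give the two parameters whose statistical difference the authors link to Hack's law.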

  2. Models for Evaluating and Improving Architecture Competence

    National Research Council Canada - National Science Library

    Bass, Len; Clements, Paul; Kazman, Rick; Klein, Mark

    2008-01-01

    ... producing high-quality architectures. This report lays out the basic concepts of software architecture competence and describes four models for explaining, measuring, and improving the architecture competence of an individual...

  3. Improvement of Reynolds-Stress and Triple-Product Lag Models

    Science.gov (United States)

    Olsen, Michael E.; Lillard, Randolph P.

    2017-01-01

    The Reynolds-stress and triple-product Lag models were created with a normal stress distribution defined by a 4:3:2 ratio of streamwise, spanwise and wall-normal stresses, and a shear stress of τw = 0.3k in the log-layer region of high-Reynolds-number flat-plate flow, which implies R11(+) = 4/[(9/2)(0.3)] ≈ 2.96. More recent measurements show a more complex picture of the log-layer region at high Reynolds numbers. A first cut at improving these models, along with the direction for future refinements, is described. Comparison with recent high-Reynolds-number data shows areas where further work is needed, but also shows that inclusion of the modeled turbulent transport terms improves the prediction where they influence the solution. Additional work is needed to make the model better match experiment, but there is significant improvement in many of the details of the log-layer behavior.

  4. Improving the Yule-Nielsen modified Neugebauer model by dot surface coverages depending on the ink superposition conditions

    Science.gov (United States)

    Hersch, Roger David; Crete, Frederique

    2005-01-01

    Dot gain is different when dots are printed alone, printed in superposition with one ink or printed in superposition with two inks. In addition, the dot gain may also differ depending on which solid ink the considered halftone layer is superposed. In a previous research project, we developed a model for computing the effective surface coverage of a dot according to its superposition conditions. In the present contribution, we improve the Yule-Nielsen modified Neugebauer model by integrating into it our effective dot surface coverage computation model. Calibration of the reproduction curves mapping nominal to effective surface coverages in every superposition condition is carried out by fitting effective dot surfaces which minimize the sum of square differences between the measured reflection density spectra and reflection density spectra predicted according to the Yule-Nielsen modified Neugebauer model. In order to predict the reflection spectrum of a patch, its known nominal surface coverage values are converted into effective coverage values by weighting the contributions from different reproduction curves according to the weights of the contributing superposition conditions. We analyze the colorimetric prediction improvement brought by our extended dot surface coverage model for clustered-dot offset prints, thermal transfer prints and ink-jet prints. The color differences induced by the differences between measured reflection spectra and reflection spectra predicted according to the new dot surface estimation model are quantified on 729 different cyan, magenta, yellow patches covering the full color gamut. As a reference, these differences are also computed for the classical Yule-Nielsen modified spectral Neugebauer model incorporating a single halftone reproduction curve for each ink. Taking into account dot surface coverages according to different superposition conditions considerably improves the predictions of the Yule-Nielsen modified Neugebauer model. 

  5. Animal models to improve our understanding and treatment of suicidal behavior

    Science.gov (United States)

    Gould, T D; Georgiou, P; Brenner, L A; Brundin, L; Can, A; Courtet, P; Donaldson, Z R; Dwivedi, Y; Guillaume, S; Gottesman, I I; Kanekar, S; Lowry, C A; Renshaw, P F; Rujescu, D; Smith, E G; Turecki, G; Zanos, P; Zarate, C A; Zunszain, P A; Postolache, T T

    2017-01-01

    Worldwide, suicide is a leading cause of death. Although a sizable proportion of deaths by suicide may be preventable, it is well documented that despite major governmental and international investments in research, education and clinical practice, suicide rates have not diminished and are even increasing among several at-risk populations. Although nonhuman animals do not engage in suicidal behavior amenable to translational studies, we argue that animal model systems are necessary to investigate candidate endophenotypes of suicidal behavior and the neurobiology underlying these endophenotypes. Animal models are similarly a critical resource to help delineate treatment targets and pharmacological means to improve our ability to manage the risk of suicide. In particular, certain pathophysiological pathways to suicidal behavior, including stress and hypothalamic–pituitary–adrenal axis dysfunction, neurotransmitter system abnormalities, endocrine and neuroimmune changes, aggression, impulsivity and decision-making deficits, as well as the role of critical interactions between genetic and epigenetic factors, development and environmental risk factors, can be modeled in laboratory animals. We broadly describe human biological findings, as well as protective effects of medications such as lithium, clozapine, and ketamine associated with modifying risk of engaging in suicidal behavior, that are readily translatable to animal models. Endophenotypes of suicidal behavior, studied in animal models, are further useful for moving observed associations with harmful environmental factors (for example, childhood adversity, mechanical trauma, aeroallergens, pathogens, inflammation triggers) from association to causation, and for developing preventative strategies. Further study in animals will contribute to a more informed, comprehensive, accelerated and ultimately impactful suicide research portfolio. PMID:28398339

  6. A Similarity Analysis of Audio Signal to Develop a Human Activity Recognition Using Similarity Networks.

    Science.gov (United States)

    García-Hernández, Alejandra; Galván-Tejada, Carlos E; Galván-Tejada, Jorge I; Celaya-Padilla, José M; Gamboa-Rosales, Hamurabi; Velasco-Elizondo, Perla; Cárdenas-Vargas, Rogelio

    2017-11-21

    Human Activity Recognition (HAR) is one of the main subjects of study in the areas of computer vision and machine learning due to the great benefits that can be achieved. Examples of the study areas are: health prevention, security and surveillance, automotive research, and many others. The proposed approaches are carried out using machine learning techniques and present good results. However, it is difficult to observe how the descriptors of human activities are grouped, and a better understanding of how these descriptors behave is important for improving the ability to recognize human activities. This paper proposes a novel approach to HAR based on acoustic data and similarity networks. In this approach, we were able to characterize the sound of the activities and to identify activities by looking for similarity in the sound pattern. We evaluated the similarity of the sounds considering mainly two features: the location of the sound and the materials that were used. As a result, the materials are a better reference than the location for classifying the human activities.

  7. Mathematical evaluation of similarity factor using various weighing approaches on aceclofenac marketed formulations by model-independent method.

    Science.gov (United States)

    Soni, T G; Desai, J U; Nagda, C D; Gandhi, T R; Chotai, N P

    2008-01-01

    The US Food and Drug Administration's (FDA's) guidance for industry on dissolution testing of immediate-release solid oral dosage forms describes that drug dissolution may be the rate limiting step for drug absorption in the case of low solubility/high permeability drugs (BCS class II drugs). US FDA Guidance describes the model-independent mathematical approach proposed by Moore and Flanner for calculating a similarity factor (f2) of dissolution across a suitable time interval. In the present study, the similarity factor was calculated on dissolution data of two marketed aceclofenac tablets (a BCS class II drug) using various weighing approaches proposed by Gohel et al. The proposed approaches were compared with a conventional approach (W = 1). On the basis of consideration of variability, preference is given in the order of approach 3 > approach 2 > approach 1, as approach 3 considers batch-to-batch as well as within-samples variability and shows the best similarity profile. Approach 2 considers batch-to-batch variability with higher specificity than approach 1.
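
    The Moore and Flanner similarity factor referenced here has a standard closed form, f2 = 50·log10(100/√(1 + MSD)), where MSD is the (optionally weighted) mean squared difference between reference and test percent dissolved at each time point; a weight vector of ones recovers the conventional approach (W = 1):

```python
import math

def f2(ref, test, weights=None):
    """Moore & Flanner similarity factor f2 (FDA guidance); f2 >= 50 is
    conventionally read as 'similar'. weights=None reproduces the conventional
    unweighted case (W = 1); the weighted variants in the paper plug batch
    variability into the weights.
    """
    n = len(ref)
    w = weights or [1.0] * n
    msd = sum(wi * (r - t) ** 2 for wi, r, t in zip(w, ref, test)) / sum(w)
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))
```

    Identical profiles give f2 = 100, and an average point-wise difference of 10% dissolved sits near the f2 = 50 acceptance boundary.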

  8. An electrophysiological signature of summed similarity in visual working memory

    NARCIS (Netherlands)

    Van Vugt, Marieke K.; Sekuler, Robert; Wilson, Hugh R.; Kahana, Michael J.

    Summed-similarity models of short-term item recognition posit that participants base their judgments of an item's prior occurrence on that item's summed similarity to the ensemble of items on the remembered list. We examined the neural predictions of these models in 3 short-term recognition memory
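
    The summed-similarity decision rule can be sketched as follows. The exponential similarity-distance mapping is one common choice in this model family, not necessarily the exact form these authors fit, and the feature vectors are made up.

```python
import math

def summed_similarity(probe, study_list, c=1.0):
    """Summed similarity of a probe to the remembered list: each studied item
    contributes exp(-c * distance), so near matches dominate the sum. A 'yes,
    old' response is predicted when the sum exceeds a decision criterion.
    """
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return sum(math.exp(-c * dist(probe, item)) for item in study_list)
```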

  9. An Improved Valuation Model for Technology Companies

    Directory of Open Access Journals (Sweden)

    Ako Doffou

    2015-06-01

    Full Text Available This paper estimates some of the parameters of the Schwartz and Moon (2001) model using cross-sectional data. Stochastic costs, future financing, capital expenditures and depreciation are taken into account. Some special conditions are also set: the speed of adjustment parameters are equal; the implied half-life of the sales growth process is linked to analyst forecasts; and the risk-adjustment parameter is inferred from the company’s observed stock price beta. The model is illustrated in the valuation of Google, Amazon, eBay, Facebook and Yahoo. The improved model is far superior to the Schwartz and Moon (2001) model.

  10. Improving PSA quality of KSNP PSA model

    International Nuclear Information System (INIS)

    Yang, Joon Eon; Ha, Jae Joo

    2004-01-01

    In the RIR (Risk-informed Regulation), PSA (Probabilistic Safety Assessment) plays a major role because it provides overall risk insights for the regulatory body and utility. Therefore, the scope, the level of details and the technical adequacy of PSA, i.e. the quality of PSA is to be ensured for the successful RIR. To improve the quality of Korean PSA, we evaluate the quality of the KSNP (Korean Standard Nuclear Power Plant) internal full-power PSA model based on the 'ASME PRA Standard' and the 'NEI PRA Peer Review Process Guidance.' As a working group, PSA experts of the regulatory body and industry also participated in the evaluation process. It is finally judged that the overall quality of the KSNP PSA is between the ASME Standard Capability Category I and II. We also derive some items to be improved for upgrading the quality of the PSA up to the ASME Standard Capability Category II. In this paper, we show the result of quality evaluation, and the activities to improve the quality of the KSNP PSA model

  11. Using SQL Databases for Sequence Similarity Searching and Analysis.

    Science.gov (United States)

    Pearson, William R; Mackey, Aaron J

    2017-09-13

    Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit also introduces search_demo, a database that stores sequence similarity search results. The search_demo database is then used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. © 2017 by John Wiley & Sons, Inc. Copyright © 2017 John Wiley & Sons, Inc.
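
    The kind of genome-scale summary described here can be sketched with a tiny in-memory database. The schema, taxon labels and accession names below are illustrative only, not the actual seqdb_demo/search_demo layout from the unit.

```python
import sqlite3

# Illustrative schema only -- not the actual seqdb_demo/search_demo layout.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE hits (query TEXT, subject TEXT, taxon TEXT, evalue REAL)")
con.executemany(
    "INSERT INTO hits VALUES (?, ?, ?, ?)",
    [("ecoli_recA", "P0A7G6", "E. coli", 1e-180),
     ("ecoli_recA", "RAD51_HUMAN", "H. sapiens", 1e-40),
     ("ecoli_lacZ", "BGAL_HUMAN", "H. sapiens", 1e-12)],
)
# Best (lowest e-value) hit per query outside E. coli: the kind of
# cross-kingdom homology summary the unit builds over search results.
rows = con.execute(
    """SELECT query, subject, MIN(evalue)
       FROM hits WHERE taxon != 'E. coli'
       GROUP BY query ORDER BY query"""
).fetchall()
```

    Once search results live in tables like this, per-organism or per-kingdom summaries become single GROUP BY queries instead of ad hoc file parsing.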

  12. Infrared image background modeling based on improved Susan filtering

    Science.gov (United States)

    Yuehua, Xia

    2018-02-01

    When the SUSAN filter is used to model the infrared image background, its Gaussian filter lacks directional selectivity: after filtering, the edge information of the image is poorly preserved, leaving many edge singular points in the difference image and increasing the difficulty of target detection. To solve these problems, anisotropy is introduced in this paper, and an anisotropic Gaussian filter is used instead of the Gaussian filter in the SUSAN filter operator. Firstly, an anisotropic gradient operator computes the horizontal and vertical gradients at each pixel to determine the long-axis direction of the filter; secondly, the smoothness of the pixel's local area and neighborhood is used to calculate the filter length and the short-axis variance; then, the first-order norm of the difference between the local gray levels and their mean is used to set the threshold of the SUSAN filter; finally, the constructed SUSAN filter is convolved with the image to obtain the background image, and the difference between the background image and the original image is computed. The experimental results, evaluated by Mean Squared Error (MSE), Structural Similarity (SSIM) and local Signal-to-Noise Ratio Gain (GSNR), show that compared with the traditional filtering algorithm, the improved SUSAN filter achieves better background modeling: it effectively preserves edge information in the image and enhances dim small targets in the difference image, greatly reducing the false alarm rate.
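
    The oriented smoothing at the heart of the method can be illustrated by constructing an anisotropic Gaussian kernel directly. The kernel size, variances and angle below are arbitrary, and this is a sketch of the kernel only, not the full SUSAN pipeline.

```python
import math

def aniso_gauss_kernel(size, sigma_long, sigma_short, theta):
    """Anisotropic Gaussian kernel: elongated along direction theta, so
    smoothing follows the local edge direction instead of blurring across it."""
    half = size // 2
    ct, st = math.cos(theta), math.sin(theta)
    k = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            u = ct * x + st * y          # coordinate along the long axis
            v = -st * x + ct * y         # coordinate across it
            row.append(math.exp(-(u * u) / (2 * sigma_long ** 2)
                                - (v * v) / (2 * sigma_short ** 2)))
        k.append(row)
    s = sum(sum(r) for r in k)
    return [[val / s for val in row] for row in k]   # normalize to sum 1
```

    In the improved filter, theta would come from the local gradient direction and the two variances from the local smoothness, so each pixel gets its own oriented kernel.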

  13. An Improved QTM Subdivision Model with Approximate Equal-area

    Directory of Open Access Journals (Sweden)

    ZHAO Xuesheng

    2016-01-01

    Full Text Available To overcome the defect of large area deformation in the traditional QTM subdivision model, an improved subdivision model is proposed, based on the “parallel method” and the idea of equal-area subdivision with changed longitude-latitude. By adjusting the positions of the parallels, this model ensures that the combined grid area between two adjacent parallels does not vary, so as to control the area variation and its accumulation in the QTM grid. The experimental results show that this improved model not only retains some advantages of the traditional QTM model (such as simple calculation and a clear correspondence with the longitude/latitude grid), but also has the following advantages: ① the improved model has better convergence than the traditional one: the ratio area_max/min finally converges to 1.38, far below the 1.73 of the “parallel method”; ② the grid units in middle- and low-latitude regions have small area variations and contiguous distributions, while, with increasing subdivision level, the grid units with large variations gradually concentrate toward the poles; ③ the area variation of a grid unit does not accumulate as the subdivision level increases.

  14. Plant water potential improves prediction of empirical stomatal models.

    Directory of Open Access Journals (Sweden)

    William R L Anderegg

    Full Text Available Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, and with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
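
    As one concrete example of the empirical models under test, the classic Ball-Berry form can be written down together with an illustrative water-potential down-regulation factor. The sigmoid and its parameters are hedged stand-ins for "stomatal sensitivity to declining water potential", not the model proposed in the paper.

```python
def ball_berry(A, hs, cs, g0=0.01, g1=9.0):
    """Classic empirical model: stomatal conductance rises with assimilation A
    and relative humidity hs at the leaf surface, falls with CO2 concentration cs."""
    return g0 + g1 * A * hs / cs

def with_water_potential(gs, psi_leaf, psi50=-2.0, shape=3.0):
    """Illustrative down-regulation of gs at low (more negative) leaf water
    potential: a sigmoid of the kind drought-aware models add. psi50 (potential
    at 50% closure) and shape are made-up parameters, not the paper's."""
    return gs / (1.0 + (psi_leaf / psi50) ** shape)
```

    Without the second factor, the empirical model keeps predicting high conductance at very negative water potentials, which is exactly the over-prediction bias the study reports.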

  15. Modeling the angular motion dynamics of spacecraft with a magnetic attitude control system based on experimental studies and dynamic similarity

    Science.gov (United States)

    Kulkov, V. M.; Medvedskii, A. L.; Terentyev, V. V.; Firsyuk, S. O.; Shemyakov, A. O.

    2017-12-01

    The problem of spacecraft attitude control using electromagnetic systems interacting with the Earth's magnetic field is considered. A set of dimensionless parameters has been formed to investigate the spacecraft orientation regimes based on dynamically similar models. The results of experimental studies of small spacecraft with a magnetic attitude control system can be extrapolated to the in-orbit spacecraft motion control regimes by using the methods of the dimensional and similarity theory.

  16. Improving the physiological realism of experimental models.

    Science.gov (United States)

    Vinnakota, Kalyan C; Cha, Chae Y; Rorsman, Patrik; Balaban, Robert S; La Gerche, Andre; Wade-Martins, Richard; Beard, Daniel A; Jeneson, Jeroen A L

    2016-04-06

    The Virtual Physiological Human (VPH) project aims to develop integrative, explanatory and predictive computational models (C-Models) as numerical investigational tools to study disease, identify and design effective therapies and provide an in silico platform for drug screening. Ultimately, these models rely on the analysis and integration of experimental data. As such, the success of VPH depends on the availability of physiologically realistic experimental models (E-Models) of human organ function that can be parametrized to test the numerical models. Here, the current state of suitable E-models, ranging from in vitro non-human cell organelles to in vivo human organ systems, is discussed. Specifically, challenges and recent progress in improving the physiological realism of E-models that may benefit the VPH project are highlighted and discussed using examples from the field of research on cardiovascular disease, musculoskeletal disorders, diabetes and Parkinson's disease.

  17. Can model weighting improve probabilistic projections of climate change?

    Energy Technology Data Exchange (ETDEWEB)

    Raeisaenen, Jouni; Ylhaeisi, Jussi S. [Department of Physics, P.O. Box 48, University of Helsinki (Finland)

    2012-10-15

    Recently, Raeisaenen and co-authors proposed a weighting scheme in which the relationship between observable climate and climate change within a multi-model ensemble determines to what extent agreement with observations affects model weights in climate change projection. Within the Third Coupled Model Intercomparison Project (CMIP3) dataset, this scheme slightly improved the cross-validated accuracy of deterministic projections of temperature change. Here the same scheme is applied to probabilistic temperature change projection, under the strong limiting assumption that the CMIP3 ensemble spans the actual modeling uncertainty. Cross-validation suggests that probabilistic temperature change projections may also be improved by this weighting scheme. However, the improvement relative to uniform weighting is smaller in the tail-sensitive logarithmic score than in the continuous ranked probability score. The impact of the weighting on projection of real-world twenty-first century temperature change is modest in most parts of the world. However, in some areas mainly over the high-latitude oceans, the mean of the distribution is substantially changed and/or the distribution is considerably narrowed. The weights of individual models vary strongly with location, so that a model that receives nearly zero weight in some area may still get a large weight elsewhere. Although the details of this variation are method-specific, it suggests that the relative strengths of different models may be difficult to harness by weighting schemes that use spatially uniform model weights. (orig.)

  18. 3D Facial Similarity Measure Based on Geodesic Network and Curvatures

    Directory of Open Access Journals (Sweden)

    Junli Zhao

    2014-01-01

    Full Text Available Automated 3D facial similarity measurement is a challenging and valuable research topic in anthropology and computer graphics. It is widely used in various fields, such as criminal investigation, kinship confirmation, and face recognition. This paper proposes a 3D facial similarity measure based on a combination of geodesic and curvature features. Firstly, a geodesic network is generated for each face, with geodesics and iso-geodesics determined, and these network points are adopted as the correspondence across face models. Then, four curvature-related metrics, that is, the mean curvature, Gaussian curvature, shape index, and curvedness, are computed for each network point using a weighted average of its neighborhood points. Finally, correlation coefficients according to these metrics are computed, respectively, as the similarity measures between two 3D face models. Experiments on 3D facial models of different persons and on different 3D facial models of the same person were carried out and compared with a subjective face-similarity study. The results show that the geodesic network plays an important role in the 3D facial similarity measure. The similarity measure defined by the shape index is broadly consistent with humans' subjective evaluation, and it can measure 3D face similarity more objectively than the other indices.
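
    The final step, correlating a curvature metric across corresponding network points, reduces to a Pearson coefficient. The sketch below assumes the per-point metric values (e.g. shape index) have already been extracted for both faces at matched points.

```python
def pearson_similarity(u, v):
    """Similarity of two faces as the Pearson correlation of a curvature
    metric (e.g. shape index) sampled at corresponding geodesic-network points.
    Returns a value in [-1, 1]; 1 means identical up to affine scaling."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    sd_u = sum((a - mu) ** 2 for a in u) ** 0.5
    sd_v = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (sd_u * sd_v)
```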

  19. Improving Language Models in Speech-Based Human-Machine Interaction

    Directory of Open Access Journals (Sweden)

    Raquel Justo

    2013-02-01

    Full Text Available This work focuses on speech-based human-machine interaction. Specifically, a Spoken Dialogue System (SDS) that could be integrated into a robot is considered. Since Automatic Speech Recognition is one of the most sensitive tasks that must be confronted in such systems, the goal of this work is to improve the results obtained by this specific module. In order to do so, a hierarchical Language Model (LM) is considered. Different series of experiments were carried out using the proposed models over different corpora and tasks. The results obtained show that these models provide greater accuracy in the recognition task. Additionally, the influence of the Acoustic Modelling (AM) on the improvement percentage of the Language Models has also been explored. Finally, hierarchical Language Models have been successfully employed in a language understanding task, as shown in an additional series of experiments.

  20. Applying a Global Sensitivity Analysis Workflow to Improve the Computational Efficiencies in Physiologically-Based Pharmacokinetic Modeling

    Directory of Open Access Journals (Sweden)

    Nan-Hung Hsieh

    2018-06-01

    efficiency without discernable changes in prediction accuracy or precision. We further found six previously fixed parameters that were actually influential to the model predictions. Adding these additional influential parameters improved the model performance beyond that of the original publication while maintaining similar computational efficiency. We conclude that GSA provides an objective, transparent, and reproducible approach to improve the performance and computational efficiency of PBPK models.

  1. On improving the communication between models and data.

    Science.gov (United States)

    Dietze, Michael C; Lebauer, David S; Kooper, Rob

    2013-09-01

    The potential for model-data synthesis is growing in importance as we enter an era of 'big data', greater connectivity and faster computation. Realizing this potential requires that the research community broaden its perspective about how and why they interact with models. Models can be viewed as scaffolds that allow data at different scales to inform each other through our understanding of underlying processes. Perceptions of relevance, accessibility and informatics are presented as the primary barriers to broader adoption of models by the community, while an inability to fully utilize the breadth of expertise and data from the community is a primary barrier to model improvement. Overall, we promote a community-based paradigm to model-data synthesis and highlight some of the tools and techniques that facilitate this approach. Scientific workflows address critical informatics issues in transparency, repeatability and automation, while intuitive, flexible web-based interfaces make running and visualizing models more accessible. Bayesian statistics provides powerful tools for assimilating a diversity of data types and for the analysis of uncertainty. Uncertainty analyses enable new measurements to target those processes most limiting our predictive ability. Moving forward, tools for information management and data assimilation need to be improved and made more accessible. © 2013 John Wiley & Sons Ltd.

  2. Application of the principle of similarity to fluid mechanics

    International Nuclear Information System (INIS)

    Hendricks, R.C.; Sengers, J.V.

    1979-01-01

    Possible applications of the principle of similarity to fluid mechanics are described and illustrated. In correlating thermophysical properties of fluids, the similarity principle transcends the traditional corresponding-states principle. In fluid mechanics the similarity principle is useful in correlating flow processes that can be modeled adequately with one independent variable (i.e., one-dimensional flows). In this paper we explore the concept of transforming the conservation equations by combining similarity principles for thermophysical properties with those for fluid flow. We illustrate the usefulness of the procedure by applying such a transformation to calculate two-phase critical mass flow through a nozzle.

  3. Bayesian Data Assimilation for Improved Modeling of Road Traffic

    NARCIS (Netherlands)

    Van Hinsbergen, C.P.Y.

    2010-01-01

    This thesis deals with the optimal use of existing models that predict certain phenomena of the road traffic system. Such models are extensively used in Advanced Traffic Information Systems (ATIS), Dynamic Traffic Management (DTM) or Model Predictive Control (MPC) approaches in order to improve the

  4. A similarity-based data warehousing environment for medical images.

    Science.gov (United States)

    Teixeira, Jefferson William; Annibal, Luana Peixoto; Felipe, Joaquim Cezar; Ciferri, Ricardo Rodrigues; Ciferri, Cristina Dutra de Aguiar

    2015-11-01

    A core issue of the decision-making process in the medical field is to support the execution of analytical (OLAP) similarity queries over images in data warehousing environments. In this paper, we focus on this issue. We propose imageDWE, a non-conventional data warehousing environment that enables the storage of intrinsic features taken from medical images in a data warehouse and supports OLAP similarity queries over them. To comply with this goal, we introduce the concept of perceptual layer, which is an abstraction used to represent an image dataset according to a given feature descriptor in order to enable similarity search. Based on this concept, we propose the imageDW, an extended data warehouse with dimension tables specifically designed to support one or more perceptual layers. We also detail how to build an imageDW and how to load image data into it. Furthermore, we show how to process OLAP similarity queries composed of a conventional predicate and a similarity search predicate that encompasses the specification of one or more perceptual layers. Moreover, we introduce an index technique to improve the OLAP query processing over images. We carried out performance tests over a data warehouse environment that consolidated medical images from exams of several modalities. The results demonstrated the feasibility and efficiency of our proposed imageDWE to manage images and to process OLAP similarity queries. The results also demonstrated that the use of the proposed index technique guaranteed a great improvement in query processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Short-term electricity price forecast based on the improved hybrid model

    International Nuclear Information System (INIS)

    Dong Yao; Wang Jianzhou; Jiang He; Wu Jie

    2011-01-01

    Highlights: → The proposed models can detach the high volatility and daily seasonality of electricity prices. → The improved hybrid forecast models make full use of the advantages of the individual models. → The proposed models achieve improvements that are satisfactory for current research. → The proposed models do not require complicated decisions about the explicit model form. - Abstract: Half-hourly electricity prices in power systems are volatile, and electricity price forecasts are significant information that can help market managers and participants in the electricity market prepare their bidding strategies to maximize their benefits and utilities. However, the fluctuation of electricity prices depends on the combined effect of many factors, and their evolution is highly random. It is therefore difficult to forecast half-hourly prices with a single traditional model for the different behaviors of half-hourly prices. This paper proposes an improved forecasting model that detaches the high volatility and daily seasonality of electricity prices in New South Wales, Australia, based on Empirical Mode Decomposition, Seasonal Adjustment, and Autoregressive Integrated Moving Average. The prediction errors are analyzed and compared with those obtained from the traditional Seasonal Autoregressive Integrated Moving Average model. The comparisons demonstrate that the proposed model improves prediction accuracy noticeably.

  6. Short-term electricity price forecast based on the improved hybrid model

    Energy Technology Data Exchange (ETDEWEB)

    Dong Yao, E-mail: dongyao20051987@yahoo.cn [School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000 (China); Wang Jianzhou, E-mail: wjz@lzu.edu.cn [School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000 (China); Jiang He; Wu Jie [School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000 (China)

    2011-08-15

    Highlights: → The proposed models can detach the high volatility and daily seasonality of electricity prices. → The improved hybrid forecast models make full use of the advantages of the individual models. → The proposed models achieve improvements that are satisfactory for current research. → The proposed models do not require complicated decisions about the explicit model form. - Abstract: Half-hourly electricity prices in power systems are volatile, and electricity price forecasts are significant information that can help market managers and participants in the electricity market prepare their bidding strategies to maximize their benefits and utilities. However, the fluctuation of electricity prices depends on the combined effect of many factors, and their evolution is highly random. It is therefore difficult to forecast half-hourly prices with a single traditional model for the different behaviors of half-hourly prices. This paper proposes an improved forecasting model that detaches the high volatility and daily seasonality of electricity prices in New South Wales, Australia, based on Empirical Mode Decomposition, Seasonal Adjustment, and Autoregressive Integrated Moving Average. The prediction errors are analyzed and compared with those obtained from the traditional Seasonal Autoregressive Integrated Moving Average model. The comparisons demonstrate that the proposed model improves prediction accuracy noticeably.
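
    The 'detach seasonality, then model the remainder' strategy described in these two records can be illustrated with a deliberately simplified sketch. The averaging-based seasonal indices and the naive last-value forecast below are our own stand-ins for the paper's EMD, Seasonal Adjustment, and ARIMA components.

```python
# A deliberately simplified stand-in for 'detach daily seasonality, then
# model the remainder': estimate an additive seasonal pattern by averaging
# over periods, subtract it, and forecast with a naive last-value model.

def seasonal_indices(series, period):
    """Average each within-period position to estimate the seasonal pattern."""
    return [sum(series[i::period]) / len(series[i::period])
            for i in range(period)]

def deseasonalize(series, period):
    """Return the deseasonalized series and the seasonal indices."""
    idx = seasonal_indices(series, period)
    return [x - idx[i % period] for i, x in enumerate(series)], idx

def forecast_next(series, period):
    """Naive forecast: last deseasonalized value plus the next seasonal index."""
    adj, idx = deseasonalize(series, period)
    return adj[-1] + idx[len(series) % period]

prices = [30, 50, 40, 32, 52, 41, 31, 51, 42]  # toy price series, period 3
next_price = forecast_next(prices, 3)
```

    In a real setting the naive last-value step would be replaced by an ARIMA fit on the deseasonalized remainder.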

  7. A model for continuous improvement at a South African minerals beneficiation plant

    Directory of Open Access Journals (Sweden)

    Ras, Eugene Ras

    2015-05-01

    South Africa has a variety of mineral resources, and several minerals beneficiation plants are currently in operation. These plants must be operated effectively to ensure that the end-users of their products remain internationally competitive. To achieve this objective, plants need a sustainable continuous improvement programme. Several frameworks for continuous improvement are used, with variable success rates, in beneficiation plants around the world. However, none of these models specifically addresses continuous improvement from a minerals-processing point of view. The objective of this research study was to determine which factors are important for a continuous improvement model at a minerals beneficiation plant, and to propose a new model using lean manufacturing, six sigma, and the theory of constraints. A survey indicated that managers in the industry prefer a model that combines various continuous improvement models.

  8. Universal self-similarity of propagating populations.

    Science.gov (United States)

    Eliazar, Iddo; Klafter, Joseph

    2010-07-01

    This paper explores the universal self-similarity of propagating populations. The following general propagation model is considered: particles are randomly emitted from the origin of a d-dimensional Euclidean space and propagate randomly and independently of each other in space; all particles share a statistically common--yet arbitrary--motion pattern; each particle has its own random propagation parameters--emission epoch, motion frequency, and motion amplitude. The universally self-similar statistics of the particles' displacements and first passage times (FPTs) are analyzed: statistics which are invariant with respect to the details of the displacement and FPT measurements and with respect to the particles' underlying motion pattern. Analysis concludes that the universally self-similar statistics are governed by Poisson processes with power-law intensities and by the Fréchet and Weibull extreme-value laws.

  9. Universal self-similarity of propagating populations

    Science.gov (United States)

    Eliazar, Iddo; Klafter, Joseph

    2010-07-01

    This paper explores the universal self-similarity of propagating populations. The following general propagation model is considered: particles are randomly emitted from the origin of a d -dimensional Euclidean space and propagate randomly and independently of each other in space; all particles share a statistically common—yet arbitrary—motion pattern; each particle has its own random propagation parameters—emission epoch, motion frequency, and motion amplitude. The universally self-similar statistics of the particles’ displacements and first passage times (FPTs) are analyzed: statistics which are invariant with respect to the details of the displacement and FPT measurements and with respect to the particles’ underlying motion pattern. Analysis concludes that the universally self-similar statistics are governed by Poisson processes with power-law intensities and by the Fréchet and Weibull extreme-value laws.
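
    For reference, the Fréchet and Weibull extreme-value laws invoked in these two records have the standard forms below (textbook parameterizations with scale s > 0 and shape α > 0, not taken from the paper itself):

```latex
% Standard parameterizations, for x > 0:
F_{\text{Fr\'echet}}(x) \;=\; \exp\!\bigl(-(x/s)^{-\alpha}\bigr),
\qquad
F_{\text{Weibull}}(x) \;=\; 1 - \exp\!\bigl(-(x/s)^{\alpha}\bigr).
```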

  10. An analytic solution of a model of language competition with bilingualism and interlinguistic similarity

    Science.gov (United States)

    Otero-Espinar, M. V.; Seoane, L. F.; Nieto, J. J.; Mira, J.

    2013-12-01

    An in-depth analytic study of a model of language dynamics is presented: a model which tackles the problem of the coexistence of two languages within a closed community of speakers, taking into account bilingualism and incorporating a parameter to measure the distance between languages. Previous numerical simulations of the model showed that coexistence might lead to the survival of both languages, with monolingual speakers alongside a bilingual community, or to the extinction of the weaker tongue, depending on the parameters. In this paper, that study is completed with thorough analytical calculations that settle the results in a robust way and refine the previous results with some modifications. The present analysis makes it possible to characterize almost completely the number and nature of the equilibrium points of the model, which depend on its parameters, as well as to build a phase space based on them. We also obtain conclusions on the way the languages evolve with time. Our rigorous considerations also suggest ways to further improve the model and facilitate the comparison of its consequences with those from other approaches or with real data.

  11. Improvement of blow down model for LEAP code

    International Nuclear Information System (INIS)

    Itooka, Satoshi; Fujimata, Kazuhiro

    2003-03-01

    In the Japan Nuclear Cycle Development Institute, an improved analysis method for overheated tube rupture was studied for sodium-water reaction accidents in the steam generator of a fast breeder reactor, and the evaluation of heat transfer conditions in the tube was carried out based on studies of the critical heat flux (CHF) and post-CHF heat transfer equations in light water reactors. In this study, the improvement of the blow down model for the LEAP code was carried out taking into consideration the above-mentioned evaluation of heat transfer conditions. The improvements to the LEAP code were the following items: the addition of critical heat flux (CHF) correlations by the formula of Katto and the formula of Tong; the addition of post-CHF heat transfer equations by the formula of Condie-Bengston IV and the formula of Groeneveld 5.9; the extension of the physical properties of water and steam to the critical conditions of water; the expansion of the total number of sections and the improvement of the input form; and the addition of a function to control the valve setting by a PID control model. Calculations and verification were performed with the improved LEAP code in order to confirm the code functions. (author)

  12. Idealness and similarity in goal-derived categories: a computational examination.

    Science.gov (United States)

    Voorspoels, Wouter; Storms, Gert; Vanpaemel, Wolf

    2013-02-01

    The finding that the typicality gradient in goal-derived categories is mainly driven by ideals rather than by exemplar similarity has stood uncontested for nearly three decades. Due to the rather rigid earlier implementations of similarity, a key question has remained--that is, whether a more flexible approach to similarity would alter the conclusions. In the present study, we evaluated whether a similarity-based approach that allows for dimensional weighting could account for findings in goal-derived categories. To this end, we compared a computational model of exemplar similarity (the generalized context model; Nosofsky, Journal of Experimental Psychology: General 115:39-57, 1986) and a computational model of ideal representation (the ideal-dimension model; Voorspoels, Vanpaemel, & Storms, Psychonomic Bulletin & Review 18:1006-1014, 2011) in their accounts of exemplar typicality in ten goal-derived categories. In terms of both goodness-of-fit and generalizability, we found strong evidence for an ideal approach in nearly all categories. We conclude that focusing on a limited set of features is necessary but not sufficient to account for the observed typicality gradient. A second aspect of ideal representations--that is, that extreme rather than common, central-tendency values drive typicality--seems to be crucial.
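
    The exemplar-similarity side of this comparison can be sketched compactly. The exponential-decay similarity over a weighted city-block distance follows the generalized context model's standard form, but the feature values, attention weights, and sensitivity parameter c below are invented for illustration.

```python
import math

def weighted_distance(x, y, w):
    """City-block distance with per-dimension attention weights."""
    return sum(wi * abs(xi - yi) for xi, yi, wi in zip(x, y, w))

def typicality(item, exemplars, w, c=1.0):
    """Summed exponential similarity of `item` to all stored exemplars."""
    return sum(math.exp(-c * weighted_distance(item, e, w)) for e in exemplars)

# invented two-dimensional exemplars and attention weights
exemplars = [(0.9, 0.1), (0.8, 0.2), (0.2, 0.9)]
w = (0.7, 0.3)
central = (0.8, 0.2)     # close to most exemplars -> high typicality
peripheral = (0.0, 0.0)  # far from them -> low typicality
```

    An ideal-dimension account would instead score items by how extreme they are on a goal-relevant dimension rather than by proximity to stored exemplars.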

  13. Improved heat transfer modeling of the eye for electromagnetic wave exposures.

    Science.gov (United States)

    Hirata, Akimasa

    2007-05-01

    This study proposed an improved heat transfer model of the eye for exposure to electromagnetic (EM) waves. Particular attention was paid to the difference from the simplified heat transfer model commonly used in this field. From our computational results, the temperature elevation in the eye calculated with the simplified heat transfer model was largely influenced by the EM absorption outside the eyeball, but not when we used our improved model.

  14. Improved dust representation in the Community Atmosphere Model

    Science.gov (United States)

    Albani, S.; Mahowald, N. M.; Perry, A. T.; Scanza, R. A.; Zender, C. S.; Heavens, N. G.; Maggi, V.; Kok, J. F.; Otto-Bliesner, B. L.

    2014-09-01

    Aerosol-climate interactions constitute one of the major sources of uncertainty in assessing changes in aerosol forcing in the anthropocene as well as understanding glacial-interglacial cycles. Here we focus on improving the representation of mineral dust in the Community Atmosphere Model and assessing the impacts of the improvements in terms of direct effects on the radiative balance of the atmosphere. We simulated the dust cycle using different parameterization sets for dust emission, size distribution, and optical properties. Comparing the results of these simulations with observations of concentration, deposition, and aerosol optical depth allows us to refine the representation of the dust cycle and its climate impacts. We propose a tuning method for dust parameterizations to allow the dust module to work across the wide variety of parameter settings which can be used within the Community Atmosphere Model. Our results include a better representation of the dust cycle, most notably for the improved size distribution. The estimated net top of atmosphere direct dust radiative forcing is -0.23 ± 0.14 W/m2 for present day and -0.32 ± 0.20 W/m2 at the Last Glacial Maximum. From our study and sensitivity tests, we also derive some general relevant findings, supporting the concept that the magnitude of the modeled dust cycle is sensitive to the observational data sets and size distribution chosen to constrain the model as well as the meteorological forcing data, even within the same modeling framework, and that the direct radiative forcing of dust is strongly sensitive to the optical properties and size distribution used.

  15. Titan I propulsion system modeling and possible performance improvements

    Science.gov (United States)

    Giusti, Oreste

    This thesis features the Titan I propulsion systems and offers data-supported suggestions for improvements to increase performance. The original propulsion systems were modeled both graphically in CAD and via equations. Due to the limited availability of published information, it was necessary to create a more detailed, secondary set of models. Various engineering equations pertinent to rocket engine design were implemented in order to generate the desired extra detail. This study describes how these new models were then imported into the ESI CFD Suite. Various parameters are applied to these imported models as inputs that include, for example, bi-propellant combinations, pressure, temperatures, and mass flow rates. The results were then processed with ESI VIEW, which is visualization software. The output files were analyzed for forces in the nozzle, and various results were generated, including sea-level thrust and ISP. Experimental data are provided to compare the original engine configuration models to the derivative suggested improvement models.

  16. Distributional Similarity for Chinese: Exploiting Characters and Radicals

    Directory of Open Access Journals (Sweden)

    Peng Jin

    2012-01-01

    Distributional similarity has attracted considerable attention in the field of natural language processing as an automatic means of countering the ubiquitous problem of sparse data. Chinese is a logographic language: words consist of characters, and each character is composed of one or more radicals. The meanings of characters are usually highly related to the words which contain them. Likewise, radicals often make a predictable contribution to the meaning of a character: characters that have the same components tend to have similar or related meanings. In this paper, we utilize these properties of the Chinese language to improve Chinese word similarity computation. Given a content word, we first extract similar words based on a large corpus and a similarity score for ranking. This rank is then adjusted according to the characters and components shared between the similar word and the target word. Experiments on two gold-standard datasets show that the adjusted rank is superior and closer to human judgments than the original rank. In addition to quantitative evaluation, we examine the reasons behind errors, drawing on linguistic phenomena for our explanations.
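
    The re-ranking idea can be sketched as follows. The scoring scheme here (a fixed bonus per shared character) is our own illustrative assumption, not the paper's exact adjustment formula, and the candidate words and scores are made up.

```python
# Hedged sketch: boost distributionally similar Chinese words that share
# characters with the target word. The fixed per-character bonus is an
# illustrative assumption, not the paper's formula.

def shared_chars(a, b):
    """Number of distinct characters the two words have in common."""
    return len(set(a) & set(b))

def rerank(target, candidates, bonus=0.1):
    """candidates: list of (word, distributional_score); returns re-ranked list."""
    adjusted = [(w, s + bonus * shared_chars(target, w)) for w, s in candidates]
    return sorted(adjusted, key=lambda p: p[1], reverse=True)

# made-up candidates for target 电车 ('tram'): 汽车 and 火车 share the
# character 车 ('vehicle'), 飞机 does not
cands = [("火车", 0.50), ("汽车", 0.52), ("飞机", 0.55)]
ranked = rerank("电车", cands)
```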

  17. Improving activity transport models for water-cooled nuclear power reactors

    Energy Technology Data Exchange (ETDEWEB)

    Burrill, K.A

    2001-08-01

    Eight current models for describing radioactivity transport and radiation field growth around water-cooled nuclear power reactors have been reviewed and assessed. A frequent failing of the models is the arbitrary nature of the determination of the important processes. Nearly all modelers agree that the kinetics of deposition and release of both dissolved and particulate material must be described. Plant data must be used to guide the selection and development of suitable improved models, with a minimum of empirically-based rate constants being used. Limiting-case modelling based on experimental data is suggested as a way to simplify current models and remove their subjectivity. Improved models must consider the recent change to 'coordinated water chemistry' that appears to produce normal solubility behaviour for dissolved iron throughout the fuel cycle in PWRs, while retrograde solubility remains for dissolved nickel. Profiles are suggested for dissolved iron and nickel concentrations around the heat transport system in CANDU reactors, which operate nominally at constant chemistry, i.e., pH_T constant with time, and which use carbon steel isothermal piping. These diagrams are modified for a CANDU reactor with stainless steel piping, in order to show the changes expected. The significance of these profiles for transport in PWRs is discussed for further model improvement. (author)

  18. Correlation between social proximity and mobility similarity.

    Science.gov (United States)

    Fan, Chao; Liu, Yiding; Huang, Junming; Rong, Zhihai; Zhou, Tao

    2017-09-20

    Human behaviors exhibit ubiquitous correlations in many aspects, such as individual and collective levels, temporal and spatial dimensions, content, social and geographical layers. With rich Internet data on online behaviors becoming available, exploring human mobility similarity from the perspective of social network proximity has attracted academic interest. Existing analyses show a strong correlation between online social proximity and offline mobility similarity: mobility records of friends are significantly more similar than those of strangers, and those of friends with common neighbors are even more similar. We argue for the importance of the number and diversity of common friends, with a counterintuitive finding that the number of common friends has no positive impact on mobility similarity while the diversity plays a key role, disagreeing with previous studies. Our analysis provides a novel view for better understanding the coupling between human online and offline behaviors, and will help model and predict human behaviors based on social proximity.

  19. Extension of frequency-based dissimilarity for retrieving similar plasma waveforms

    International Nuclear Information System (INIS)

    Hochin, Teruhisa; Koyama, Katsumasa; Nakanishi, Hideya; Kojima, Mamoru

    2008-01-01

    Computer-aided assistance in finding waveforms similar to a given waveform has become indispensable for accelerating data analysis in plasma experiments. For slowly-varying waveforms and those having time-sectional oscillation patterns, methods using the Fourier series coefficients of waveforms in calculating the dissimilarity have successfully improved the performance of similar-waveform retrieval. This paper treats severely-varying waveforms and proposes two extensions to the dissimilarity of waveforms. The first extension captures the difference in importance of the Fourier series coefficients of waveforms against frequency. The second extension considers the outlines of waveforms. The correctness of the extended dissimilarity is experimentally evaluated using the metrics of information retrieval, i.e., precision and recall. The experimental results show that the extended dissimilarity improves the correctness of similarity retrieval of plasma waveforms.
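
    The frequency-weighted comparison in the first extension can be sketched as follows. The naive DFT and the 1/(k+1) weighting that emphasizes low frequencies are our own simple stand-ins, not the paper's exact scheme.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), fine for a sketch)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def dissimilarity(x, y):
    """Weighted distance between Fourier magnitude spectra; lower
    frequencies are weighted more heavily than higher ones."""
    fx, fy = dft(x), dft(y)
    return sum(abs(abs(a) - abs(b)) / (k + 1)
               for k, (a, b) in enumerate(zip(fx, fy)))

a = [0.0, 1.0, 0.0, -1.0]   # one sine period
b = [0.0, 1.1, 0.0, -1.1]   # similar shape, slightly scaled
c = [1.0, 1.0, -1.0, -1.0]  # different shape
```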

  20. Forecasting China’s Annual Biofuel Production Using an Improved Grey Model

    Directory of Open Access Journals (Sweden)

    Nana Geng

    2015-10-01

    Biofuel production in China suffers from many uncertainties due to concerns about the government's support policy and the supply of biofuel raw material. Predicting biofuel production is critical to the development of this energy industry. Building on the biofuel's characteristics, we improve the prediction precision of the conventional method by creating a dynamic fuzzy grey–Markov prediction model. Our model divides the random time series into a trend sequence and a fluctuation sequence, and comprises two improvements. First, we overcome the traditional grey model's static view of future states by using the grey equal-dimension new-information and equal-dimension increasing models to create a dynamic grey prediction model. Second, to resolve the influence of random fluctuations in the data and the weak anti-interference ability of the Markov chain model, we improve the traditional grey–Markov model with a classification of states based on fuzzy set theory. Finally, we use real data to test the dynamic fuzzy prediction model. The results prove that the model can effectively improve the accuracy of forecasts and can be applied to predict biofuel production. However, there are still some defects in our model: the approach predicts biofuel production levels from past levels dictated by economics, governmental policies, and technological developments, none of which can be forecast accurately based upon past events.
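
    For background, the conventional GM(1,1) grey model that such improvements start from can be sketched compactly. This is the textbook algorithm (accumulation, background values, least-squares parameter estimation, inverse accumulation), not the paper's fuzzy grey–Markov refinement, and the toy series is invented.

```python
import math

def gm11_predict(x0, steps):
    """Fit GM(1,1) to the positive series x0 and forecast `steps` values."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # 1-AGO accumulation
    z = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]  # background values
    # least-squares estimates of a (development coefficient) and b (grey input)
    sz = sum(z)
    szz = sum(v * v for v in z)
    sy = sum(x0[1:])
    szy = sum(v * y for v, y in zip(z, x0[1:]))
    denom = (n - 1) * szz - sz * sz
    a = (sz * sy - (n - 1) * szy) / denom
    b = (szz * sy - sz * szy) / denom

    def x1_hat(k):
        """Time-response function of the accumulated series (0-based k)."""
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    # inverse accumulation recovers forecasts of the original series
    return [x1_hat(n + s) - x1_hat(n + s - 1) for s in range(steps)]

pred = gm11_predict([10.0, 12.0, 14.4, 17.28], 1)  # toy ~20% growth series
```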

  1. Fast Depiction Invariant Visual Similarity for Content Based Image Retrieval Based on Data-driven Visual Similarity using Linear Discriminant Analysis

    Science.gov (United States)

    Wihardi, Y.; Setiawan, W.; Nugraha, E.

    2018-01-01

    In this research we build a CBIR system based on learning a distance/similarity function using Linear Discriminant Analysis (LDA) and the Histogram of Oriented Gradients (HoG) feature. Our method is invariant to the depiction of an image, covering image-to-image, sketch-to-image, and painting-to-image similarity. LDA can decrease execution time compared to the state-of-the-art method, but it still needs improvement in terms of accuracy. The inaccuracy in our experiments arises because we did not perform a sliding-window search and because of the low number of negative samples of natural-world images.

  2. An improved model for the Earth's gravity field

    Science.gov (United States)

    Tapley, B. D.; Shum, C. K.; Yuan, D. N.; Ries, J. C.; Schutz, B. E.

    1989-01-01

    An improved model for the Earth's gravity field, TEG-1, was determined using data sets from fourteen satellites, spanning the inclination ranges from 15 to 115 deg, and global surface gravity anomaly data. The satellite measurements include laser ranging data, Doppler range-rate data, and satellite-to-ocean radar altimeter data measurements, which include the direct height measurement and the differenced measurements at ground track crossings (crossover measurements). Also determined was another gravity field model, TEG-1S, which included all the data sets in TEG-1 with the exception of direct altimeter data. The effort has included an intense scrutiny of the gravity field solution methodology. The estimated parameters included geopotential coefficients complete to degree and order 50 with selected higher order coefficients, ocean and solid Earth tide parameters, Doppler tracking station coordinates and the quasi-stationary sea surface topography. Extensive error analysis and calibration of the formal covariance matrix indicate that the gravity field model is a significant improvement over previous models and can be used for general applications in geodesy.

  3. A study of the predictive model on the user reaction time using the information amount and similarity

    International Nuclear Information System (INIS)

    Lee, Sungjin; Heo, Gyunyoung; Chang, S.H.

    2004-01-01

    Human operations through a user interface are divided into two types. The first is the single operation, performed on a static interface. The second is the sequential operation, which achieves a goal by handling several displays through the operator's navigation in a CRT-based console. A sequential operation has a similar meaning to a continuous task. Most operations in recently developed computer applications correspond to sequential operations, and the single operation can be considered part of a sequential operation. In the area of HCI (human-computer interaction) evaluation, the Hick-Hyman law counts as the most powerful theory. The most important factor in the Hick-Hyman equation for choice reaction time is the quantified amount of information conveyed by a statement, stimulus, or event. Generally, we can expect that if there are similarities between a series of interfaces, the human operator is able to use his attention resources effectively; that is, the performance of the human operator is increased by the similarity. The similarity may affect the allocation of attention resources, based on the separation of the short-term sensory store (STSS) and long-term memory. Related theories include the task-switching paradigm and the law of practice. However, it is not easy to explain human operator performance with only the similarity or the information amount, and there are few theories that explain performance with the combination of the two. The objective of this paper is to propose and validate a quantitative, predictive model of user reaction time in CRT-based displays. Another objective is to validate various theories related to human cognition and perception, with the Hick-Hyman law and the law of practice as representative theories. (author)
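
    The Hick-Hyman relation referenced here is commonly written RT = a + b·H, with H the information content of the stimulus set. The sketch below uses Hick's form H = log2(n + 1) for n equally likely alternatives; the intercept and slope values are arbitrary illustrative choices, not fitted coefficients.

```python
import math

def information_bits(n_alternatives):
    """H = log2(n + 1) for n equally likely alternatives (Hick's form)."""
    return math.log2(n_alternatives + 1)

def reaction_time(n_alternatives, a=0.2, b=0.15):
    """Predicted choice reaction time [s]: RT = a + b * H.
    The intercept a and slope b here are illustrative, not fitted."""
    return a + b * information_bits(n_alternatives)

# more alternatives -> more information -> longer predicted reaction time
rt_two_choices = reaction_time(1)
rt_eight_choices = reaction_time(7)
```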

  4. More Similar than Different? Exploring Cultural Models of Depression among Latino Immigrants in Florida

    Directory of Open Access Journals (Sweden)

    Dinorah (Dina) Martinez Tyson

    2011-01-01

    The Surgeon General's report, “Culture, Race, and Ethnicity: A Supplement to Mental Health,” points to the need for subgroup-specific mental health research that explores the cultural variation and heterogeneity of the Latino population. Guided by cognitive anthropological theories of culture, we utilized ethnographic interviewing techniques to explore cultural models of depression among foreign-born Mexican (n=30), Cuban (n=30), and Colombian (n=30) participants, and island-born Puerto Ricans (n=30), who represent the largest Latino groups in Florida. Results indicate that Colombian, Cuban, Mexican, and Puerto Rican immigrants showed strong intragroup consensus in their models of depression causality, symptoms, and treatment. We found more agreement than disagreement among all four groups regarding core descriptions of depression, which was largely unexpected but can potentially be explained by their common immigrant experiences. Findings expand our understanding of Latino subgroup similarities and differences in their conceptualization of depression and can be used to inform the adaptation of culturally relevant interventions in order to better serve Latino immigrant communities.

  5. Improved dual sided doped memristor: modelling and applications

    Directory of Open Access Journals (Sweden)

    Anup Shrivastava

    2014-05-01

    The memristor, a novel and emerging electronic device with a vast range of applications, suffers from a poor frequency response and limited saturation length. In this paper, the authors present a novel and innovative device structure for the memristor with two active layers, together with a non-linear ionic drift model for it, yielding an improved frequency response and saturation length. The authors investigated and compared the I–V characteristics of the proposed model with those of conventional memristors and found better results in each case (for different window functions) for the proposed dual-sided doped memristor. For circuit-level simulation, they developed a SPICE model of the proposed memristor and designed some logic gates based on hybrid complementary metal oxide semiconductor memristive logic (memristor-ratioed logic). The proposed memristor yields improved results in terms of noise margin, delay time, and dynamic hazards compared with conventional (single-active-layer) memristors.

  6. Process-Improvement Cost Model for the Emergency Department.

    Science.gov (United States)

    Dyas, Sheila R; Greenfield, Eric; Messimer, Sherri; Thotakura, Swati; Gholston, Sampson; Doughty, Tracy; Hays, Mary; Ivey, Richard; Spalding, Joseph; Phillips, Robin

    2015-01-01

    The objective of this report is to present a simplified, activity-based costing approach for hospital emergency departments (EDs) to use with Lean Six Sigma cost-benefit analyses. The cost model complexity is reduced by removing diagnostic and condition-specific costs, thereby revealing the underlying process activities' cost inefficiencies. Examples are provided for evaluating the cost savings from reducing discharge delays and the cost impact of keeping patients in the ED (boarding) after the decision to admit has been made. The process-improvement cost model provides a needed tool in selecting, prioritizing, and validating Lean process-improvement projects in the ED and other areas of patient care that involve multiple dissimilar diagnoses.

  7. A participatory model for improving occupational health and safety: improving informal sector working conditions in Thailand.

    Science.gov (United States)

    Manothum, Aniruth; Rukijkanpanich, Jittra; Thawesaengskulthai, Damrong; Thampitakkul, Boonwa; Chaikittiporn, Chalermchai; Arphorn, Sara

    2009-01-01

    The purpose of this study was to evaluate the implementation of an Occupational Health and Safety Management Model for informal sector workers in Thailand. The studied model was characterized by participatory approaches to preliminary assessment, observation of informal business practices, group discussion and participation, and the use of environmental measurements and samples. This model consisted of four processes: capacity building, risk analysis, problem solving, and monitoring and control. The participants consisted of four local labor groups from different regions, including wood carving, hand-weaving, artificial flower making, and batik processing workers. The results demonstrated that, as a result of applying the model, the working conditions of the informal sector workers had improved to meet necessary standards. This model encouraged the use of local networks, which led to cooperation within the groups to create appropriate technologies to solve their problems. The authors suggest that this model could effectively be applied elsewhere to improve informal sector working conditions on a broader scale.

  8. Establishing an Improved Kane Dynamic Model for the 7-DOF Reconfigurable Modular Robot

    Directory of Open Access Journals (Sweden)

    Xiao Li

    2017-01-01

    Full Text Available We propose an improved Kane dynamic model for the 7-DOF modular robot, in which the model precision is increased by an improved torque function T′it. We designed three types of progressive modular joints for a reconfigurable modular robot that can be used in industrial, space, and special robots. The Kane dynamic model and the solid dynamic model are established for the 7-DOF modular robot. Experimental results are then obtained from simulations of a typical task in the established dynamic models. From an analysis of the model error, the equation for the improved torque T′it is derived, and the improved Kane dynamic model using T′it is established for the modular robot. Based on the experimental data, the undetermined coefficient matrix is proved to be five-order linear for the 7-DOF modular robot, and the explicit formulation of the Kane dynamic model is solved and can be used in the control system.

  9. New Genome Similarity Measures based on Conserved Gene Adjacencies.

    Science.gov (United States)

    Doerr, Daniel; Kowada, Luis Antonio B; Araujo, Eloi; Deshpande, Shachi; Dantas, Simone; Moret, Bernard M E; Stoye, Jens

    2017-06-01

    Many important questions in molecular biology, evolution, and biomedicine can be addressed by comparative genomic approaches. One of the basic tasks when comparing genomes is the definition of measures of similarity (or dissimilarity) between two genomes, for example, to elucidate the phylogenetic relationships between species. The power of different genome comparison methods varies with the underlying formal model of a genome. The simplest models impose the strong restriction that each genome under study must contain the same genes, each in exactly one copy. More realistic models allow several copies of a gene in a genome. One speaks of gene families, and comparative genomic methods that allow this kind of input are called gene family-based. The most powerful, but also most complex, models avoid this preprocessing of the input data and instead integrate the family assignment within the comparative analysis. Such methods are called gene family-free. In this article, we study an intermediate approach between family-based and family-free genomic similarity measures. Introducing this simpler model, called gene connections, we focus on the combinatorial aspects of gene family-free genome comparison. While in most cases the computational costs are the same as in the general family-free case, we also find an instance where the gene connections model has lower complexity. Within the gene connections model, we define three variants of genomic similarity measures with different expressive power. We give polynomial-time algorithms for two of them, while we show NP-hardness for the third, most powerful one. We also generalize the measures and algorithms to make them more robust against recent local disruptions in gene order. Our theoretical findings are supported by experimental results, proving the applicability and performance of our newly defined similarity measures.
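As a toy illustration of adjacency-based genome similarity (a simplified family-based variant, not the gene-connections measures defined in the article), one can compare the sets of unordered gene adjacencies of two genomes and take their Jaccard index:

```python
def adjacencies(genome):
    """Set of unordered neighbouring gene pairs in a linear genome (list of gene labels)."""
    return {frozenset(pair) for pair in zip(genome, genome[1:])}

def adjacency_similarity(g1, g2):
    """Jaccard index of the two adjacency sets: 1.0 for identical gene orders."""
    a1, a2 = adjacencies(g1), adjacencies(g2)
    return len(a1 & a2) / len(a1 | a2)

# swapping the last two genes breaks one adjacency (c-d) and creates one (c-e)
print(adjacency_similarity(list("abcde"), list("abced")))  # → 0.6
```

Note that adjacencies are unordered here, so reversing a whole segment preserves its internal adjacencies, which is the usual convention in rearrangement-based comparison.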

  10. Levy flights and self-similar exploratory behaviour of termite workers: beyond model fitting.

    Directory of Open Access Journals (Sweden)

    Octavio Miramontes

    Full Text Available Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties--including Lévy flights--in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale.
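The anomalous-diffusion analysis mentioned above reduces to estimating the exponent α in MSD(τ) ∝ τ^α from displacement data (α ≈ 1 for ordinary diffusion, α > 1 for superdiffusive motion such as Lévy flights). A minimal sketch on a synthetic Brownian trajectory, not the termite data:

```python
import numpy as np

def msd_exponent(x, lags):
    """Fit MSD(tau) ~ tau^alpha on log-log axes for a 1D trajectory x."""
    msd = [np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags]
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return alpha

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(100_000))   # ordinary Brownian walk: alpha ~ 1
alpha = msd_exponent(x, lags=np.arange(1, 50))
```

Applied to real trajectories, a fitted α significantly above 1 over a range of lags is the signature of the superdiffusive exploration discussed in the record.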

  11. Improved Model for Depth Bias Correction in Airborne LiDAR Bathymetry Systems

    Directory of Open Access Journals (Sweden)

    Jianhu Zhao

    2017-07-01

    Full Text Available Airborne LiDAR bathymetry (ALB is efficient and cost effective in obtaining shallow water topography, but often produces a low-accuracy sounding solution due to the effects of ALB measurements and ocean hydrological parameters. In bathymetry estimates, peak shifting of the green bottom return caused by pulse stretching induces depth bias, which is the largest error source in ALB depth measurements. The traditional depth bias model is often applied to reduce the depth bias, but it is insufficient when used with various ALB system parameters and ocean environments. Therefore, an accurate model that considers all of the influencing factors must be established. In this study, an improved depth bias model is developed through stepwise regression in consideration of the water depth, laser beam scanning angle, sensor height, and suspended sediment concentration. The proposed improved model and a traditional one are used in an experiment. The results show that the systematic deviation of depth bias corrected by the traditional and improved models is reduced significantly. Standard deviations of 0.086 and 0.055 m are obtained with the traditional and improved models, respectively. The accuracy of the ALB-derived depth corrected by the improved model is better than that corrected by the traditional model.
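The stepwise-regression idea behind the improved depth bias model can be sketched as a forward selection over candidate predictors (water depth, scan angle, sensor height, suspended sediment concentration); the data and coefficients below are synthetic and purely illustrative.

```python
import numpy as np

def forward_stepwise(X, y, names, tol=0.05):
    """Greedily add the predictor that most reduces the residual sum of squares,
    stopping when the relative improvement drops below tol."""
    selected = []
    resid = y - y.mean()
    rss = float(resid @ resid)
    while len(selected) < X.shape[1]:
        best = None
        for j in range(X.shape[1]):
            if j in selected:
                continue
            A = np.column_stack([np.ones(len(y)), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            r = y - A @ beta
            cand = float(r @ r)
            if best is None or cand < best[0]:
                best = (cand, j)
        if (rss - best[0]) / rss < tol:
            break
        rss = best[0]
        selected.append(best[1])
    return [names[j] for j in selected]

rng = np.random.default_rng(1)
names = ["depth", "scan_angle", "sensor_height", "ssc"]
X = rng.standard_normal((500, 4))
# synthetic depth bias that truly depends on depth and sediment concentration only
y = 0.8 * X[:, 0] + 0.3 * X[:, 3] + 0.05 * rng.standard_normal(500)
chosen = forward_stepwise(X, y, names)
```

On the synthetic data the procedure recovers exactly the two informative predictors and discards the irrelevant ones.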

  12. Basic study of the plant maintenance model considering plant improvement/modification

    International Nuclear Information System (INIS)

    Tsumaya, Akira; Inoue, Kazuya; Mochizuki, Masahito; Wakamatsu, Hidefumi; Arai, Eiji

    2007-01-01

    This paper proposes a maintenance activity model that considers not only routine maintenance but also functional maintenance, including improvement/modification. Required maintenance types are categorized, and limitations of the Activity Domain Integration Diagram (ADID) proposed in ISO 18435 are discussed based on a framework for life-cycle maintenance management of manufacturing assets. An extended ADID model is then proposed as a plant maintenance activity model that accounts for functional improvement/modification. (author)

  13. Improved model for solar heating of buildings

    OpenAIRE

    Lie, Bernt

    2015-01-01

    A considerable future increase in global energy use is expected, and the effects of energy conversion on the climate are already observed. Future energy conversion should thus be based on resources with negligible climate effects; solar energy is perhaps the most important of these. The presented work builds on a previous complete model for solar heating of a house; here the aim is to introduce ventilation heat recovery and improve the hot water storage model. Ventilation he...

  14. Towards a chromatographic similarity index to establish localised quantitative structure-retention relationships for retention prediction. II Use of Tanimoto similarity index in ion chromatography.

    Science.gov (United States)

    Park, Soo Hyun; Talebi, Mohammad; Amos, Ruth I J; Tyteca, Eva; Haddad, Paul R; Szucs, Roman; Pohl, Christopher A; Dolan, John W

    2017-11-10

    Quantitative Structure-Retention Relationships (QSRR) are used to predict retention times of compounds based only on their chemical structures, encoded by molecular descriptors. The main concern in QSRR modelling is to build models with high predictive power, allowing reliable retention prediction for unknown compounds across the chromatographic space. With the aim of enhancing the prediction power of the models, in this work our previously proposed QSRR modelling approach, called "federation of local models", is extended to ion chromatography to predict retention times of unknown ions, where a local model for each target ion (unknown) is created using only structurally similar ions from the dataset. A Tanimoto similarity (TS) score was utilised as a measure of structural similarity, and training sets were developed by including ions that were similar to the target ion, as defined by a threshold value. The prediction of the retention parameters (a- and b-values) of the linear solvent strength (LSS) model in ion chromatography, log k = a - b log[eluent], allows the prediction of retention times under all eluent concentrations. The QSRR models for a- and b-values were developed by a genetic algorithm-partial least squares method using the retention data of inorganic and small organic anions and larger organic cations (molecular mass up to 507) on four Thermo Fisher Scientific columns (AS20, AS19, AS11HC and CS17). The corresponding predicted retention times were calculated by fitting the predicted a- and b-values into the LSS model equation. The predicted retention times were also plotted against the experimental values to evaluate the goodness of fit and the predictive power of the models. The application of a TS threshold of 0.6 was found to produce predictive and reliable QSRR models (Q²ext(F2) > 0.8 and mean absolute error < 0.1), and hence accurate retention time predictions with an average mean absolute error of 0.2 min.
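The two ingredients of this workflow, the Tanimoto score used to select structurally similar training ions and the LSS retention model log k = a - b·log[eluent], can be sketched as follows; the fingerprints and parameter values are invented for illustration and are not from the article.

```python
import math

def tanimoto(fp1, fp2):
    """Tanimoto similarity of two binary fingerprints given as sets of 'on' bit indices."""
    inter = len(fp1 & fp2)
    return inter / (len(fp1) + len(fp2) - inter)

def retention_time(a, b, eluent_conc, t0=1.0):
    """LSS model: log10 k = a - b*log10[eluent]; retention time t_R = t0 * (1 + k)."""
    k = 10.0 ** (a - b * math.log10(eluent_conc))
    return t0 * (1.0 + k)

# select training ions above a TS threshold of 0.6 (hypothetical fingerprints)
target = {1, 4, 7, 9}
candidates = {"chloride": {1, 4, 7, 8}, "nitrate": {2, 3, 5}}
training = [name for name, fp in candidates.items() if tanimoto(target, fp) >= 0.6]
```

A local QSRR model fitted on the selected ions would then supply the a- and b-values fed into `retention_time`.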

  15. On finding similar items in a stream of transactions

    DEFF Research Database (Denmark)

    Campagna, Andrea; Pagh, Rasmus

    2010-01-01

    While there has been a lot of work on finding frequent itemsets in transaction data streams, none of these solve the problem of finding similar pairs according to standard similarity measures. This paper is a first attempt at dealing with this, arguably more important, problem. We start out with a negative result that also explains the lack of theoretical upper bounds on the space usage of data mining algorithms for finding frequent itemsets: any algorithm that (even only approximately and with a chance of error) finds the most frequent k-itemset must use space Ω(…). We then consider streams whose transactions arrive in random order, and show that, surprisingly, not only is small-space similarity mining possible for the most common similarity measures, but the mining accuracy improves with the length of the stream for any fixed support threshold.
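Small-space similarity estimation of the kind discussed here is typically built on sketches. As a minimal (non-streaming-optimised) illustration, a MinHash sketch estimates the Jaccard similarity of two item sets from the fraction of agreeing signature positions; the salted-MD5 hashing is just one deterministic choice.

```python
import hashlib

def minhash_signature(items, k=128):
    """k independent min-hashes, derived from salted MD5 digests for determinism."""
    sig = []
    for salt in range(k):
        sig.append(min(
            int(hashlib.md5(f"{salt}:{x}".encode()).hexdigest(), 16) for x in items))
    return sig

def estimated_jaccard(sig1, sig2):
    """Fraction of positions where the signatures agree estimates |A∩B| / |A∪B|."""
    return sum(a == b for a, b in zip(sig1, sig2)) / len(sig1)

a = set(range(0, 80))          # |A∩B| = 60, |A∪B| = 100, so true Jaccard = 0.6
b = set(range(20, 100))
est = estimated_jaccard(minhash_signature(a), minhash_signature(b))
```

The estimate concentrates around the true Jaccard value as the signature length k grows, with standard error roughly sqrt(J(1-J)/k).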

  16. PROPRIEDADES TERMOFÍSICAS DE SOLUÇÕES MODELO SIMILARES A CREME DE LEITE THERMOPHYSICAL PROPERTIES OF MODEL SOLUTIONS SIMILAR TO CREAM

    Directory of Open Access Journals (Sweden)

    Silvia Cristina Sobottka Rolim de MOURA

    2001-08-01

    Full Text Available The demand for UHT cream has increased considerably. Several industries have diversified and increased their production, since increasingly demanding consumers want creams with a wide range of fat contents. The objective of the present research was to determine the density, apparent viscosity and thermal diffusivity of model solutions similar to cream over the temperature range 30°C to 70°C, in order to study the influence of fat content and temperature on the physical properties of the products. The statistical design applied was a 3x5 factorial plan, with fat content and temperature levels fixed at 15%, 25% and 35%, and at 30°C, 40°C, 50°C, 60°C and 70°C, respectively (STATISTICA 6.0). The carbohydrate and protein contents were both held constant at 3%. Density was determined by the fluid-displacement method in a pycnometer; thermal diffusivity was based on the Dickerson method; and apparent viscosity was determined in a Rheotest 2.1 rheometer. The results for each property were analysed by the response-surface method. The data showed significant results, indicating that the model reliably represented the variation of these properties with fat content (%) and temperature (°C).
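The response-surface analysis described above amounts to fitting a low-order polynomial in fat content and temperature by least squares over the 3x5 factorial grid. A sketch on noiseless synthetic data follows; the coefficient values are invented, not the measured ones.

```python
import numpy as np

def design_matrix(fat, temp):
    """Quadratic response surface: z ~ c0 + c1*fat + c2*T + c3*fat*T + c4*fat^2 + c5*T^2."""
    return np.column_stack([np.ones_like(fat), fat, temp, fat * temp, fat**2, temp**2])

def fit_response_surface(fat, temp, z):
    coef, *_ = np.linalg.lstsq(design_matrix(fat, temp), z, rcond=None)
    return coef

# the 3x5 factorial grid used in the study: 3 fat levels x 5 temperatures
fat, temp = np.meshgrid([15.0, 25.0, 35.0], [30.0, 40.0, 50.0, 60.0, 70.0])
fat, temp = fat.ravel(), temp.ravel()
true = np.array([1050.0, -2.0, -0.5, 0.01, 0.0, 0.0])   # hypothetical density surface, kg/m^3
z = design_matrix(fat, temp) @ true
coef = fit_response_surface(fat, temp, z)
```

With 15 design points and 6 coefficients the fit is overdetermined, and on noiseless data the true coefficients are recovered exactly (up to round-off).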

  17. Voxel inversion of airborne electromagnetic data for improved model integration

    Science.gov (United States)

    Fiandaca, Gianluca; Auken, Esben; Kirkegaard, Casper; Vest Christiansen, Anders

    2014-05-01

    spatially constrained 1D models with 29 layers. For comparison, the SCI inversion models have been gridded on the same grid as the voxel inversion. The new voxel inversion and the classic SCI give similar data fits and inversion models. The voxel inversion decouples the geophysical model from the positions of the acquired data, and at the same time fits the data as well as the classic SCI inversion. Compared to the classic approach, the voxel inversion is better suited for informing (hydro)geological models directly and for sequential/joint/coupled (hydro)geological inversion. We believe that this new approach will facilitate the integration of geophysics, geology and hydrology for improved groundwater and environmental management.

  18. Two-halo term in stacked thermal Sunyaev-Zel'dovich measurements: Implications for self-similarity

    Science.gov (United States)

    Hill, J. Colin; Baxter, Eric J.; Lidz, Adam; Greco, Johnny P.; Jain, Bhuvnesh

    2018-04-01

    The relation between the mass and integrated electron pressure of galaxy group and cluster halos can be probed by stacking maps of the thermal Sunyaev-Zel'dovich (tSZ) effect. Perhaps surprisingly, recent observational results have indicated that the scaling relation between integrated pressure and mass follows the prediction of simple, self-similar models down to halo masses as low as 10^12.5 M⊙. Hydrodynamical simulations that incorporate energetic feedback processes suggest that gas should be depleted from such low-mass halos, thus decreasing their tSZ signal relative to self-similar predictions. Here, we build on the modeling of V. Vikram, A. Lidz, and B. Jain, Mon. Not. R. Astron. Soc. 467, 2315 (2017), 10.1093/mnras/stw3311 to evaluate the bias in the interpretation of stacked tSZ measurements due to the signal from correlated halos (the "two-halo" term), which has generally been neglected in the literature. We fit theoretical models to a measurement of the tSZ-galaxy group cross-correlation function, accounting explicitly for the one- and two-halo contributions. We find moderate evidence of a deviation from self-similarity in the pressure-mass relation, even after marginalizing over conservative miscentering effects. We explore pressure-mass models with a break at 10^14 M⊙, as well as other variants. We discuss and test for sources of uncertainty in our analysis, in particular a possible bias in the halo mass estimates and the coarse resolution of the Planck beam. We compare our findings with earlier analyses by exploring the extent to which halo isolation criteria can reduce the two-halo contribution. Finally, we show that ongoing third-generation cosmic microwave background experiments will explicitly resolve the one-halo term in low-mass groups; our methodology can be applied to these upcoming data sets to obtain a clear answer to the question of self-similarity and an improved understanding of hot gas in low-mass halos.

  19. Effect of similar elements on improving glass-forming ability of La-Ce-based alloys

    International Nuclear Information System (INIS)

    Zhang Tao; Li Ran; Pang Shujie

    2009-01-01

    To date, the effect of unlike component elements on the glass-forming ability (GFA) of alloys has been studied extensively, and it is generally recognized that the main constituent elements of alloys with high GFA usually show large differences in atomic size and strong atomic interactions (large negative heat of mixing) among them. In our recent work, a series of rare-earth-metal-based alloy compositions with superior GFA were found through the approach of coexistence of similar constituent elements. The quinary (La0.5Ce0.5)65Al10(Co0.6Cu0.4)25 bulk metallic glass (BMG) was synthesized by tilt-pour casting in rod form with a diameter up to 32 mm, a glass-forming ability significantly higher than that of ternary Ln-Al-TM alloys (Ln = La or Ce; TM = Co or Cu), whose critical diameters for glass formation are several millimeters. We suggest that the strong frustration of crystallization obtained by utilizing the coexistence of La-Ce and Co-Cu to complicate the competing crystalline phases helps to construct BMG compositions with superior GFA. The results of our present work indicate that similar elements (elements with similar atomic size and chemical properties) have a significant effect on the GFA of alloys.

  20. Approximate self-similarity in models of geological folding

    NARCIS (Netherlands)

    Budd, C.J.; Peletier, M.A.

    2000-01-01

    We propose a model for the folding of rock under the compression of tectonic plates. The model describes an elastic rock layer embedded in a viscous foundation via a fourth-order parabolic equation with a nonlinear constraint. The large-time behavior of solutions of this problem is examined and found to be

  1. School Improvement Model to Foster Student Learning

    Science.gov (United States)

    Rulloda, Rudolfo Barcena

    2011-01-01

    Many classroom teachers are still using traditional teaching methods. Traditional teaching methods are a one-way learning process, in which teachers introduce subject contents such as language arts, English, mathematics, science, and reading separately. However, the school improvement model takes into account that all students have…

  2. An improved Corten-Dolan's model based on damage and stress state effects

    International Nuclear Information System (INIS)

    Gao, Huiying; Huang, Hong Zhong; Lv, Zhiqiang; Zuo, Fang Jun; Wang, Hai Kun

    2015-01-01

    The value of exponent d in Corten-Dolan's model is generally considered to be a constant. Nonetheless, the results predicted on the basis of this statement deviate significantly from the real values. In consideration of the effects of damage and stress state on fatigue life prediction, Corten-Dolan's model is improved by redefining the exponent d used in the traditional model. The improved model performs better than the traditional one with respect to the demonstration of a fatigue failure mechanism. Predictions of fatigue life on the basis of investigations into three metallic specimens indicate that the errors caused by the improved model are significantly smaller than those induced by the traditional model. Meanwhile, predictions derived according to the improved model fall into a narrower dispersion zone than those made as per Miner's rule and the traditional model. This finding suggests that the proposed model improves the life prediction accuracy of the other two models. The predictions obtained using the improved Corten-Dolan's model differ slightly from those derived according to a model proposed in previous literature; a few life predictions obtained on the basis of the former are more accurate than those derived according to the latter. Therefore, the improved model proposed in this paper is proven to be rational and reliable given the proven validity of the existing model. Therefore, the improved model can be feasibly and credibly applied to damage accumulation and fatigue life prediction to some extent.
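Under the traditional Corten-Dolan rule (the constant-d baseline that the improved model refines), the fatigue life under multi-level loading is N = N1 / sum_i alpha_i * (s_i / s_1)^d, where s_1 is the highest stress level, alpha_i the cycle fractions, and N1 the life at s_1. A sketch with illustrative numbers:

```python
def corten_dolan_life(stresses, fractions, n1, d):
    """Fatigue life under multi-level loading via the classical Corten-Dolan rule.
    stresses: stress amplitudes; fractions: cycle fractions (summing to 1);
    n1: life at the highest stress level; d: material exponent (constant here)."""
    s1 = max(stresses)
    damage = sum(a * (s / s1) ** d for s, a in zip(stresses, fractions))
    return n1 / damage

# hypothetical two-level loading block: 30% of cycles at 400 MPa, 70% at 300 MPa
life = corten_dolan_life(stresses=[400.0, 300.0], fractions=[0.3, 0.7], n1=1e5, d=6.0)
```

The improved model of the record replaces the constant exponent d with a function of damage and stress state, which this constant-d sketch deliberately does not attempt to reproduce.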

  3. An improved Corten-Dolan's model based on damage and stress state effects

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Huiying; Huang, Hong Zhong; Lv, Zhiqiang; Zuo, Fang Jun; Wang, Hai Kun [University of Electronic Science and Technology of China, Chengdu (China)

    2015-08-15

    The value of exponent d in Corten-Dolan's model is generally considered to be a constant. Nonetheless, the results predicted on the basis of this statement deviate significantly from the real values. In consideration of the effects of damage and stress state on fatigue life prediction, Corten-Dolan's model is improved by redefining the exponent d used in the traditional model. The improved model performs better than the traditional one with respect to the demonstration of a fatigue failure mechanism. Predictions of fatigue life on the basis of investigations into three metallic specimens indicate that the errors caused by the improved model are significantly smaller than those induced by the traditional model. Meanwhile, predictions derived according to the improved model fall into a narrower dispersion zone than those made as per Miner's rule and the traditional model. This finding suggests that the proposed model improves the life prediction accuracy of the other two models. The predictions obtained using the improved Corten-Dolan's model differ slightly from those derived according to a model proposed in previous literature; a few life predictions obtained on the basis of the former are more accurate than those derived according to the latter. Therefore, the improved model proposed in this paper is proven to be rational and reliable given the proven validity of the existing model. Therefore, the improved model can be feasibly and credibly applied to damage accumulation and fatigue life prediction to some extent.

  4. An improved bipolar junction transistor model for electrical and radiation effects

    International Nuclear Information System (INIS)

    Kleiner, C.T.; Messenger, G.C.

    1982-01-01

    The use of bipolar technology in hardened electronic design requires an in-depth understanding of how the Bipolar Junction Transistor (BJT) behaves under normal electrical and radiation environments. Significant improvements in BJT process technology have been reported, and the successful use of sophisticated Computer Aided Design (CAD) tools has aided implementation with respect to specific families of hardened devices. The most advanced BJT model used to date is the Improved Gummel-Poon (IGP) model which is used in CAA programs such as the SPICE II and SLICE programs. The earlier Ebers-Moll model (ref 1 and 2) has also been updated to compare with the older Gummel-Poon model. This paper describes an adaptation of an existing computer model which incorporates the best features of both models into a new, more accurate model called the Improved Bipolar Junction Transistor model. This paper also describes a unique approach to data reduction for the B(I /SUB c/) and V /SUB BE/(ACT) vs I /SUB c/characterizations which has been successfully programmed in Basic using a Commodore PET computer. This model is described in the following sections

  5. An improved multi-value cellular automata model for heterogeneous bicycle traffic flow

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Sheng [College of Civil Engineering and Architecture, Zhejiang University, Hangzhou, 310058 China (China); Qu, Xiaobo [Griffith School of Engineering, Griffith University, Gold Coast, 4222 Australia (Australia); Xu, Cheng [Department of Transportation Management Engineering, Zhejiang Police College, Hangzhou, 310053 China (China); College of Transportation, Jilin University, Changchun, 130022 China (China); Ma, Dongfang, E-mail: mdf2004@zju.edu.cn [Ocean College, Zhejiang University, Hangzhou, 310058 China (China); Wang, Dianhai [College of Civil Engineering and Architecture, Zhejiang University, Hangzhou, 310058 China (China)

    2015-10-16

    This letter develops an improved multi-value cellular automata model for heterogeneous bicycle traffic flow taking the higher maximum speed of electric bicycles into consideration. The update rules of both regular and electric bicycles are improved, with maximum speeds of two and three cells per second respectively. Numerical simulation results for deterministic and stochastic cases are obtained. The fundamental diagrams and multiple states effects under different model parameters are analyzed and discussed. Field observations were made to calibrate the slowdown probabilities. The results imply that the improved extended Burgers cellular automata (IEBCA) model is more consistent with the field observations than previous models and greatly enhances the realism of the bicycle traffic model. - Highlights: • We proposed an improved multi-value CA model with higher maximum speed. • Update rules are introduced for heterogeneous bicycle traffic with maximum speed 2 and 3 cells/s. • Simulation results of the proposed model are consistent with field bicycle data. • Slowdown probabilities of both regular and electric bicycles are calibrated.
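The update rules can be sketched as a NaSch-style cellular automaton on a ring with per-rider maximum speeds of 2 (regular) or 3 (electric) cells per second. This is a single-occupancy simplification of the multi-value model in the record, with the slowdown probability set to zero for determinism.

```python
import random

def step(positions, speeds, vmax, length, p_slow=0.0, rng=random.Random(0)):
    """One parallel update: accelerate, brake to the gap ahead, random slowdown, move."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    new_pos = list(positions)
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]
        gap = (positions[ahead] - positions[i] - 1) % length  # free cells to the next rider
        v = min(speeds[i] + 1, vmax[i], gap)                  # accelerate, then brake
        if v > 0 and rng.random() < p_slow:                   # stochastic slowdown
            v -= 1
        speeds[i] = v
        new_pos[i] = (positions[i] + v) % length
    return new_pos

length = 50
positions = [0, 10, 20, 30, 40]
speeds = [0] * 5
vmax = [2, 3, 2, 3, 2]   # regular bicycles: 2 cells/s; electric bicycles: 3 cells/s
for _ in range(20):
    positions = step(positions, speeds, vmax, length)
```

Sweeping the density and averaging the flow over many such steps yields the fundamental diagrams discussed in the record; calibrated nonzero slowdown probabilities would reintroduce the stochastic case.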

  6. An improved multi-value cellular automata model for heterogeneous bicycle traffic flow

    International Nuclear Information System (INIS)

    Jin, Sheng; Qu, Xiaobo; Xu, Cheng; Ma, Dongfang; Wang, Dianhai

    2015-01-01

    This letter develops an improved multi-value cellular automata model for heterogeneous bicycle traffic flow taking the higher maximum speed of electric bicycles into consideration. The update rules of both regular and electric bicycles are improved, with maximum speeds of two and three cells per second respectively. Numerical simulation results for deterministic and stochastic cases are obtained. The fundamental diagrams and multiple states effects under different model parameters are analyzed and discussed. Field observations were made to calibrate the slowdown probabilities. The results imply that the improved extended Burgers cellular automata (IEBCA) model is more consistent with the field observations than previous models and greatly enhances the realism of the bicycle traffic model. - Highlights: • We proposed an improved multi-value CA model with higher maximum speed. • Update rules are introduced for heterogeneous bicycle traffic with maximum speed 2 and 3 cells/s. • Simulation results of the proposed model are consistent with field bicycle data. • Slowdown probabilities of both regular and electric bicycles are calibrated

  7. A national-scale model of linear features improves predictions of farmland biodiversity.

    Science.gov (United States)

    Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H

    2017-12-01

    Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent prevents their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and can also potentially be used to assess habitat connectivity.

  8. Improving the Statistical Modeling of the TRMM Extreme Precipitation Monitoring System

    Science.gov (United States)

    Demirdjian, L.; Zhou, Y.; Huffman, G. J.

    2016-12-01

    This project improves upon an existing extreme precipitation monitoring system based on the Tropical Rainfall Measuring Mission (TRMM) daily product (3B42) using new statistical models. The proposed system utilizes a regional modeling approach, where data from similar grid locations are pooled to increase the quality and stability of the resulting model parameter estimates to compensate for the short data record. The regional frequency analysis is divided into two stages. In the first stage, the region defined by the TRMM measurements is partitioned into approximately 27,000 non-overlapping clusters using a recursive k-means clustering scheme. In the second stage, a statistical model is used to characterize the extreme precipitation events occurring in each cluster. Instead of utilizing the block-maxima approach used in the existing system, where annual maxima are fit to the Generalized Extreme Value (GEV) probability distribution at each cluster separately, the present work adopts the peak-over-threshold (POT) method of classifying points as extreme if they exceed a pre-specified threshold. Theoretical considerations motivate the use of the Generalized-Pareto (GP) distribution for fitting threshold exceedances. The fitted parameters can be used to construct simple and intuitive average recurrence interval (ARI) maps which reveal how rare a particular precipitation event is given its spatial location. The new methodology eliminates much of the random noise that was produced by the existing models due to a short data record, producing more reasonable ARI maps when compared with NOAA's long-term Climate Prediction Center (CPC) ground based observations. The resulting ARI maps can be useful for disaster preparation, warning, and management, as well as increased public awareness of the severity of precipitation events. Furthermore, the proposed methodology can be applied to various other extreme climate records.
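The peak-over-threshold step described above can be sketched with a method-of-moments fit of the Generalized Pareto distribution to threshold exceedances, from which a return-level (average recurrence interval) estimate follows. The data below are synthetic, and the moment estimator is a simple stand-in for whatever fitting procedure the system actually uses.

```python
import numpy as np

def fit_gpd_moments(exceedances):
    """Method-of-moments GPD estimates, using mean = sigma/(1-xi) and
    var = sigma^2 / ((1-xi)^2 (1-2*xi)) solved for shape xi and scale sigma."""
    m, v = exceedances.mean(), exceedances.var()
    xi = 0.5 * (1.0 - m * m / v)
    sigma = m * (1.0 - xi)
    return xi, sigma

def return_level(threshold, xi, sigma, rate, years):
    """Precipitation level exceeded on average once every `years`,
    given `rate` threshold exceedances per year."""
    n = rate * years
    if abs(xi) < 1e-9:                      # xi -> 0 limit (exponential tail)
        return threshold + sigma * np.log(n)
    return threshold + sigma / xi * (n ** xi - 1.0)

rng = np.random.default_rng(42)
excess = rng.exponential(scale=12.0, size=5000)   # exponential = GPD with xi = 0
xi, sigma = fit_gpd_moments(excess)
level_100yr = return_level(threshold=50.0, xi=xi, sigma=sigma, rate=4.0, years=100.0)
```

Repeating this fit per spatial cluster and inverting the return-level relation at each grid point is what produces the ARI maps described in the record.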

  9. A Multi-Model Stereo Similarity Function Based on Monogenic Signal Analysis in Poisson Scale Space

    Directory of Open Access Journals (Sweden)

    Jinjun Li

    2011-01-01

    Full Text Available A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD) is proposed to match interest points and estimate the disparity map for stereo images. Local multi-model monogenic image features include the local orientation and instantaneous phase of the gray monogenic signal, the local color phase of the color monogenic signal, and the local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which extends the analytic signal to gray-level images using the Dirac operator and Laplace equation, consists of the local amplitude, local orientation, and instantaneous phase of a 2D image signal. The color monogenic signal extends the monogenic signal to color images based on Clifford algebras. The local color phase can be estimated by computing the geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experimental results on synthetic and natural stereo images demonstrate the performance of the proposed approach.
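
A minimal sketch of the gray monogenic signal, assuming the standard frequency-domain Riesz-transform construction; the function name `monogenic` and the toy image are illustrative, not the paper's LMFD descriptor.

```python
import numpy as np

def monogenic(image):
    """Riesz-transform (frequency-domain) construction of the monogenic signal.

    Returns the local amplitude, local orientation and instantaneous phase
    of a 2-D gray-level image.
    """
    rows, cols = image.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    q = np.sqrt(u ** 2 + v ** 2)
    q[0, 0] = 1.0                                 # avoid division by zero at DC
    F = np.fft.fft2(image)
    r1 = np.real(np.fft.ifft2(-1j * u / q * F))   # first Riesz component
    r2 = np.real(np.fft.ifft2(-1j * v / q * F))   # second Riesz component
    amplitude = np.sqrt(image ** 2 + r1 ** 2 + r2 ** 2)
    orientation = np.arctan2(r2, r1)
    phase = np.arctan2(np.hypot(r1, r2), image)
    return amplitude, orientation, phase

# Toy image: a bright block on a dark background
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
amplitude, orientation, phase = monogenic(img)
```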

  10. Improvement in genetic evaluation of female fertility in dairy cattle using multiple-trait models including milk production traits

    DEFF Research Database (Denmark)

    Sun, C; Madsen, P; Lund, M S

    2010-01-01

    This study investigated the improvement in genetic evaluation of fertility traits by using production traits as secondary traits (MILK = 305-d milk yield, FAT = 305-d fat yield, and PROT = 305-d protein yield). Data including 471,742 records from first lactations of Denmark Holstein cows, covering...... the years of inseminations during first lactations from 1995 to 2004, were analyzed. Six fertility traits (i.e., interval in days from calving to first insemination, calving interval, days open, interval in days from first to last insemination, numbers of inseminations per conception, and nonreturn rate...... stability and predictive ability than single-trait models for all the fertility traits, except for nonreturn rate within 56 d after first service. The stability and predictive ability for the model including MILK or PROT were similar to the model including all 3 milk production traits and better than...

  11. Decoding Decoders: Finding Optimal Representation Spaces for Unsupervised Similarity Tasks

    OpenAIRE

    Zhelezniak, Vitalii; Busbridge, Dan; Shen, April; Smith, Samuel L.; Hammerla, Nils Y.

    2018-01-01

    Experimental evidence indicates that simple models outperform complex deep networks on many unsupervised similarity tasks. We provide a simple yet rigorous explanation for this behaviour by introducing the concept of an optimal representation space, in which semantically close symbols are mapped to representations that are close under a similarity measure induced by the model's objective function. In addition, we present a straightforward procedure that, without any retraining or architectura...

  12. Improved Inference of Heteroscedastic Fixed Effects Models

    Directory of Open Access Journals (Sweden)

    Afshan Saeed

    2016-12-01

    Full Text Available Heteroscedasticity is a serious problem that distorts estimation and testing of the panel data model (PDM). Arellano (1987) proposed the White (1980) estimator for PDMs with heteroscedastic errors, but it provides erroneous inference for data sets that include high-leverage points. In this paper, our aim is to improve the heteroscedasticity-consistent covariance matrix estimator (HCCME) for panel data sets with high-leverage points. To draw robust inference for the PDM, we focus on improving the kernel bootstrap estimators proposed by Racine and MacKinnon (2007). A Monte Carlo scheme is used to assess the results.
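
For context, the White (1980) estimator that Arellano adapted to panel data can be sketched for plain OLS as follows; the function name and the synthetic heteroscedastic data are assumptions for illustration, not the paper's panel setup or its kernel bootstrap.

```python
import numpy as np

def ols_white_cov(X, y):
    """OLS coefficients with White's (1980) HC0 covariance estimator."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = (X * (resid ** 2)[:, None]).T @ X   # sum_i e_i^2 * x_i x_i'
    return beta, bread @ meat @ bread

# Synthetic data whose error variance grows with the regressor
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=500)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 + 0.3 * x)
beta, cov = ols_white_cov(X, y)
```

Unlike the classical covariance, the "sandwich" form remains consistent when error variances differ across observations, which is the property the HCCME literature builds on.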

  13. A unified frame of predicting side effects of drugs by using linear neighborhood similarity.

    Science.gov (United States)

    Zhang, Wen; Yue, Xiang; Liu, Feng; Chen, Yanlin; Tu, Shikui; Zhang, Xining

    2017-12-14

    Drug side effects are one of the main concerns in drug discovery and have gained wide attention. Investigating drug side effects is of great importance, and computational prediction can help to guide wet experiments. As far as we know, a great number of computational methods have been proposed for side effect prediction. The assumption that similar drugs may induce the same side effects is usually employed for modeling, so how to calculate drug-drug similarity is critical in side effect prediction. In this paper, we present a novel measure of drug-drug similarity named "linear neighborhood similarity", which is calculated in a drug feature space by exploring linear neighborhood relationships. Then, we transfer the similarity from the feature space into the side effect space, and predict drug side effects by propagating known side effect information through a similarity-based graph. Under a unified frame based on the linear neighborhood similarity, we propose the method "LNSM" and its extension "LNSM-SMI" to predict side effects of new drugs, and the method "LNSM-MSE" to predict unobserved side effects of approved drugs. We evaluate the performance of LNSM and LNSM-SMI in predicting side effects of new drugs, and the performance of LNSM-MSE in predicting missing side effects of approved drugs. The results demonstrate that the linear neighborhood similarity can improve the performance of side effect prediction, and that linear neighborhood similarity-based methods can outperform existing side effect prediction methods. More importantly, the proposed methods can predict side effects of new drugs as well as unobserved side effects of approved drugs under a unified frame.
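
A hedged sketch of the two ingredients the abstract describes: locally linear reconstruction weights used as similarities, followed by label propagation over the similarity graph. The neighborhood size, regularization, nonnegativity clipping, and all names are illustrative assumptions; the published LNSM method may differ in detail.

```python
import numpy as np

def linear_neighborhood_weights(X, k=4, reg=1e-3):
    """Similarity of each drug to its k nearest neighbours via the weights
    that best reconstruct its feature vector from theirs (LLE-style)."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        dist = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dist)[1:k + 1]      # skip the drug itself
        Z = X[nbrs] - X[i]                    # shift neighbours to the origin
        G = Z @ Z.T + reg * np.eye(k)         # regularised local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        w = np.clip(w, 0.0, None)             # keep weights nonnegative as similarities
        W[i, nbrs] = w / w.sum()              # each row sums to one
    return W

def propagate(W, Y, alpha=0.5, iters=50):
    """Spread known side-effect labels Y over the similarity graph W."""
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * (W @ F) + (1.0 - alpha) * Y
    return F

# Toy data: 12 drugs with 6 features and 4 candidate side effects
rng = np.random.default_rng(1)
X = rng.normal(size=(12, 6))
Y = (rng.random((12, 4)) < 0.3).astype(float)
W = linear_neighborhood_weights(X)
F = propagate(W, Y)
```

High entries of `F` for unlabeled drug-side-effect pairs are the candidate predictions; the blend parameter `alpha` trades off graph smoothness against fidelity to the known labels.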

  14. Improved Kinetic Models for High-Speed Combustion Simulation

    National Research Council Canada - National Science Library

    Montgomery, C. J; Tang, Q; Sarofim, A. F; Bockelie, M. J; Gritton, J. K; Bozzelli, J. W; Gouldin, F. C; Fisher, E. M; Chakravarthy, S

    2008-01-01

    Report developed under an STTR contract. The overall goal of this STTR project has been to improve the realism of chemical kinetics in computational fluid dynamics modeling of hydrocarbon-fueled scramjet combustors...

  15. Improvement of Axial Reflector Cross Section Generation Model for PWR Core Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Cheon Bo; Lee, Kyung Hoon; Cho, Jin Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    This paper covers the study for improvement of the axial reflector XS generation model. In the next section, the improved 1D core model is presented in detail. Reflector XS generated by the improved model is compared to that of the conventional model in the third section. Nuclear design parameters generated by these two XS sets are also covered in that section. The significance of this study is discussed in the last section. The two-step procedure has been regarded as the most practical approach for reactor core design because it offers core design parameters quite rapidly within an acceptable range. Thus this approach is adopted for the SMART (System-integrated Modular Advanced ReacTor) core design in KAERI with the DeCART2D1.1/MASTER4.0 (hereafter noted as DeCART2D/MASTER) code system. Within the framework of the two-step-procedure-based SMART core design, various studies have been conducted to improve core design reliability and efficiency. One of them is the improvement of reflector cross section (XS) generation models. While the conventional FA/reflector two-node model used for most core designs to generate reflector XS cannot consider the actual configuration of fuel rods that intersect at right angles with the axial reflectors, the revised model reflects the axial fuel configuration by introducing a radially simplified core model. The significance of the model revision is evaluated by observing the HGC generated by DeCART2D, the reflector XS, and the core design parameters generated by adopting the two models. It is verified that about 30 ppm of CBC error can be reduced and the maximum Fq error decreases from about 6 % to 2.5 % by applying the revised model. Errors in AO and axial power shapes are also reduced significantly. Therefore it can be concluded that the simplified 1D core model improves the accuracy of the axial reflector XS and enhances the reliability of the two-step procedure.
Since it is hard for core designs to be free from the two-step approach, it is necessary to find

  16. Narrowing the agronomic yield gap with improved nitrogen use efficiency: a modeling approach.

    Science.gov (United States)

    Ahrens, T D; Lobell, D B; Ortiz-Monasterio, J I; Li, Y; Matson, P A

    2010-01-01

    Improving nitrogen use efficiency (NUE) in the major cereals is critical for more sustainable nitrogen use in high-input agriculture, but our understanding of the potential for NUE improvement is limited by a paucity of reliable on-farm measurements. Limited on-farm data suggest that agronomic NUE (AE(N)) is lower and more variable than data from trials conducted at research stations, on which much of our understanding of AE(N) has been built. The purpose of this study was to determine the magnitude and causes of variability in AE(N) across an agricultural region, which we refer to as the achievement distribution of AE(N). The distribution of simulated AE(N) in 80 farmers' fields in an irrigated wheat system in the Yaqui Valley, Mexico, was compared with trials at a local research center (International Wheat and Maize Improvement Center; CIMMYT). An agroecosystem simulation model WNMM was used to understand factors controlling yield, AE(N), gaseous N emissions, and nitrate leaching in the region. Simulated AE(N) in the Yaqui Valley was highly variable, and mean on-farm AE(N) was 44% lower than trials with similar fertilization rates at CIMMYT. Variability in residual N supply was the most important factor determining simulated AE(N). Better split applications of N fertilizer led to almost a doubling of AE(N), increased profit, and reduced N pollution, and even larger improvements were possible with technologies that allow for direct measurement of soil N supply and plant N demand, such as site-specific nitrogen management.

  17. On Improving 4-km Mesoscale Model Simulations

    Science.gov (United States)

    Deng, Aijun; Stauffer, David R.

    2006-03-01

    A previous study showed that use of analysis-nudging four-dimensional data assimilation (FDDA) and improved physics in the fifth-generation Pennsylvania State University National Center for Atmospheric Research Mesoscale Model (MM5) produced the best overall performance on a 12-km-domain simulation, based on the 18 19 September 1983 Cross-Appalachian Tracer Experiment (CAPTEX) case. However, reducing the simulated grid length to 4 km had detrimental effects. The primary cause was likely the explicit representation of convection accompanying a cold-frontal system. Because no convective parameterization scheme (CPS) was used, the convective updrafts were forced on coarser-than-realistic scales, and the rainfall and the atmospheric response to the convection were too strong. The evaporative cooling and downdrafts were too vigorous, causing widespread disruption of the low-level winds and spurious advection of the simulated tracer. In this study, a series of experiments was designed to address this general problem involving 4-km model precipitation and gridpoint storms and associated model sensitivities to the use of FDDA, planetary boundary layer (PBL) turbulence physics, grid-explicit microphysics, a CPS, and enhanced horizontal diffusion. Some of the conclusions include the following: 1) Enhanced parameterized vertical mixing in the turbulent kinetic energy (TKE) turbulence scheme has shown marked improvements in the simulated fields. 2) Use of a CPS on the 4-km grid improved the precipitation and low-level wind results. 3) Use of the Hong and Pan Medium-Range Forecast PBL scheme showed larger model errors within the PBL and a clear tendency to predict much deeper PBL heights than the TKE scheme. 4) Combining observation-nudging FDDA with a CPS produced the best overall simulations. 5) Finer horizontal resolution does not always produce better simulations, especially in convectively unstable environments, and a new CPS suitable for 4-km resolution is needed. 6

  18. Gaussian mixture models and semantic gating improve reconstructions from human brain activity

    Directory of Open Access Journals (Sweden)

    Sanne Schoenmakers

    2015-01-01

    Full Text Available Better acquisition protocols and analysis techniques are making it possible to use fMRI to obtain highly detailed visualizations of brain processes. In particular we focus on the reconstruction of natural images from BOLD responses in visual cortex. We expand our linear Gaussian framework for percept decoding with Gaussian mixture models to better represent the prior distribution of natural images. Reconstruction of such images then boils down to probabilistic inference in a hybrid Bayesian network. In our set-up, different mixture components correspond to different character categories. Our framework can automatically infer higher-order semantic categories from lower-level brain areas. Furthermore the framework can gate semantic information from higher-order brain areas to enforce the correct category during reconstruction. When categorical information is not available, we show that automatically learned clusters in the data give a similar improvement in reconstruction. The hybrid Bayesian network leads to highly accurate reconstructions in both supervised and unsupervised settings.

  19. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    Directory of Open Access Journals (Sweden)

    A. Endalamaw

    2017-09-01

    Full Text Available Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW in Interior Alaska: one nearly permafrost-free (LowP sub-basin and one permafrost-dominated (HighP sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC mesoscale hydrological model to simulate runoff, evapotranspiration (ET, and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub

  20. Attitude Similarity and Therapist Credibility as Predictors of Attitude Change and Improvement in Psychotherapy

    Science.gov (United States)

    Beutler, Larry E.; And Others

    1975-01-01

    This study attempts to (1) assess the effects of therapist credibility and patient-therapist similarity on interpersonal persuasion; and (2) to further assess the relationship between patient attitude change and psychotherapy outcome. (HMV)

  1. A New Performance Improvement Model: Adding Benchmarking to the Analysis of Performance Indicator Data.

    Science.gov (United States)

    Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu

    2016-01-01

    A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limit-that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need to have a more predictable process prompted the need to control variation by an action plan. The action plan was successful, as noted by the shift in the 2014 data, compared to the historical average, and, in addition, the variation was reduced. The model is subject to limitations: For example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
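
Steps 2 through 5 above can be sketched as a stability check followed by a benchmark comparison; the 3-sigma rule, the toy monthly HAI rates, and the benchmark value are illustrative assumptions, not the hospital's actual data or thresholds.

```python
import numpy as np

def is_stable(rates):
    # Step 3: only common-cause (random) variation if every point lies
    # within the mean +/- 3 standard deviations (Shewhart-style check).
    mu = np.mean(rates)
    sigma = np.std(rates, ddof=1)
    return bool(np.all(np.abs(rates - mu) <= 3.0 * sigma))

def compare(rates, benchmark):
    # Steps 4-5: once stable, compare the process average against an
    # internal threshold or an internal/external benchmark (lower is better).
    return "meets benchmark" if np.mean(rates) <= benchmark else "above benchmark"

# Toy monthly HAI rates per 1,000 patient-days
monthly_hai_rate = np.array([2.1, 1.8, 2.4, 2.0, 1.9, 2.2,
                             2.3, 1.7, 2.0, 2.1, 1.9, 2.2])
verdict = (compare(monthly_hai_rate, benchmark=2.5)
           if is_stable(monthly_hai_rate)
           else "control variation first")
```

The guard mirrors the model's key rule: a PI is benchmarked only after special-cause variation has been ruled out or controlled by an action plan.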

  2. Algorithm Research of Individualized Travelling Route Recommendation Based on Similarity

    Directory of Open Access Journals (Sweden)

    Xue Shan

    2015-01-01

    Full Text Available Although commercial recommendation systems have made certain achievements in travelling route development, recommendation systems face a series of challenges because of people’s increasing interest in travelling. The core content of a recommendation system is its recommendation algorithm, and the strengths of the algorithm carry over to the system as a whole. On this basis, this paper analyses the traditional collaborative filtering algorithm. After illustrating its deficiencies, such as rating unicity and rating matrix sparsity, this paper proposes an improved algorithm combining a user-based multi-similarity algorithm with a user-based element similarity algorithm, so as to compensate for the deficiencies of the traditional algorithm within a controllable range. Experimental results show that the improved algorithm has obvious advantages in comparison with the traditional one, and has an obvious effect in remedying rating matrix sparsity and rating unicity.
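
For reference, the traditional user-based collaborative filtering baseline the paper starts from might look like this minimal sketch (cosine similarity plus a similarity-weighted prediction). The toy rating matrix and function names are assumptions, and the paper's improved multi-similarity and element-similarity measures are not reproduced here.

```python
import numpy as np

def user_similarity(R):
    """Cosine similarity between user rating vectors (0 = unrated)."""
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    U = R / norms
    return U @ U.T

def predict(R, S, user, item, k=2):
    """Similarity-weighted average of the k most similar users' ratings."""
    rated = np.where(R[:, item] > 0)[0]
    rated = rated[rated != user]
    if rated.size == 0:
        return 0.0
    nbrs = rated[np.argsort(-S[user, rated])][:k]
    w = S[user, nbrs]
    den = np.abs(w).sum()
    return float(w @ R[nbrs, item] / den) if den > 0 else 0.0

# Toy user x route rating matrix (rows: users, columns: travel routes)
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)
S = user_similarity(R)
score = predict(R, S, user=1, item=1)
```

The sparsity problem the paper targets is visible even here: most entries of `R` are zero, so predictions rest on very few overlapping ratings.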

  3. Similar estimates of temperature impacts on global wheat yield by three independent methods

    DEFF Research Database (Denmark)

    Liu, Bing; Asseng, Senthold; Müller, Christoph

    2016-01-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produ......-method ensemble, it was possible to quantify ‘method uncertainty’ in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.......The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce...... similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries...

  4. Similar Estimates of Temperature Impacts on Global Wheat Yield by Three Independent Methods

    Science.gov (United States)

    Liu, Bing; Asseng, Senthold; Muller, Christoph; Ewart, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.

    2016-01-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  5. Similar estimates of temperature impacts on global wheat yield by three independent methods

    Science.gov (United States)

    Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; Rosenzweig, Cynthia; Aggarwal, Pramod K.; Alderman, Phillip D.; Anothai, Jakarat; Basso, Bruno; Biernath, Christian; Cammarano, Davide; Challinor, Andy; Deryng, Delphine; Sanctis, Giacomo De; Doltra, Jordi; Fereres, Elias; Folberth, Christian; Garcia-Vila, Margarita; Gayler, Sebastian; Hoogenboom, Gerrit; Hunt, Leslie A.; Izaurralde, Roberto C.; Jabloun, Mohamed; Jones, Curtis D.; Kersebaum, Kurt C.; Kimball, Bruce A.; Koehler, Ann-Kristin; Kumar, Soora Naresh; Nendel, Claas; O'Leary, Garry J.; Olesen, Jørgen E.; Ottman, Michael J.; Palosuo, Taru; Prasad, P. V. Vara; Priesack, Eckart; Pugh, Thomas A. M.; Reynolds, Matthew; Rezaei, Ehsan E.; Rötter, Reimund P.; Schmid, Erwin; Semenov, Mikhail A.; Shcherbak, Iurii; Stehfest, Elke; Stöckle, Claudio O.; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Thorburn, Peter; Waha, Katharina; Wall, Gerard W.; Wang, Enli; White, Jeffrey W.; Wolf, Joost; Zhao, Zhigan; Zhu, Yan

    2016-12-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  6. Learning semantic and visual similarity for endomicroscopy video retrieval.

    Science.gov (United States)

    Andre, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas

    2012-06-01

    Content-based image retrieval (CBIR) is a valuable computer vision technique which is increasingly being applied in the medical community for diagnosis support. However, traditional CBIR systems only deliver visual outputs, i.e., images having a similar appearance to the query, which is not directly interpretable by the physicians. Our objective is to provide a system for endomicroscopy video retrieval which delivers both visual and semantic outputs that are consistent with each other. In a previous study, we developed an adapted bag-of-visual-words method for endomicroscopy retrieval, called "Dense-Sift," that computes a visual signature for each video. In this paper, we present a novel approach to complement visual similarity learning with semantic knowledge extraction, in the field of in vivo endomicroscopy. We first leverage a semantic ground truth based on eight binary concepts, in order to transform these visual signatures into semantic signatures that reflect how much the presence of each semantic concept is expressed by the visual words describing the videos. Using cross-validation, we demonstrate that, in terms of semantic detection, our intuitive Fisher-based method transforming visual-word histograms into semantic estimations outperforms support vector machine (SVM) methods with statistical significance. In a second step, we propose to improve retrieval relevance by learning an adjusted similarity distance from a perceived similarity ground truth. As a result, our distance learning method allows to statistically improve the correlation with the perceived similarity. We also demonstrate that, in terms of perceived similarity, the recall performance of the semantic signatures is close to that of visual signatures and significantly better than those of several state-of-the-art CBIR methods. 
The semantic signatures are thus able to communicate high-level medical knowledge while being consistent with the low-level visual signatures and much shorter than them.

  7. Forecasting experiments of a dynamical-statistical model of the sea surface temperature anomaly field based on the improved self-memorization principle

    Science.gov (United States)

    Hong, Mei; Chen, Xi; Zhang, Ren; Wang, Dong; Shen, Shuanghe; Singh, Vijay P.

    2018-04-01

    With the objective of tackling the problem of inaccurate long-term El Niño-Southern Oscillation (ENSO) forecasts, this paper develops a new dynamical-statistical forecast model of the sea surface temperature anomaly (SSTA) field. To avoid single initial prediction values, a self-memorization principle is introduced to improve the dynamical reconstruction model, thus making the model more appropriate for describing such chaotic systems as ENSO events. The improved dynamical-statistical model of the SSTA field is used to predict SSTA in the equatorial eastern Pacific and during El Niño and La Niña events. The long-term step-by-step forecast results and cross-validated retroactive hindcast results of time series T1 and T2 are found to be satisfactory, with a Pearson correlation coefficient of approximately 0.80 and a mean absolute percentage error (MAPE) of less than 15 %. The corresponding forecast SSTA field is accurate in that not only is the forecast shape similar to the actual field but also the contour lines are essentially the same. This model can also be used to forecast the ENSO index. The temporal correlation coefficient is 0.8062, and the MAPE value of 19.55 % is small. The difference between forecast results in spring and those in autumn is not high, indicating that the improved model can overcome the spring predictability barrier to some extent. Compared with six mature models published previously, the present model has an advantage in prediction precision and length, and is a novel exploration of the ENSO forecast method.

  8. Towards improved modeling of steel-concrete composite wall elements

    International Nuclear Information System (INIS)

    Vecchio, Frank J.; McQuade, Ian

    2011-01-01

    Highlights: → Improved analysis of double skinned steel concrete composite containment walls. → Smeared rotating crack concept applied in formulation of new analytical model. → Model implemented into finite element program; numerically stable and robust. → Models behavior of shear-critical elements with greater ease and improved accuracy. → Accurate assessments of strength, deformation and failure mode of test specimens. - Abstract: The Disturbed Stress Field Model, a smeared rotating crack model for reinforced concrete based on the Modified Compression Field Theory, is adapted to the analysis of double-skin steel-concrete wall elements. The computational model is then incorporated into a two-dimensional nonlinear finite element analysis algorithm. Verification studies are undertaken by modeling various test specimens, including panel elements subject to uniaxial compression, panel elements subjected to in-plane shear, and wall specimens subjected to reversed cyclic lateral displacements. In all cases, the analysis model is found to provide accurate calculations of structural load capacities, pre- and post-peak displacement responses, post-peak ductility, chronology of damage, and ultimate failure mode. Minor deficiencies are found in regards to the accurate portrayal of faceplate buckling and the effects of interfacial slip between the faceplates and the concrete. Other aspects of the modeling procedure that are in need of further research and development are also identified and discussed.

  9. An Improved Inventory Control Model for the Brazilian Navy Supply System

    Science.gov (United States)

    2001-12-01

    The Brazilian Navy Inventory Control Point (ICP), the Centro de Controle de Inventario da Marinha, developed an empirical model called SPAADA... Thesis by Moreira, Naval Postgraduate School, Monterey, California; approved for public release, distribution is unlimited.

  10. How can model comparison help improving species distribution models?

    Directory of Open Access Journals (Sweden)

    Emmanuel Stephan Gritti

    Full Text Available Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie on the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate the current distribution of the three species relatively accurately. The process-based model performs almost as well as the correlative model, although the parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could however be improved by integrating a more realistic representation of the species' resistance to water stress, for instance, advocating for continued efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes.

  11. An improved active contour model for glacial lake extraction

    Science.gov (United States)

    Zhao, H.; Chen, F.; Zhang, M.

    2017-12-01

The active contour model is a widely used method in visual tracking and image segmentation. Driven by an objective function, the initial curve defined in the model evolves to a stable state, the desired result in a given image. As a typical region-based active contour model, the C-V model detects weak boundaries well and is robust to noise, which gives it great potential for glacial lake extraction. Glacial lakes are sensitive indicators of global climate change, so accurately delineating glacial lake boundaries is essential for evaluating the hydrologic and living environment. However, the current approaches to glacial lake extraction, mainly water-index methods and recognition/classification methods, are difficult to apply directly at large scales due to the diversity of glacial lakes and the many confounding factors in the imagery, such as image noise, shadows, snow and ice. Given the above advantages of the C-V model and the difficulties of glacial lake extraction, we introduce the signed pressure force function to improve the C-V model and adapt it to glacial lake extraction. To assess the extraction results, three typical glacial lake development sites were selected, in the Altai Mountains, the central Himalayas and south-eastern Tibet; Landsat 8 OLI imagery served as the experimental data source and Google Earth imagery as reference data for verifying the results. The experiments suggest that the improved active contour model we propose can effectively discriminate glacial lakes from complex background with a higher Kappa coefficient (0.895), especially for small glacial lakes, which constitute weak information in the image. Our findings provide a new approach to improving accuracy where small glacial lakes predominate, and open the possibility of automated glacial lake mapping over large areas.
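The signed pressure force (SPF) that steers a region-based contour can be sketched in a few lines. The toy image, the crude initialization, and the function name below are our illustration of the general idea, not the authors' implementation:

```python
import numpy as np

# Sketch of a signed pressure force for a C-V style contour: positive in
# regions brighter than the inside/outside mean midpoint (pushing the
# contour outward), negative elsewhere. Toy data only.
def spf(img, phi):
    c1 = img[phi > 0].mean()                 # mean intensity inside the contour
    c2 = img[phi <= 0].mean()                # mean intensity outside
    f = img - (c1 + c2) / 2.0
    return f / (np.abs(f).max() + 1e-12)     # normalized to [-1, 1]

img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0                        # bright "lake" on dark background
phi = np.ones_like(img)
phi[0, 0] = -1                               # crude init: almost everything inside

s = spf(img, phi)
```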

  12. Towards Personalized Medicine: Leveraging Patient Similarity and Drug Similarity Analytics

    Science.gov (United States)

    Zhang, Ping; Wang, Fei; Hu, Jianying; Sorrentino, Robert

    2014-01-01

The rapid adoption of electronic health records (EHR) provides a comprehensive source for exploratory and predictive analytics to support clinical decision-making. In this paper, we investigate how to utilize EHR data to tailor treatments to individual patients based on their likelihood of responding to a therapy. We construct a heterogeneous graph which includes two domains (patients and drugs) and encodes three relationships (patient similarity, drug similarity, and patient-drug prior associations). We describe a novel approach for performing a label propagation procedure to spread the label information representing the effectiveness of different drugs for different patients over this heterogeneous graph. The proposed method has been applied to a real-world EHR dataset to help identify personalized treatments for hypercholesterolemia. The experimental results demonstrate the effectiveness of the approach and suggest that the combination of appropriate patient similarity and drug similarity analytics could lead to actionable insights for personalized medicine. In particular, by leveraging drug similarity in combination with patient similarity, our method can perform well even on new or rarely used drugs for which there are few records of known past performance. PMID:25717413
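Label propagation over such a heterogeneous graph can be sketched numerically. The similarities, associations, and parameters below are synthetic, and the paper's exact normalization may differ:

```python
import numpy as np

# Illustrative label propagation on a patient-drug graph: spread a seed
# "drug was effective" label through patient and drug similarities.
def normalize(W):
    """Symmetric normalization D^-1/2 W D^-1/2 of an affinity matrix."""
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
    return W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def propagate(S, Y, alpha=0.8, iters=100):
    """Iterate F <- alpha*S*F + (1-alpha)*Y until (approximate) convergence."""
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F

P = np.array([[0.0, 0.9], [0.9, 0.0]])   # patient-patient similarity (2 patients)
D = np.array([[0.0, 0.8], [0.8, 0.0]])   # drug-drug similarity (2 drugs)
A = np.array([[1.0, 0.0], [0.0, 0.0]])   # prior association: patient 0 <-> drug 0
W = np.block([[P, A], [A.T, D]])         # heterogeneous graph adjacency
S = normalize(W)

Y = np.zeros((4, 1))
Y[0] = 1.0                               # seed label at patient 0
F = propagate(S, Y)                      # propagated effectiveness scores
```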

  13. The synaptonemal complex of basal metazoan hydra: more similarities to vertebrate than invertebrate meiosis model organisms.

    Science.gov (United States)

    Fraune, Johanna; Wiesner, Miriam; Benavente, Ricardo

    2014-03-20

The synaptonemal complex (SC) is an evolutionarily well-conserved structure that mediates chromosome synapsis during prophase of the first meiotic division. Although its structure is conserved, the characterized protein components in the current metazoan meiosis model systems (Drosophila melanogaster, Caenorhabditis elegans, and Mus musculus) show no sequence homology, calling into question a single evolutionary origin of the SC. However, our recent studies revealed the monophyletic origin of the mammalian SC protein components, many of which are ancient in Metazoa and already present in the cnidarian Hydra. Remarkably, a comparison between different model systems disclosed a great similarity between the SC components of Hydra and mammals, while the proteins of the ecdysozoan systems (D. melanogaster and C. elegans) differ significantly. In this review, we introduce the basal-branching metazoan species Hydra as a potential novel invertebrate model system for meiosis research, particularly for the investigation of SC evolution, function and assembly. Available methods for SC research in Hydra are also summarized. Copyright © 2014. Published by Elsevier Ltd.

  14. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    Science.gov (United States)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are used and documented in hundreds of peer-reviewed publications each year, and likely more are applied in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined the impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions versus improved data and enhanced assumptions on model outcomes and, ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  15. Modelled female sale options demonstrate improved profitability in northern beef herds.

    Science.gov (United States)

    Niethe, G E; Holmes, W E

    2008-12-01

    To examine the impact of improving the average value of cows sold, the risk of decreasing the number weaned, and total sales on the profitability of northern Australian cattle breeding properties. Gather, model and interpret breeder herd performances and production parameters on properties from six beef-producing regions in northern Australia. Production parameters, prices, costs and herd structure were entered into a herd simulation model for six northern Australian breeding properties that spay females to enhance their marketing options. After the data were validated by management, alternative management strategies were modelled using current market prices and most likely herd outcomes. The model predicted a close relationship between the average sale value of cows, the total herd sales and the gross margin/adult equivalent. Keeping breeders out of the herd to fatten generally improves their sale value, and this can be cost-effective, despite the lower number of progeny produced and the subsequent reduction in total herd sales. Furthermore, if the price of culled cows exceeds the price of culled heifers, provided there are sufficient replacement pregnant heifers available to maintain the breeder herd nucleus, substantial gains in profitability can be obtained by decreasing the age at which cows are culled from the herd. Generalised recommendations on improving reproductive performance are not necessarily the most cost-effective strategy to improve breeder herd profitability. Judicious use of simulation models is essential to help develop the best turnoff strategies for females and to improve station profitability.

  16. Tissue Feature-Based and Segmented Deformable Image Registration for Improved Modeling of Shear Movement of Lungs

    International Nuclear Information System (INIS)

    Xie Yaoqin; Chao Ming; Xing Lei

    2009-01-01

    Purpose: To report a tissue feature-based image registration strategy with explicit inclusion of the differential motions of thoracic structures. Methods and Materials: The proposed technique started with auto-identification of a number of corresponding points with distinct tissue features. The tissue feature points were found by using the scale-invariant feature transform method. The control point pairs were then sorted into different 'colors' according to the organs in which they resided and used to model the involved organs individually. A thin-plate spline method was used to register a structure characterized by the control points with a given 'color.' The proposed technique was applied to study a digital phantom case and 3 lung and 3 liver cancer patients. Results: For the phantom case, a comparison with the conventional thin-plate spline method showed that the registration accuracy was markedly improved when the differential motions of the lung and chest wall were taken into account. On average, the registration error and standard deviation of the 15 points against the known ground truth were reduced from 3.0 to 0.5 mm and from 1.5 to 0.2 mm, respectively, when the new method was used. A similar level of improvement was achieved for the clinical cases. Conclusion: The results of our study have shown that the segmented deformable approach provides a natural and logical solution to model the discontinuous organ motions and greatly improves the accuracy and robustness of deformable registration.
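The per-structure registration step described above rests on thin-plate spline interpolation. A generic 2-D TPS fitted to a handful of control points can be sketched as follows (function names and data are ours, not the authors' code):

```python
import numpy as np

def tps_fit(src, dst):
    """Solve for thin-plate spline coefficients mapping 2-D src points to dst."""
    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    # TPS kernel U(r) = r^2 log r, written via squared distances: 0.5*d2*log(d2)
    K = np.where(d2 > 0, 0.5 * d2 * np.log(d2 + 1e-300), 0.0)
    P = np.hstack([np.ones((n, 1)), src])         # affine part [1, x, y]
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(L, b)                   # warp weights + affine coeffs

def tps_eval(src, coef, pts):
    """Evaluate the fitted spline at new points."""
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = np.where(d2 > 0, 0.5 * d2 * np.log(d2 + 1e-300), 0.0)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ coef[:len(src)] + P @ coef[len(src):]

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([0.5, 0.25])   # control points displaced by a translation
coef = tps_fit(src, dst)
```

In the paper's scheme, one such spline would be fitted per "color" (organ) group of control points, so discontinuous motions across organ boundaries are preserved.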

  17. Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models

    Science.gov (United States)

    Marquette, Michele L.; Sognier, Marguerite A.

    2013-01-01

    An improved method for culturing immature muscle cells (myoblasts) into a mature skeletal muscle overcomes some of the notable limitations of prior culture methods. The development of the method is a major advance in tissue engineering in that, for the first time, a cell-based model spontaneously fuses and differentiates into masses of highly aligned, contracting myotubes. This method enables (1) the construction of improved two-dimensional (monolayer) skeletal muscle test beds; (2) development of contracting three-dimensional tissue models; and (3) improved transplantable tissues for biomedical and regenerative medicine applications. With adaptation, this method also offers potential application for production of other tissue types (i.e., bone and cardiac) from corresponding precursor cells.

  18. On the importance of paleoclimate modelling for improving predictions of future climate change

    Directory of Open Access Journals (Sweden)

    J. C. Hargreaves

    2009-12-01

Full Text Available We use an ensemble of runs from the MIROC3.2 AGCM with slab-ocean to explore the extent to which mid-Holocene simulations are relevant to predictions of future climate change. The results are compared with similar analyses for the Last Glacial Maximum (LGM) and the pre-industrial control climate. We suggest that the paleoclimate epochs can provide some independent validation of the models that is also relevant for future predictions. Considering the paleoclimate epochs, we find that the stronger global forcing and hence larger climate change at the LGM makes it likely to be the more powerful epoch for estimating the large-scale changes anticipated due to anthropogenic forcing. The phenomena in the mid-Holocene simulations that are most strongly correlated with future changes (i.e., mid- to high-northern-latitude land temperature and monsoon precipitation) do, however, coincide with areas where the LGM results are not correlated with future changes, and these are also areas where the paleodata indicate that significant climate changes have occurred. Thus, these regions and phenomena for the mid-Holocene may be useful for model improvement and validation.

  19. Unveiling Music Structure Via PLSA Similarity Fusion

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Meng, Anders; Petersen, Kaare Brandt

    2007-01-01

Nowadays there is an increasing interest in developing methods for building music recommendation systems. In order to get a satisfactory performance from such a system, one needs to incorporate as much information about song similarity as possible; however, how to do so is not obvious. [...] The observed similarities can be satisfactorily explained using the latent semantics. Additionally, this approach significantly simplifies the song retrieval phase, leading to a more practical system implementation. The suitability of the PLSA model for representing music structure is studied in a simplified [...]

  20. Clustering biomolecular complexes by residue contacts similarity

    NARCIS (Netherlands)

Garcia Lopes Maia Rodrigues, João; Trellet, Mikaël; Schmitz, Christophe; Kastritis, Panagiotis; Karaca, Ezgi; Melquiond, Adrien S J; Bonvin, Alexandre M J J

    Inaccuracies in computational molecular modeling methods are often counterweighed by brute-force generation of a plethora of putative solutions. These are then typically sieved via structural clustering based on similarity measures such as the root mean square deviation (RMSD) of atomic positions.
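The RMSD measure mentioned above is straightforward to compute for two conformations with matched atoms; a toy two-atom example (data invented for illustration):

```python
import numpy as np

# Root mean square deviation between two sets of matched atomic positions,
# each given as an (n_atoms, 3) coordinate array.
def rmsd(a, b):
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))

pose_1 = np.zeros((2, 3))   # two atoms at the origin
pose_2 = np.ones((2, 3))    # same atoms shifted by (1, 1, 1)
```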

  1. A behavioral similarity measure between labeled Petri nets based on principal transition sequences

    NARCIS (Netherlands)

    Wang, J.; He, T.; Wen, L.; Wu, N.; Hofstede, ter A.H.M.; Su, J.; Meersman, R.; Dillon, T.S.; Herrero, P.

    2010-01-01

    Being able to determine the degree of similarity between process models is important for management, reuse, and analysis of business process models. In this paper we propose a novel method to determine the degree of similarity between process models, which exploits their semantics. Our approach is

  2. Triple Diagonal modeling: A mechanism to focus productivity improvement for business success

    Energy Technology Data Exchange (ETDEWEB)

    Levine, L.O. [Pacific Northwest Lab., Richland, WA (United States); Villareal, L.D. [Army Depot, Corpus Christi, TX (United States)

    1993-09-01

Triple Diagonal (TD) modeling is a technique to help quickly diagnose an organization's existing production system and to identify significant improvement opportunities in executing, controlling, and planning operations. TD modeling is derived from ICAM Definition Language (IDEF0), also known as the Structured Analysis and Design Technique. It has been used successfully at several Department of Defense remanufacturing facilities attempting significant production system modernization. TD has several advantages over other modeling techniques. First, it quickly completes "As-Is" analysis and then moves on to identify improvements. Second, creating one large diagram makes it easier to share the TD model throughout an organization than the many linked 8.5 × 11" drawings used in traditional decomposition approaches. Third, it acts as a communication mechanism to share understanding about improvement opportunities that may cross existing functional/organizational boundaries. Finally, TD acts as a vehicle to build consensus on a prioritized list of improvement efforts that "hangs together" as an agenda for systemic changes in the production system and improved integration of support functions.

  3. A Mathematical Model to Improve the Performance of Logistics Network

    Directory of Open Access Journals (Sweden)

    Muhammad Izman Herdiansyah

    2012-01-01

Full Text Available The role of logistics nowadays is expanding from just providing transportation and warehousing to offering total integrated logistics. To remain competitive in the global market environment, business enterprises need to improve their logistics operations performance. The improvement will be achieved when we can provide a comprehensive analysis and optimize network performance. In this paper, a mixed integer linear model for optimizing logistics network performance is developed. It covers a single-product, multi-period, multi-facility setting and extends to the multi-product case. The problem is modeled in the form of a network flow problem with the main objective of minimizing total logistics cost. The problem can be solved using a commercial linear programming package such as CPLEX or LINDO. For small cases, the Excel Solver may also be used. Keywords: logistics network, integrated model, mathematical programming, network optimization
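The network-flow idea can be illustrated with a toy single-product transportation instance, solved here by brute-force enumeration rather than an LP/MIP solver like CPLEX or LINDO; all supplies, demands, and costs are invented:

```python
from itertools import product

# Tiny transportation instance: choose integer shipments minimizing total
# cost subject to supply limits and exact demand satisfaction.
supply = {"plant_A": 3, "plant_B": 2}
demand = {"cust_1": 2, "cust_2": 3}
cost = {("plant_A", "cust_1"): 4, ("plant_A", "cust_2"): 1,
        ("plant_B", "cust_1"): 2, ("plant_B", "cust_2"): 5}

routes = list(cost)
best, best_cost = None, float("inf")
for flows in product(range(6), repeat=len(routes)):   # enumerate integer flows
    f = dict(zip(routes, flows))
    ok_supply = all(sum(f[r] for r in routes if r[0] == p) <= s
                    for p, s in supply.items())
    ok_demand = all(sum(f[r] for r in routes if r[1] == c) == d
                    for c, d in demand.items())
    if ok_supply and ok_demand:
        total = sum(f[r] * cost[r] for r in routes)
        if total < best_cost:
            best, best_cost = f, total
```

Real instances replace the enumeration with the LP/MIP formulation the abstract describes; the brute force only serves to make the constraints and objective concrete.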

  4. Fold-recognition and comparative modeling of human α2,3-sialyltransferases reveal their sequence and structural similarities to CstII from Campylobacter jejuni

    Directory of Open Access Journals (Sweden)

    Balaji Petety V

    2006-04-01

Full Text Available Abstract Background The 3-D structure of none of the eukaryotic sialyltransferases (SiaTs) has been determined so far. Sequence alignment algorithms such as BLAST and PSI-BLAST could not detect a homolog of these enzymes in the protein databank. SiaTs thus belong to the hard/medium target category in the CASP experiments. The objective of the current work is to model the 3-D structures of the human SiaTs which transfer sialic acid in α2,3-linkage, viz., ST3Gal I, II, III, IV, V, and VI, using fold-recognition and comparative modeling methods. The pair-wise sequence similarity among these six enzymes ranges from 41 to 63%. Results Unlike the sequence similarity servers, fold-recognition servers identified CstII, an α2,3/8 dual-activity SiaT from Campylobacter jejuni, as the homolog of all six ST3Gals; the level of sequence similarity between CstII and the ST3Gals is only 15–20% and the similarity is restricted to the well-characterized motif regions of ST3Gals. Deriving template-target sequence alignments for the entire ST3Gal sequence was not straightforward: the fold-recognition servers could not find a template for the region preceding the L-motif or for that between the L- and S-motifs. Multiple structural templates were identified to model these regions, and template identification, modeling and evaluation had to be performed iteratively to choose the most appropriate templates. The modeled structures have acceptable stereochemical properties and are also able to provide qualitative rationalizations for some of the site-directed mutagenesis results reported in the literature. Apart from the predicted models, an unexpected but valuable finding from this study is the sequence and structural relatedness of family GT42 and family GT29 SiaTs. Conclusion The modeled 3-D structures can be used for docking and other modeling studies and for the rational identification of residues to be mutated to impart desired properties such as altered stability, substrate

  5. Assessing semantic similarity of texts - Methods and algorithms

    Science.gov (United States)

    Rozeva, Anna; Zerkova, Silvia

    2017-12-01

    Assessing the semantic similarity of texts is an important part of different text-related applications like educational systems, information retrieval, text summarization, etc. This task is performed by sophisticated analysis, which implements text-mining techniques. Text mining involves several pre-processing steps, which provide for obtaining structured representative model of the documents in a corpus by means of extracting and selecting the features, characterizing their content. Generally the model is vector-based and enables further analysis with knowledge discovery approaches. Algorithms and measures are used for assessing texts at syntactical and semantic level. An important text-mining method and similarity measure is latent semantic analysis (LSA). It provides for reducing the dimensionality of the document vector space and better capturing the text semantics. The mathematical background of LSA for deriving the meaning of the words in a given text by exploring their co-occurrence is examined. The algorithm for obtaining the vector representation of words and their corresponding latent concepts in a reduced multidimensional space as well as similarity calculation are presented.
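The LSA pipeline sketched above (term-document matrix, truncated SVD, similarity in the reduced space) fits in a few lines; the toy corpus and the choice of k are invented for illustration:

```python
import numpy as np

# Toy LSA: build a term-document count matrix, truncate its SVD, and
# compare documents by cosine similarity in the latent space.
docs = ["cat sits on mat", "dog sits on mat", "stocks fall on news"]
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                      # number of latent concepts
D = (np.diag(s[:k]) @ Vt[:k]).T            # document vectors, shape (n_docs, k)

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

sim_01 = cos(D[0], D[1])   # the two near-synonymous sentences
sim_02 = cos(D[0], D[2])   # unrelated topics
```

Dropping the smallest singular directions merges "cat" and "dog" contexts, so the first two documents come out nearly identical in the latent space while the third stays distant.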

  6. Improvement of Mars surface snow albedo modeling in LMD Mars GCM with SNICAR

    Science.gov (United States)

    Singh, D.; Flanner, M.; Millour, E.

    2017-12-01

The current version of the Laboratoire de Météorologie Dynamique (LMD) Mars GCM (original-MGCM) uses annually repeating (prescribed) albedo values from Thermal Emission Spectrometer observations. We integrate the Snow, Ice, and Aerosol Radiation (SNICAR) model with MGCM (SNICAR-MGCM) to determine H2O and CO2 ice cap albedos prognostically and interactively in the model. Over snow-covered regions the mean SNICAR-MGCM albedo is higher than original-MGCM by about 0.034. Changes in albedo and surface dust content also affect the shortwave energy flux at the surface: the SNICAR-MGCM model simulates a change of -1.26 W/m2 in shortwave flux on a global scale. Globally, net CO2 ice deposition increases by about 4% over one Martian annual cycle compared to original-MGCM simulations. SNICAR integration reduces the net mean global surface temperature and the global surface pressure of Mars by about 0.87% and 2.5%, respectively. Changes in albedo also show a distribution similar to that of dust deposition over the globe. The SNICAR-MGCM model generates albedos with higher sensitivity to surface dust content than original-MGCM. For snow-covered regions, we improve the correlation between albedo and the optical depth of dust from -0.91 to -0.97 with SNICAR-MGCM compared to original-MGCM. Using the new diagnostic capabilities of this model, we find that cryospheric surfaces (with dust) increase the global surface albedo of Mars by 0.022. This cryospheric effect is severely muted by dust in snow, however, which acts to decrease the planet-mean surface albedo by 0.06.

  7. Molecular similarity measures.

    Science.gov (United States)

    Maggiora, Gerald M; Shanmugasundaram, Veerabahu

    2011-01-01

    Molecular similarity is a pervasive concept in chemistry. It is essential to many aspects of chemical reasoning and analysis and is perhaps the fundamental assumption underlying medicinal chemistry. Dissimilarity, the complement of similarity, also plays a major role in a growing number of applications of molecular diversity in combinatorial chemistry, high-throughput screening, and related fields. How molecular information is represented, called the representation problem, is important to the type of molecular similarity analysis (MSA) that can be carried out in any given situation. In this work, four types of mathematical structure are used to represent molecular information: sets, graphs, vectors, and functions. Molecular similarity is a pairwise relationship that induces structure into sets of molecules, giving rise to the concept of chemical space. Although all three concepts - molecular similarity, molecular representation, and chemical space - are treated in this chapter, the emphasis is on molecular similarity measures. Similarity measures, also called similarity coefficients or indices, are functions that map pairs of compatible molecular representations that are of the same mathematical form into real numbers usually, but not always, lying on the unit interval. This chapter presents a somewhat pedagogical discussion of many types of molecular similarity measures, their strengths and limitations, and their relationship to one another. An expanded account of the material on chemical spaces presented in the first edition of this book is also provided. It includes a discussion of the topography of activity landscapes and the role that activity cliffs in these landscapes play in structure-activity studies.
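A concrete instance of a set-based similarity measure mapping pairs of representations onto the unit interval is the Tanimoto (Jaccard) coefficient on binary fingerprints; the fingerprints below are made-up bit positions, purely for illustration:

```python
# Tanimoto similarity between two molecules represented as sets of "on"
# fingerprint bit positions: |A & B| / |A | B|, in [0, 1].
def tanimoto(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0          # convention: two empty fingerprints are identical
    return len(a & b) / len(a | b)

fp_query = {1, 4, 7, 9}     # hypothetical fingerprints
fp_analog = {1, 4, 7, 12}
fp_unrelated = {2, 3, 15}
```

The complementary dissimilarity used in diversity analysis is simply `1 - tanimoto(a, b)`.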

  8. Exploiting structure similarity in refinement: automated NCS and target-structure restraints in BUSTER

    Energy Technology Data Exchange (ETDEWEB)

    Smart, Oliver S., E-mail: osmart@globalphasing.com; Womack, Thomas O.; Flensburg, Claus; Keller, Peter; Paciorek, Włodek; Sharff, Andrew; Vonrhein, Clemens; Bricogne, Gérard [Global Phasing Ltd, Sheraton House, Castle Park, Cambridge CB3 0AX (United Kingdom)

    2012-04-01

    Local structural similarity restraints (LSSR) provide a novel method for exploiting NCS or structural similarity to an external target structure. Two examples are given where BUSTER re-refinement of PDB entries with LSSR produces marked improvements, enabling further structural features to be modelled. Maximum-likelihood X-ray macromolecular structure refinement in BUSTER has been extended with restraints facilitating the exploitation of structural similarity. The similarity can be between two or more chains within the structure being refined, thus favouring NCS, or to a distinct ‘target’ structure that remains fixed during refinement. The local structural similarity restraints (LSSR) approach considers all distances less than 5.5 Å between pairs of atoms in the chain to be restrained. For each, the difference from the distance between the corresponding atoms in the related chain is found. LSSR applies a restraint penalty on each difference. A functional form that reaches a plateau for large differences is used to avoid the restraints distorting parts of the structure that are not similar. Because LSSR are local, there is no need to separate out domains. Some restraint pruning is still necessary, but this has been automated. LSSR have been available to academic users of BUSTER since 2009 with the easy-to-use -autoncs and @@target target.pdb options. The use of LSSR is illustrated in the re-refinement of PDB entries http://scripts.iucr.org/cgi-bin/cr.cgi?rm, where -target enables the correct ligand-binding structure to be found, and http://scripts.iucr.org/cgi-bin/cr.cgi?rm, where -autoncs contributes to the location of an additional copy of the cyclic peptide ligand.
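The key property of the LSSR functional form, quadratic for small distance differences but saturating for large ones so that genuinely dissimilar regions are not distorted, can be sketched with a robust penalty. The constant c and the exact functional form below are our assumptions, not BUSTER's published parameterization:

```python
# Plateau-style restraint penalty in the spirit of LSSR: behaves like
# diff**2 near zero and approaches the constant c**2 for large |diff|.
def lssr_penalty(d_model, d_target, c=1.0):
    diff = d_model - d_target          # difference between corresponding
    return diff * diff / (1.0 + (diff / c) ** 2)   # atom-pair distances (Å)
```

Because the penalty flattens out, its gradient vanishes for large differences, so restrained refinement stops pulling on atom pairs whose environments truly differ between the related chains.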

  9. Improved Collaborative Filtering Algorithm using Topic Model

    Directory of Open Access Journals (Sweden)

    Liu Na

    2016-01-01

Full Text Available Collaborative filtering algorithms make use of interaction ratings between users and items to generate recommendations. Similarity among users or items is mostly calculated from ratings, without considering explicit properties of the users or items involved. In this paper, we propose a collaborative filtering algorithm using a topic model. We treat the user-item matrix as a document-word matrix: users are represented as random mixtures over items, and each item is characterized by a distribution over users. The experiments showed that the proposed algorithm achieved better performance compared with other state-of-the-art algorithms on the MovieLens data sets.
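The document-word analogy can be made concrete with a PLSA-style EM decomposition of a tiny rating matrix; this is a sketch of the idea only (the paper's model is LDA-like, and the data, K, and iteration count here are invented):

```python
import numpy as np

# PLSA-style factorization of a user-item matrix: users as "documents",
# items as "words", K latent topics. pred reconstructs each user's
# preference distribution over items for recommendation.
rng = np.random.default_rng(0)
R = np.array([[5, 4, 0, 0],
              [4, 5, 0, 0],
              [0, 0, 5, 4.0]])          # 3 users x 4 items (ratings as counts)
K = 2
p_zu = rng.random((R.shape[0], K)); p_zu /= p_zu.sum(1, keepdims=True)
p_iz = rng.random((K, R.shape[1])); p_iz /= p_iz.sum(1, keepdims=True)

for _ in range(200):                    # EM iterations
    # E-step: responsibility of each topic z for each (user, item) count
    joint = p_zu[:, :, None] * p_iz[None, :, :]        # shape (U, K, I)
    post = joint / (joint.sum(1, keepdims=True) + 1e-12)
    # M-step: re-estimate topic mixtures and item distributions
    nz = R[:, None, :] * post                          # expected counts
    p_zu = nz.sum(2); p_zu /= p_zu.sum(1, keepdims=True) + 1e-12
    p_iz = nz.sum(0); p_iz /= p_iz.sum(1, keepdims=True) + 1e-12

pred = p_zu @ p_iz                      # per-user preference over items
```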

  10. Improving Bioenergy Crops through Dynamic Metabolic Modeling

    Directory of Open Access Journals (Sweden)

    Mojdeh Faraji

    2017-10-01

Full Text Available Enormous advances in genetics and metabolic engineering have made it possible, in principle, to create new plants and crops with improved yield through targeted molecular alterations. However, while the potential is beyond doubt, the actual implementation of envisioned new strains is often difficult, due to the diverse and complex nature of plants. Indeed, the intrinsic complexity of plants makes intuitive predictions difficult and often unreliable. The hope for overcoming this challenge is that methods of data mining and computational systems biology may become powerful enough that they could serve as beneficial tools for guiding future experimentation. In the first part of this article, we review the complexities of plants, as well as some of the mathematical and computational methods that have been used in the recent past to deepen our understanding of crops and their potential yield improvements. In the second part, we present a specific case study that indicates how robust models may be employed for crop improvements. This case study focuses on the biosynthesis of lignin in switchgrass (Panicum virgatum). Switchgrass is considered one of the most promising candidates for the second generation of bioenergy production, which does not use edible plant parts. Lignin is important in this context because it impedes the use of cellulose in such inedible plant materials. The dynamic model offers a platform for investigating pathway behavior in transgenic lines. In particular, it allows predictions of lignin content and composition in numerous genetic perturbation scenarios.

  11. Experiential Learning Model on Entrepreneurship Subject to Improve Students’ Soft Skills

    Directory of Open Access Journals (Sweden)

    Lina Rifda Naufalin

    2016-06-01

Full Text Available This research aims to improve students' soft skills in the entrepreneurship subject by using an experiential learning model. It was expected that the learning model would upgrade students' soft skills, as indicated by higher confidence, result and job orientation, courage to take risks, leadership, originality, and future orientation. The study was classroom action research using Kemmis and McTaggart's design model, conducted over two cycles. The subjects were economics education students in the 2015/2016 academic year. Findings show that the experiential learning model improved students' soft skills: the confidence dimension increased by 52.1%, result orientation by 22.9%, courage to take risks by 10.4%, leadership by 12.5%, originality by 10.4%, and future orientation by 18.8%. It can be concluded that the experiential learning model is effective for improving students' soft skills in the entrepreneurship subject, with the confidence dimension showing the largest gain. Students' soft skills are shaped through continuous stimulus as they engage in the implementation.

  12. Contextual Factors for Finding Similar Experts

    DEFF Research Database (Denmark)

    Hofmann, Katja; Balog, Krisztian; Bogers, Toine

    2010-01-01

[...]-seeking models, are rarely taken into account. In this article, we extend content-based expert-finding approaches with contextual factors that have been found to influence human expert finding. We focus on a task of science communicators in a knowledge-intensive environment: the task of finding similar experts, given an example expert. Our approach combines expertise-seeking and retrieval research. First, we conduct a user study to identify contextual factors that may play a role in the studied task and environment. Then, we design expert retrieval models to capture these factors. We combine these with content-based retrieval models and evaluate them in a retrieval experiment. Our main finding is that while content-based features are the most important, human participants also take contextual factors into account, such as media experience and organizational structure. We develop two principled ways of modeling [...]

  13. Improved framework model to allocate optimal rainwater harvesting sites in small watersheds for agro-forestry uses

    Science.gov (United States)

    Terêncio, D. P. S.; Sanches Fernandes, L. F.; Cortes, R. M. V.; Pacheco, F. A. L.

    2017-07-01

    This study introduces an improved rainwater harvesting (RWH) suitability model to help the implementation of agro-forestry projects (irrigation, wildfire combat) in catchments. The model combines a planning workflow that defines catchment suitability based on physical, socio-economic and ecologic variables with an allocation workflow that constrains suitable RWH sites as a function of project-specific features (e.g., distance from rainfall collection to application area). The planning workflow comprises a Multi-Criteria Analysis (MCA) implemented on a Geographic Information System (GIS), whereas the allocation workflow is based on a multiple-parameter ranking analysis. Compared to other similar models, the improvement lies in the flexible MCA weights and the entire allocation workflow. The method is tested on a contaminated watershed (the Ave River basin) in Portugal. The pilot project encompasses the irrigation of 400 ha of crop land that consumes 2.69 Mm3 of water per year. Applying harvested water to the irrigation replaces the use of stream water carrying excessive anthropogenic nutrients, which may raise nitrosamine levels in food and lead to accumulation in the food chain, with severe consequences for human health (cancer). The selected rainfall collection catchment is capable of harvesting 12 Mm3·yr-1 (≈ 4.5 × the requirement) and lies roughly 3 km from the application area, assuring crop irrigation by gravity flow with modest transport costs. The RWH system is an 8-meter-high structure that can be built in earth at reduced cost.
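The supply figure quoted in this record can be sanity-checked directly from the reported numbers; the snippet below is a back-of-envelope check (values taken from the abstract) that reproduces the "≈ 4.5 ×" ratio:

```python
# Figures quoted in the abstract for the Ave River basin pilot
irrigation_demand_mm3 = 2.69   # Mm3 of water consumed per year by the 400 ha crop
harvested_volume_mm3 = 12.0    # Mm3 harvested per year by the selected catchment

ratio = harvested_volume_mm3 / irrigation_demand_mm3
print(f"harvest/demand = {ratio:.2f}")  # roughly 4.5x, as the abstract states
```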

  14. Scaling Analysis of the Single-Phase Natural Circulation: the Hydraulic Similarity

    International Nuclear Information System (INIS)

    Yu, Xin-Guo; Choi, Ki-Yong

    2015-01-01

    These passive safety systems all rely on natural circulation to cool down the reactor core during an accident. Thus, a robust and accurate scaling methodology must be developed and employed both to assist in the design of a scaled-down test facility and to guide the tests in order to mimic the natural circulation flow of the prototype. A natural circulation system generally consists of a heat source, connecting pipes and several heat sinks. Although many acclaimed scaling methodologies have been proposed during the last several decades, few works have been dedicated to systematically analyzing and exactly preserving the hydraulic similarity. In the present study, hydraulic similarity analyses are performed at both the system and the local level. By this means, scaling criteria for exact hydraulic similarity in a full-pressure model are sought; in other words, not only the system-level but also the local-level hydraulic similarities are pursued. As the hydraulic characteristics of a fluid system are governed by the momentum equation, the scaling analysis starts there. A dimensionless integral loop momentum equation is derived, from which a unique hydraulic time scale, characterizing the hydraulic response of a single-phase natural circulation system, is identified along with two dimensionless parameters: the dimensionless flow resistance number and the dimensionless gravitational force number. A full-height full-pressure model is also constructed to see which model, the full-height or the reduced-height one, can preserve the hydraulic behavior of the prototype. By satisfying the equality of both dimensionless numbers ...

  15. Scaling Analysis of the Single-Phase Natural Circulation: the Hydraulic Similarity

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Xin-Guo; Choi, Ki-Yong [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    These passive safety systems all rely on natural circulation to cool down the reactor core during an accident. Thus, a robust and accurate scaling methodology must be developed and employed both to assist in the design of a scaled-down test facility and to guide the tests in order to mimic the natural circulation flow of the prototype. A natural circulation system generally consists of a heat source, connecting pipes and several heat sinks. Although many acclaimed scaling methodologies have been proposed during the last several decades, few works have been dedicated to systematically analyzing and exactly preserving the hydraulic similarity. In the present study, hydraulic similarity analyses are performed at both the system and the local level. By this means, scaling criteria for exact hydraulic similarity in a full-pressure model are sought; in other words, not only the system-level but also the local-level hydraulic similarities are pursued. As the hydraulic characteristics of a fluid system are governed by the momentum equation, the scaling analysis starts there. A dimensionless integral loop momentum equation is derived, from which a unique hydraulic time scale, characterizing the hydraulic response of a single-phase natural circulation system, is identified along with two dimensionless parameters: the dimensionless flow resistance number and the dimensionless gravitational force number. A full-height full-pressure model is also constructed to see which model, the full-height or the reduced-height one, can preserve the hydraulic behavior of the prototype. By satisfying the equality of both dimensionless numbers ...

  16. A Two-Step Classification Approach to Distinguishing Similar Objects in Mobile LIDAR Point Clouds

    Science.gov (United States)

    He, H.; Khoshelham, K.; Fraser, C.

    2017-09-01

    Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of an object, local features recording fractional details are more discriminative and are applicable to object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped as one category in the first-step classification, owing to their mutual similarity relative to trees and vehicles. A finer classification of the lamp posts, street lights and traffic signs, based on the result of the first step, is implemented in the second step. The proposed two-step approach is shown to yield a considerable improvement over the conventional one-step classification approach.
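The two-step scheme can be sketched with a toy nearest-centroid classifier over feature histograms; the class names follow the record, but the centroids and the classifier itself are illustrative placeholders, not the paper's actual point-feature-histogram / bag-of-features pipeline:

```python
def nearest_centroid(histogram, centroids):
    """Label of the closest class centroid (Euclidean distance)."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    return min(centroids, key=lambda label: dist(histogram, centroids[label]))

def two_step_classify(histogram, coarse_centroids, fine_centroids):
    """Step 1: separate pole-like objects from trees and vehicles.
    Step 2: re-classify only the pole-like objects into the three
    easily confused classes (lamp post / street light / traffic sign)."""
    coarse = nearest_centroid(histogram, coarse_centroids)
    if coarse == "pole-like":
        return nearest_centroid(histogram, fine_centroids)
    return coarse
```

The coarse step absorbs the inter-class confusion; only ambiguous pole-like objects pay for the finer (and more error-prone) second decision.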

  17. An Improved Network Security Situation Awareness Model

    Directory of Open Access Journals (Sweden)

    Li Fangwei

    2015-08-01

    In order to reflect the performance of network security situation assessment fully and accurately, a new network security situation awareness model based on information fusion was proposed. The network security situation is the result of fusing three aspects of evaluation. In terms of attack, to improve the accuracy of evaluation, a situation assessment method for DDoS attacks based on packet-level information was proposed. In terms of vulnerability, an improved Common Vulnerability Scoring System (CVSS) was proposed, making the assessment more comprehensive. In terms of node weights, a method of calculating combined weights and optimizing the result with the Sequential Quadratic Programming (SQP) algorithm, which reduces the uncertainty of fusion, was proposed. To verify the validity and necessity of the method, a testing platform was built and used to evaluate the DARPA 2000 data sets. Experiments show that the method can improve the accuracy of evaluation results.

  18. Clinical phenotype-based gene prioritization: an initial study using semantic similarity and the human phenotype ontology.

    Science.gov (United States)

    Masino, Aaron J; Dechene, Elizabeth T; Dulik, Matthew C; Wilkens, Alisha; Spinner, Nancy B; Krantz, Ian D; Pennington, Jeffrey W; Robinson, Peter N; White, Peter S

    2014-07-21

    Exome sequencing is a promising method for diagnosing patients with a complex phenotype. However, variant interpretation relative to patient phenotype can be challenging in some scenarios, particularly clinical assessment of rare complex phenotypes. Each patient's sequence reveals many possibly damaging variants that must be individually assessed to establish clear association with patient phenotype. To assist interpretation, we implemented an algorithm that ranks a given set of genes relative to patient phenotype. The algorithm orders genes by the semantic similarity computed between phenotypic descriptors associated with each gene and those describing the patient. Phenotypic descriptor terms are taken from the Human Phenotype Ontology (HPO) and semantic similarity is derived from each term's information content. Model validation was performed via simulation and with clinical data. We simulated 33 Mendelian diseases with 100 patients per disease. We modeled clinical conditions by adding noise and imprecision, i.e. phenotypic terms unrelated to the disease and terms less specific than the actual disease terms. We ranked the causative gene against all 2488 HPO annotated genes. The median causative gene rank was 1 for the optimal and noise cases, 12 for the imprecision case, and 60 for the imprecision with noise case. Additionally, we examined a clinical cohort of subjects with hearing impairment. The disease gene median rank was 22. However, when also considering the patient's exome data and filtering non-exomic and common variants, the median rank improved to 3. Semantic similarity can rank a causative gene highly within a gene list relative to patient phenotype characteristics, provided that imprecision is mitigated. The clinical case results suggest that phenotype rank combined with variant analysis provides significant improvement over the individual approaches. We expect that this combined prioritization approach may increase accuracy and decrease effort for
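The ranking principle in this record — semantic similarity derived from each term's information content — can be illustrated with a Resnik-style similarity on a toy DAG; the term IDs, parent links, and annotation counts below are made up for illustration, not real HPO data:

```python
import math

# Hypothetical toy ontology: term -> parents (an HPO-like DAG)
parents = {
    "HP:B": {"HP:ROOT"}, "HP:C": {"HP:ROOT"},
    "HP:D": {"HP:B"}, "HP:E": {"HP:B", "HP:C"},
}
# Annotation counts used to estimate each term's probability
counts = {"HP:ROOT": 10, "HP:B": 6, "HP:C": 5, "HP:D": 2, "HP:E": 3}

def ancestors(term):
    """All ancestors of a term, including the term itself."""
    result = {term}
    stack = [term]
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in result:
                result.add(p)
                stack.append(p)
    return result

def information_content(term):
    # IC = -log p(term); rarer terms are more informative
    return -math.log(counts[term] / counts["HP:ROOT"])

def resnik_similarity(t1, t2):
    """IC of the most informative common ancestor of the two terms."""
    common = ancestors(t1) & ancestors(t2)
    return max(information_content(t) for t in common)
```

A gene is then ranked by aggregating such term-pair similarities between its annotated phenotype terms and the patient's terms; noise terms share only generic (low-IC) ancestors with the disease terms, which is why they dilute rather than destroy the ranking.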

  19. Distributional and Knowledge-Based Approaches for Computing Portuguese Word Similarity

    Directory of Open Access Journals (Sweden)

    Hugo Gonçalo Oliveira

    2018-02-01

    Identifying similar and related words is not only key to natural language understanding but also a suitable task for assessing the quality of computational resources that organise the words and meanings of a language, compiled by different means. This paper, which aims to be a reference for those interested in computing word similarity in Portuguese, presents several approaches to this task. It is motivated by the recent availability of state-of-the-art distributional models of Portuguese words, which add to several lexical knowledge bases (LKBs) that have been available for this language for longer. These resources were exploited to answer word similarity tests, which have also recently become available for Portuguese. We conclude that there are several valid approaches to this task, but no single one outperforms all the others in every test. Distributional models seem to capture relatedness better, while LKBs are better suited to computing genuine similarity; in general, however, better results are obtained when knowledge from different sources is combined.
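In the distributional approach, word similarity reduces to comparing word vectors, typically by cosine similarity; a minimal sketch (with made-up 3-dimensional vectors standing in for real embeddings such as word2vec models of Portuguese) looks like:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors; real distributional models use hundreds of dimensions
vectors = {
    "carro": [0.9, 0.1, 0.0],       # "car"
    "automóvel": [0.85, 0.15, 0.05],  # "automobile"
    "banana": [0.1, 0.8, 0.3],
}
```

Answering a word similarity test then amounts to checking that near-synonyms score higher than unrelated pairs.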

  20. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamics (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction between the various blade rows and by blade-to-blade variation of flow properties. The program tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in a high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from the various blade rows, and non-deterministic (including random) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and will drastically reduce the time required for the turbomachinery design and development cycle.

  1. Masked-Volume-Wise PCA and "reference Logan" illustrate similar regional differences in kinetic behavior in human brain PET study using [11C]-PIB

    Directory of Open Access Journals (Sweden)

    Engler Henry

    2009-01-01

    Background: Kinetic modeling using reference Logan is commonly used to analyze data obtained from dynamic Positron Emission Tomography (PET) studies of patients with Alzheimer's disease (AD) and healthy volunteers (HVs) using the amyloid imaging agent N-methyl [11C]2-(4'-methylaminophenyl)-6-hydroxy-benzothiazole ([11C]-PIB). The aim of the present study was to explore whether results obtained using the newly introduced method, Masked-Volume-Wise Principal Component Analysis (MVW-PCA), were similar to the results obtained using reference Logan. Methods: MVW-PCA and reference Logan were performed on dynamic PET images obtained from four AD patients on two occasions (baseline and follow-up) and from four healthy volunteers. Regions of interest (ROIs) of similar sizes were positioned in different parts of the brain in both AD patients and HVs, where the difference between AD patients and HVs is largest. Signal-to-noise ratio (SNR) and discrimination power (DP) were calculated for images generated by the different methods, and the results were compared both qualitatively and quantitatively. Results: MVW-PCA generated images that illustrated regional binding patterns similar to reference Logan images, with slightly higher quality, enhanced contrast, and improved SNR and DP, without being based on modeling assumptions. MVW-PCA also generated additional MVW-PC images by using the whole dataset, which illustrated regions with different and uncorrelated kinetic behaviors of the administered tracer. This additional information might improve the understanding of the kinetic behavior of the administered tracer. Conclusion: MVW-PCA is a potential multivariate method that, without modeling assumptions, generates high-quality images illustrating regional changes similar to modeling methods such as reference Logan. In addition, MVW-PCA could be used as a new technique, applicable not only to dynamic human brain studies but also to ...

  2. The effects of gravity on human walking: a new test of the dynamic similarity hypothesis using a predictive model.

    Science.gov (United States)

    Raichlen, David A

    2008-09-01

    The dynamic similarity hypothesis (DSH) suggests that differences in animal locomotor biomechanics are due mostly to differences in size. According to the DSH, when the ratios of inertial to gravitational forces are equal between two animals that differ in size [e.g. at equal Froude numbers, where Froude = velocity^2/(gravity x hip height)], their movements can be made similar by multiplying all time durations by one constant, all forces by a second constant and all linear distances by a third constant. The DSH has been generally supported by numerous comparative studies showing that as inertial forces differ (i.e. differences in the centripetal force acting on the animal due to variation in hip height), animals walk with dynamic similarity. However, humans walking in simulated reduced gravity do not walk with dynamically similar kinematics. The simulated gravity experiments did not completely account for the effects of gravity on all body segments, and the importance of gravity in the DSH requires further examination. This study uses a kinematic model to predict the effects of gravity on human locomotion, taking into account the effects of gravitational forces on both the upper body and the limbs. Results show that dynamic similarity is maintained in altered gravitational environments. Thus, the DSH does account for differences in the inertial forces governing locomotion (e.g. differences in hip height) as well as differences in the gravitational forces governing locomotion.
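The Froude-number bookkeeping behind the DSH is easy to make concrete; the sketch below (illustrative walker sizes and speeds, standard g = 9.81 m/s^2) computes the speed at which a smaller walker matches a reference Froude number:

```python
def froude(velocity, hip_height, gravity=9.81):
    """Dimensionless Froude number: ratio of inertial to gravitational forces."""
    return velocity ** 2 / (gravity * hip_height)

def similar_speed(v_ref, h_ref, h_target, gravity=9.81):
    """Speed at which a walker of hip height h_target matches the
    reference walker's Froude number (i.e. walks with dynamic similarity)."""
    fr = froude(v_ref, h_ref, gravity)
    return (fr * gravity * h_target) ** 0.5
```

Passing a different `gravity` to both calls shows how the same machinery extends the comparison to altered gravitational environments.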

  3. Agmatine Improves Cognitive Dysfunction and Prevents Cell Death in a Streptozotocin-Induced Alzheimer Rat Model

    Science.gov (United States)

    Song, Juhyun; Hur, Bo Eun; Bokara, Kiran Kumar; Yang, Wonsuk; Cho, Hyun Jin; Park, Kyung Ah; Lee, Won Taek; Lee, Kyoung Min

    2014-01-01

    Purpose: Alzheimer's disease (AD) results in memory impairment and neuronal cell death in the brain. Previous studies demonstrated that intracerebroventricular administration of streptozotocin (STZ) induces pathological and behavioral alterations similar to those observed in AD. Agmatine (Agm) has been shown to exert neuroprotective effects in central nervous system disorders. In this study, we investigated whether Agm treatment could attenuate apoptosis and improve cognitive decline in a STZ-induced Alzheimer rat model. Materials and Methods: We studied the effect of Agm on AD pathology using a STZ-induced Alzheimer rat model. For each experiment, rats were given anesthesia (chloral hydrate 300 mg/kg, ip), followed by a single injection of STZ (1.5 mg/kg) bilaterally into each lateral ventricle (5 µL/ventricle). Rats were injected with Agm (100 mg/kg) daily for up to two weeks from the day of surgery. Results: Agm suppressed the accumulation of amyloid beta and enhanced insulin signal transduction in STZ-induced Alzheimer rats [experimental control (EC) group]. Upon evaluation of cognitive function by Morris water maze testing, significant improvement of learning and memory dysfunction was observed in the STZ-Agm group compared with the EC group. Western blot results revealed significant attenuation of the protein expression of cleaved caspase-3 and Bax, as well as increases in the protein expression of Bcl2, PI3K, Nrf2, and γ-glutamyl cysteine synthetase, in the STZ-Agm group. Conclusion: Our results showed that Agm is involved in the activation of antioxidant signaling pathways and activation of insulin signal transduction. Accordingly, Agm may be a promising therapeutic agent for improving cognitive decline and attenuating apoptosis in AD. PMID:24719136

  4. Levy Stable Processes. From Stationary to Self-Similar Dynamics and Back. An Application to Finance

    International Nuclear Information System (INIS)

    Burnecki, K.; Weron, A.

    2004-01-01

    We employ an ergodic theory argument to demonstrate the foundations of ubiquity of Levy stable self-similar processes in physics and present a class of models for anomalous and nonextensive diffusion. A relationship between stationary and self-similar models is clarified. The presented stochastic integral description of all Levy stable processes could provide new insights into the mechanism underlying a range of self-similar natural phenomena. Finally, this effect is illustrated by self-similar approach to financial modelling. (author)

  5. Semantic Similarity between Web Documents Using Ontology

    Science.gov (United States)

    Chahal, Poonam; Singh Tomer, Manjeet; Kumar, Suresh

    2018-06-01

    The World Wide Web is a source of information available in the form of interlinked web pages. However, the procedure of extracting significant information with the assistance of a search engine is critical, because web information is written mainly in natural language intended for human readers. Several efforts have been made toward semantic similarity computation between documents using words, concepts and concept relationships, but the available outcomes still fall short of user requirements. This paper proposes a novel technique for computing semantic similarity between documents that takes into account not only the concepts available in the documents but also the relationships between those concepts. In our approach, documents are processed by building an ontology for each document using a base ontology and a dictionary of concept records; each record comprises the probable words that represent a given concept. Finally, the document ontologies are compared to find their semantic similarity, taking into account the relationships among concepts. Relevant concepts and relations between concepts are explored by capturing author and user intention. The proposed semantic analysis technique provides improved results compared to existing techniques.
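The idea of scoring both shared concepts and shared concept-to-concept relations can be sketched as a weighted Jaccard blend; the `alpha` weighting and the triple representation below are assumptions for illustration, not the authors' exact formulation:

```python
def jaccard(a, b):
    """Overlap of two sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def document_similarity(doc1, doc2, alpha=0.5):
    """Weighted blend of concept overlap and relation overlap.
    A doc is (set_of_concepts, set_of_(concept, relation, concept)_triples)."""
    concepts1, relations1 = doc1
    concepts2, relations2 = doc2
    return (alpha * jaccard(concepts1, concepts2)
            + (1 - alpha) * jaccard(relations1, relations2))
```

Two documents mentioning the same concepts but relating them differently thus score lower than documents agreeing on both levels.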

  6. Semantic Similarity between Web Documents Using Ontology

    Science.gov (United States)

    Chahal, Poonam; Singh Tomer, Manjeet; Kumar, Suresh

    2018-03-01

    The World Wide Web is a source of information available in the form of interlinked web pages. However, the procedure of extracting significant information with the assistance of a search engine is critical, because web information is written mainly in natural language intended for human readers. Several efforts have been made toward semantic similarity computation between documents using words, concepts and concept relationships, but the available outcomes still fall short of user requirements. This paper proposes a novel technique for computing semantic similarity between documents that takes into account not only the concepts available in the documents but also the relationships between those concepts. In our approach, documents are processed by building an ontology for each document using a base ontology and a dictionary of concept records; each record comprises the probable words that represent a given concept. Finally, the document ontologies are compared to find their semantic similarity, taking into account the relationships among concepts. Relevant concepts and relations between concepts are explored by capturing author and user intention. The proposed semantic analysis technique provides improved results compared to existing techniques.

  7. Statistical potential-based amino acid similarity matrices for aligning distantly related protein sequences.

    Science.gov (United States)

    Tan, Yen Hock; Huang, He; Kihara, Daisuke

    2006-08-15

    Aligning distantly related protein sequences is a long-standing problem in bioinformatics, and a key for successful protein structure prediction. Its importance is increasing recently in the context of structural genomics projects because more and more experimentally solved structures are available as templates for protein structure modeling. Toward this end, recent structure prediction methods employ profile-profile alignments, and various ways of aligning two profiles have been developed. More fundamentally, a better amino acid similarity matrix can improve a profile itself; thereby resulting in more accurate profile-profile alignments. Here we have developed novel amino acid similarity matrices from knowledge-based amino acid contact potentials. Contact potentials are used because the contact propensity to the other amino acids would be one of the most conserved features of each position of a protein structure. The derived amino acid similarity matrices are tested on benchmark alignments at three different levels, namely, the family, the superfamily, and the fold level. Compared to BLOSUM45 and the other existing matrices, the contact potential-based matrices perform comparably in the family level alignments, but clearly outperform in the fold level alignments. The contact potential-based matrices perform even better when suboptimal alignments are considered. Comparing the matrices themselves with each other revealed that the contact potential-based matrices are very different from BLOSUM45 and the other matrices, indicating that they are located in a different basin in the amino acid similarity matrix space.
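How a substitution matrix enters alignment scoring can be seen in a compact Needleman-Wunsch sketch; the toy two-letter matrix and linear gap penalty below are illustrative stand-ins, not the contact-potential matrices the record describes:

```python
def align_score(seq1, seq2, sub, gap=-4):
    """Best global alignment score (Needleman-Wunsch) under a
    substitution matrix `sub` and a linear gap penalty."""
    n, m = len(seq1), len(seq2)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap          # leading gaps in seq2
    for j in range(1, m + 1):
        dp[0][j] = j * gap          # leading gaps in seq1
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + sub[(seq1[i - 1], seq2[j - 1])],  # (mis)match
                dp[i - 1][j] + gap,                                  # gap in seq2
                dp[i][j - 1] + gap,                                  # gap in seq1
            )
    return dp[n][m]
```

Swapping in a different `sub` (e.g. a contact-potential-derived matrix instead of a BLOSUM-style one) changes only the match/mismatch term, which is exactly the lever the paper evaluates.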

  8. Observations and analysis of self-similar branching topology in glacier networks

    Science.gov (United States)

    Bahr, D.B.; Peckham, S.D.

    1996-01-01

    Glaciers, like rivers, have a branching structure which can be characterized by topological trees or networks. Probability distributions of various topological quantities in the networks are shown to satisfy the criterion for self-similarity, a symmetry structure which might be used to simplify future models of glacier dynamics. Two analytical methods of describing river networks, Shreve's random topology model and deterministic self-similar trees, are applied to the six glaciers of south central Alaska studied in this analysis. Self-similar trees capture the topological behavior observed for all of the glaciers, and most of the networks are also reasonably approximated by Shreve's theory. Copyright 1996 by the American Geophysical Union.

  9. Fedora Content Modelling for Improved Services for Research Databases

    DEFF Research Database (Denmark)

    Elbæk, Mikael Karstensen; Heller, Alfred; Pedersen, Gert Schmeltz

    A re-implementation of the research database of the Technical University of Denmark, DTU, is based on Fedora. The backbone consists of content models for primary and secondary entities and their relationships, giving flexible and powerful extraction capabilities for interoperability and reporting. By adopting such an abstract data model, the platform enables new and improved services for researchers, librarians and administrators.

  10. Improvement of the projection models for radiogenic cancer risk

    International Nuclear Information System (INIS)

    Tong Jian

    2005-01-01

    Calculations of radiogenic cancer risk are based on risk projection models for specific cancer sites. Improvements have been made to the parameters used in previous models, including the introduction of mortality and morbidity risk coefficients and age-/gender-specific risk coefficients. These coefficients have been applied to calculate the radiogenic cancer risks for specific organs and radionuclides under different exposure scenarios. (authors)

  11. New Similarity Functions

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Kwasnicka, Halina

    2016-01-01

    spaces, in addition to their similarity in the vector space. Prioritized Weighted Feature Distance (PWFD) works similarly to WFD, but provides the ability to give priorities to desirable features. The accuracy of the proposed functions is compared with that of other similarity functions on several data sets. Our results show that the proposed functions work better than other methods proposed in the literature.

  12. Protocol for the evaluation of a social franchising model to improve maternal health in Uttar Pradesh, India.

    Science.gov (United States)

    Pereira, Shreya K; Kumar, Paresh; Dutt, Varun; Haldar, Kaveri; Penn-Kekana, Loveday; Santos, Andreia; Powell-Jackson, Timothy

    2015-05-26

    Social franchising is the fastest growing market-based approach to organising and improving the quality of care in the private sector of low- and middle-income countries, but there is limited evidence on its impact and cost-effectiveness. The "Sky" social franchise model was introduced in the Indian state of Uttar Pradesh in late 2013. Difference-in-differences methods will be used to estimate the impact of the social franchise programme on the quality and coverage of health services along the continuum of care for reproductive, maternal and newborn health. Comparison clusters will be selected to be as similar as possible to intervention clusters using nearest neighbour matching methods. Two rounds of data will be collected from a household survey of 3600 women with a birth in the last 2 years and a survey of 450 health providers in the same localities. To capture the full range of effects, 59 study outcomes have been specified and then grouped into conceptually similar domains. Methods to account for multiple inferences will be used based on the pre-specified grouping of outcomes. A process evaluation will seek to understand the scale of the social franchise network, the extent to which various components of the programme are implemented and how impacts are achieved. An economic evaluation will measure the costs of setting up, maintaining and running the social franchise as well as the cost-effectiveness and financial sustainability of the programme. There is a dearth of evidence demonstrating whether market-based approaches such as social franchising can improve care in the private sector. This evaluation will provide rigorous evidence on whether an innovative model of social franchising can contribute to better population health in a low-income setting.

  13. Improved quantitative 90Y bremsstrahlung SPECT/CT reconstruction with Monte Carlo scatter modeling.

    Science.gov (United States)

    Dewaraja, Yuni K; Chun, Se Young; Srinivasa, Ravi N; Kaza, Ravi K; Cuneo, Kyle C; Majdalany, Bill S; Novelli, Paula M; Ljungberg, Michael; Fessler, Jeffrey A

    2017-12-01

    In 90Y microsphere radioembolization (RE), accurate post-therapy imaging-based dosimetry is important for establishing absorbed dose versus outcome relationships for developing future treatment planning strategies. Additionally, accurately assessing microsphere distributions is important because of concerns about unexpected activity deposition outside the liver. Quantitative 90Y imaging by either SPECT or PET is challenging. In 90Y SPECT, model-based methods are necessary for scatter correction because energy window-based methods are not feasible with the continuous bremsstrahlung energy spectrum. The objective of this work was to implement and evaluate a scatter estimation method for accurate 90Y bremsstrahlung SPECT/CT imaging. Since a fully Monte Carlo (MC) approach to 90Y SPECT reconstruction is computationally very demanding, in the present study the scatter estimate generated by a MC simulator was combined with an analytical projector in the 3D OS-EM reconstruction model. A single window (105-195 keV) was used for both the acquisition and the projector modeling. A liver/lung torso phantom with intrahepatic lesions and low-uptake extrahepatic objects was imaged to evaluate SPECT/CT reconstruction without and with scatter correction. Clinical application was demonstrated by applying the reconstruction approach to five patients treated with RE to determine lesion and normal liver activity concentrations using a (liver) relative calibration. The scatter estimate converged after just two updates, greatly reducing computational requirements. In the phantom study, compared with reconstruction without scatter correction, MC scatter modeling substantially improved activity recovery in intrahepatic lesions (from > 55% to > 86%), normal liver (from 113% to 104%), and lungs (from 227% to 104%), with only a small degradation in noise (13% vs. 17%). Similarly, with scatter modeling, contrast improved substantially both visually and in ...

  14. Predicting microRNA-disease associations using label propagation based on linear neighborhood similarity.

    Science.gov (United States)

    Li, Guanghui; Luo, Jiawei; Xiao, Qiu; Liang, Cheng; Ding, Pingjian

    2018-05-12

    Interactions between microRNAs (miRNAs) and diseases can yield important information for uncovering novel prognostic markers. Since experimental determination of disease-miRNA associations is time-consuming and costly, attention has been given to designing efficient and robust computational techniques for identifying undiscovered interactions. In this study, we present a label propagation model with linear neighborhood similarity, called LPLNS, to predict unobserved miRNA-disease associations. Additionally, a preprocessing step is performed to derive new interaction likelihood profiles that will contribute to the prediction since new miRNAs and diseases lack known associations. Our results demonstrate that the LPLNS model based on the known disease-miRNA associations could achieve impressive performance with an AUC of 0.9034. Furthermore, we observed that the LPLNS model based on new interaction likelihood profiles could improve the performance to an AUC of 0.9127. This was better than other comparable methods. In addition, case studies also demonstrated our method's outstanding performance for inferring undiscovered interactions between miRNAs and diseases, especially for novel diseases. Copyright © 2018. Published by Elsevier Inc.
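The core of the approach, propagating known association labels over a similarity graph, can be sketched as follows. This is an illustrative Python sketch, not the LPLNS implementation: the similarity matrix, the damping factor `alpha`, and the toy path graph are all assumptions.

```python
import numpy as np

def label_propagation(W, y0, alpha=0.9, tol=1e-6, max_iter=1000):
    """Propagate association labels over a similarity graph.

    W  : (n, n) nonnegative similarity matrix (stand-in for a linear
         neighborhood similarity matrix) -- hypothetical input here.
    y0 : (n,) initial labels (1 = known association, 0 = unknown).
    alpha balances graph smoothness against fidelity to the seeds.
    """
    # Row-normalize so each node distributes unit weight to its neighbors.
    S = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    y = y0.astype(float).copy()
    for _ in range(max_iter):
        y_new = alpha * S @ y + (1 - alpha) * y0
        if np.linalg.norm(y_new - y, ord=1) < tol:
            return y_new
        y = y_new
    return y

# Toy 4-node path graph; node 0 carries the only seed label.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y0 = np.array([1.0, 0.0, 0.0, 0.0])
scores = label_propagation(W, y0)
```

Scores decay with graph distance from the seed, so nodes most similar to known associations rank highest, which is the intuition behind prioritizing candidate miRNA-disease pairs.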

  15. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    Science.gov (United States)

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
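The positive semidefinite requirement mentioned above is easy to check numerically. A minimal Python sketch, assuming a Gaussian (RBF) kernel and made-up marker-style data rather than any real genomic dataset:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel: larger values mean more similar subjects."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

# Hypothetical genotype-like data: 5 subjects, 8 markers coded 0/1/2.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(5, 8)).astype(float)

K = rbf_kernel(X)
# A valid kernel matrix must be symmetric positive semidefinite.
eigvals = np.linalg.eigvalsh(K)
is_psd = bool(eigvals.min() >= -1e-10)
```

Any kernel passing this check can be plugged directly into the mixed-model and score-statistic machinery the review describes.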

  16. Are calanco landforms similar to river basins?

    Science.gov (United States)

    Caraballo-Arias, N A; Ferro, V

    2017-12-15

In the past, badlands have often been considered ideal field laboratories for studying landscape evolution because of their geometrical similarity to larger fluvial systems. For a given hydrological process, however, no scientific proof exists that badlands can be considered a model of river basin prototypes. In this paper the measurements carried out on 45 Sicilian calanchi, a type of badlands that appears as a small-scale hydrographic unit, are used to establish their morphological similarity with river systems whose data are available in the literature. At first the geomorphological similarity is studied by identifying the dimensionless groups, which can assume the same value or a scaled one in a fixed ratio, representing drainage basin shape, stream network and relief properties. Then, for each property, the dimensionless groups are calculated for the investigated calanchi and the river basins and their corresponding scale ratio is evaluated. The applicability of Hack's, Horton's and Melton's laws for establishing similarity criteria is also tested. The developed analysis allows us to conclude that a quantitative morphological similarity between calanco landforms and river basins can be established using commonly applied dimensionless groups. In particular, the analysis showed that i) calanchi and river basins have a geometrically similar shape with respect to the parameters Rf and Re, with a scale factor close to 1, ii) calanchi and river basins are similar with respect to the bifurcation and length ratios (λ=1), iii) for the investigated calanchi the Melton number assumes values less than that (0.694) corresponding to the river case, and a scale ratio ranging from 0.52 to 0.78 can be used, iv) calanchi and river basins have similar mean relief ratio values (λ=1.13) and v) calanchi present active geomorphic processes and therefore fall in a more juvenile stage with respect to river basins. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. An improved mechanistic critical heat flux model for subcooled flow boiling

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

Based on the bubble coalescence adjacent to the heated wall as a flow structure for the CHF condition, Chang and Lee developed a mechanistic critical heat flux (CHF) model for subcooled flow boiling. In this paper, improvements of the Chang-Lee model are implemented with more solid theoretical bases for subcooled and low-quality flow boiling in tubes. Nedderman-Shearer's equations for the skin friction factor and universal velocity profile models are employed. The slip effect of the movable bubbly layer is implemented to improve the predictability at low mass flow. Also, a mechanistic subcooled flow boiling model is used to predict the flow quality and void fraction. The performance of the present model is verified using the KAIST CHF database of water in uniformly heated tubes. It is found that the present model gives a satisfactory agreement with experimental data, within less than 9% RMS error. 9 refs., 5 figs. (Author)

  18. An improved mechanistic critical heat flux model for subcooled flow boiling

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

Based on the bubble coalescence adjacent to the heated wall as a flow structure for the CHF condition, Chang and Lee developed a mechanistic critical heat flux (CHF) model for subcooled flow boiling. In this paper, improvements of the Chang-Lee model are implemented with more solid theoretical bases for subcooled and low-quality flow boiling in tubes. Nedderman-Shearer's equations for the skin friction factor and universal velocity profile models are employed. The slip effect of the movable bubbly layer is implemented to improve the predictability at low mass flow. Also, a mechanistic subcooled flow boiling model is used to predict the flow quality and void fraction. The performance of the present model is verified using the KAIST CHF database of water in uniformly heated tubes. It is found that the present model gives a satisfactory agreement with experimental data, within less than 9% RMS error. 9 refs., 5 figs. (Author)

  19. PSA Model Improvement Using Maintenance Rule Function Mapping

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Mi Ro [KHNP-CRI, Nuclear Safety Laboratory, Daejeon (Korea, Republic of)

    2011-10-15

The Maintenance Rule (MR) program is, by nature, a performance-based program. Therefore, risk information derived from the Probabilistic Safety Assessment (PSA) model is introduced into the MR program during the safety significance determination and performance criteria selection processes. However, this process also facilitates the identification of vulnerabilities in currently utilized PSA models and offers a means of improving them. To find vulnerabilities in an existing PSA model, an initial review determines whether the safety-related MR functions are included in the PSA model. Because safety-related MR functions are related to accident prevention and mitigation, it is generally necessary for them to be included in the PSA model. In the process of determining the safety significance of each function, quantitative risk importance levels are determined through a process known as mapping PSA model basic events to MR functions. During this process, it is common for inadequate or overlooked models to be uncovered. In this paper, the PSA model and the MR program of Wolsong Unit 1 were used as references.

  20. Emergent self-similarity of cluster coagulation

    Science.gov (United States)

    Pushkin, Dmtiri O.

A wide variety of nonequilibrium processes, such as coagulation of colloidal particles, aggregation of bacteria into colonies, coalescence of rain drops, bond formation between polymerization sites, and formation of planetesimals, fall under the rubric of cluster coagulation. We predict emergence of self-similar behavior in such systems when they are 'forced' by an external source of the smallest particles. The corresponding self-similar coagulation spectra prove to be power laws. Starting from the classical Smoluchowski coagulation equation, we identify the conditions required for emergence of self-similarity and show that the power-law exponent value for a particular coagulation mechanism depends on the homogeneity index of the corresponding coagulation kernel only. Next, we consider the current wave of mergers of large American banks as an 'unorthodox' application of coagulation theory. We predict that the bank size distribution has a propensity to become a power law, and verify our prediction in a statistical study of the available economic data. We conclude this chapter by discussing the economically significant phenomenon of capital condensation and predicting emergence of power-law distributions in other economic and social data. Finally, we turn to the apparent semblance between cluster coagulation and turbulence and conclude that it is not accidental: both of these processes are instances of nonlinear cascades. This class of processes also includes river network formation models, certain force-chain models in granular mechanics, fragmentation due to collisional cascades, percolation, and growing random networks. We characterize a particular cascade by three indices and show that the resulting power-law spectrum exponent depends only on the values of these indices. The ensuing algebraic formula is remarkable for its simplicity.
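The forced coagulation dynamics described above can be explored numerically. A minimal sketch assuming a constant coagulation kernel and a hard size cutoff as the large-size sink; both choices are illustrative simplifications, not the author's model:

```python
import numpy as np

def smoluchowski_forced(kmax=100, J=1.0, dt=1e-3, steps=10000):
    """Constant-kernel Smoluchowski coagulation forced by monomer input J:

        dn_k/dt = (1/2) sum_{i+j=k} n_i n_j - n_k sum_j n_j + J*delta_{k,1}

    Clusters growing past kmax are simply dropped, acting as a sink at
    large sizes so a steady, self-similar spectrum can develop.
    """
    n = np.zeros(kmax + 1)              # n[k] = density of size-k clusters
    for _ in range(steps):
        a = n[1:]
        conv = np.convolve(a, a)        # conv[m] = sum_{i+j=m+2} n_i n_j
        gain = np.zeros_like(n)
        gain[2:] = 0.5 * conv[:kmax - 1]
        loss = n * a.sum()
        n = n + dt * (gain - loss)      # coagulation gain and loss terms
        n[1] += dt * J                  # external source of monomers
    return n[1:]

spectrum = smoluchowski_forced()        # spectrum[k-1] = density of size k
```

Plotting `spectrum` on log-log axes shows the expected power-law decay over intermediate sizes once the forced steady state is approached.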

  1. Improving patient handover between teams using a business improvement model: PDSA cycle.

    Science.gov (United States)

    Luther, Vishal; Hammersley, Daniel; Chekairi, Ahmed

    2014-01-01

Medical admission units are continuously under pressure to move patients off the unit to outlying medical wards and allow for new admissions. In a typical district general hospital, doctors working in these medical wards reported that, on average, three patients each week arrived from the medical admission unit before any handover was received, and a further two patients arrived without any handover at all. A quality improvement project was therefore conducted using a 'Plan, Do, Study, Act' cycle model for improvement to address this issue. P - Plan: As there was no framework to support doctors with handover, a series of standard handover procedures were designed. D - Do: The procedures were disseminated to all staff, and championed by key stakeholders, including the clinical director and matron of the medical admission unit. S - Study: Measurements were repeated 3 months later and showed no change in the primary end points. A - Act: The post-take ward round sheet was redesigned, creating a checkbox for a medical admission unit doctor to document that handover had occurred. Nursing staff were prohibited from moving the patient off the ward until this had been completed. This later evolved into a separate handover sheet. Six months later, a repeat study revealed that only one patient each week was arriving before or without a verbal handover. Using a 'Plan, Do, Study, Act' business improvement tool helped to improve patient care.

  2. Improving the representation of radiation interception and photosynthesis for climate model applications

    International Nuclear Information System (INIS)

    Mercado, Lina M.; Huntingford, Chris; Gash, John H.C.; Cox, Peter M.; Jogireddy, Venkata

    2007-01-01

The Joint UK Land Environment Simulator (JULES), which is based on the Met Office Surface Exchange Scheme (MOSES) and is the land surface scheme of the Hadley Centre General Circulation Models (GCMs), has been improved to contain an explicit description of light interception at different canopy levels, which consequently leads to a multilayer approach to scaling from leaf- to canopy-level photosynthesis. We test the improved JULES model at a site in the Amazonian rainforest by comparing against measurements of vertical profiles of radiation through the canopy, eddy covariance measurements of carbon and energy fluxes, and also measurements of carbon isotopic fractionation from top-canopy leaves. Overall, the new light interception formulation improves modelled photosynthetic carbon uptake compared to the standard big-leaf approach used in the original JULES formulation. Additional model improvement was not significant when incorporating more realistic vertical variation of photosynthetic capacity. Even with the improved representation of radiation interception, JULES simulations of net carbon uptake underestimate eddy covariance measurements by 14%. This discrepancy can be removed either by increasing the photosynthetic capacity throughout the canopy or by explicitly including light inhibition of leaf respiration. Along with published evidence of such inhibition of leaf respiration, our study suggests this effect should be considered for inclusion in other GCMs.
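The multilayer scaling idea can be illustrated with a toy calculation: attenuate light through canopy layers by Beer's law and sum a saturating leaf light-response over the layers. This is a hedged sketch only; the extinction coefficient `k`, the capacity `amax`, the quantum efficiency `alpha`, and the light-response form are generic placeholder assumptions, not the JULES formulation.

```python
import numpy as np

def layer_irradiance(I0, lai_total, n_layers, k=0.5):
    """Downward irradiance at the top of each canopy layer (Beer's law).

    I0: top-of-canopy irradiance; k: assumed extinction coefficient.
    """
    lai_above = np.linspace(0, lai_total, n_layers, endpoint=False)
    return I0 * np.exp(-k * lai_above)

def canopy_photosynthesis(I0, lai_total, n_layers, amax=20.0, alpha=0.05):
    """Sum a simple saturating light-response curve over canopy layers."""
    I = layer_irradiance(I0, lai_total, n_layers)
    d_lai = lai_total / n_layers
    # Placeholder leaf response: A = amax * alpha*I / (alpha*I + amax)
    a_leaf = amax * (alpha * I) / (alpha * I + amax)
    return (a_leaf * d_lai).sum()

multi = canopy_photosynthesis(I0=1500.0, lai_total=6.0, n_layers=10)
# "Big leaf": the whole canopy treated as one layer at top-of-canopy light.
big = canopy_photosynthesis(I0=1500.0, lai_total=6.0, n_layers=1)
```

Because lower layers see exponentially less light, the multilayer total is smaller than the big-leaf total, which is the qualitative effect the improved formulation captures.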

  3. Collaborative Project: Building improved optimized parameter estimation algorithms to improve methane and nitrogen fluxes in a climate model

    Energy Technology Data Exchange (ETDEWEB)

    Mahowald, Natalie [Cornell Univ., Ithaca, NY (United States)

    2016-11-29

Soils in natural and managed ecosystems and wetlands are well-known sources of methane, nitrous oxide, and reactive nitrogen gases, but the magnitudes of gas flux to the atmosphere are still poorly constrained. Thus, the reasons for the large increases in atmospheric concentrations of methane and nitrous oxide since the preindustrial period are not well understood. The low atmospheric concentrations of methane and nitrous oxide, which are nonetheless more potent greenhouse gases than carbon dioxide, complicate empirical studies that could provide explanations. In addition to climate concerns, the emissions of reactive nitrogen gases from soils are important to the changing nitrogen balance in the earth system, subject to human management, and may change substantially in the future. Thus, improved modeling of the emission fluxes of these species from the land surface is important. Currently, there are emission modules for methane and some nitrogen species in the Community Earth System Model's Community Land Model (CLM-ME/N); however, there are large uncertainties and problems in the simulations, resulting in coarse estimates. In this proposal, we seek to improve these emission modules by combining state-of-the-art process modules for emissions, available data, and new optimization methods. In earth science problems, we often have substantial data and knowledge of processes in disparate systems, and thus we need to combine data and a general process-level understanding into a model for projections of future climate that are as accurate as possible. The best methodologies for optimization of parameters in earth system models are still being developed. In this proposal we will develop and apply surrogate algorithms that a) were especially developed for computationally expensive simulations like the CLM-ME/N models; b) were (in the earlier surrogate optimization Stochastic RBF) demonstrated to perform very well on computationally expensive complex partial differential equations in

  4. Self-similar structure in the distribution and density of the partition function zeros

    International Nuclear Information System (INIS)

    Huang, M.-C.; Luo, Y.-P.; Liaw, T.-M.

    2003-01-01

Based on the knowledge of the partition function zeros for the cell-decorated triangular Ising model, we analyze the similar structures contained in the distribution pattern and density function of the zeros. The two share the same symmetries, and the emergence of the similar structure on the way toward the infinite decoration level is exhibited explicitly. The distinct features of the formation of the self-similar structure revealed by this model may be quite general.

  5. Improvement and Application of the Softened Strut-and-Tie Model

    Science.gov (United States)

    Fan, Guoxi; Wang, Debin; Diao, Yuhong; Shang, Huaishuai; Tang, Xiaocheng; Sun, Hai

    2017-11-01

Previous experimental research indicates that reinforced concrete beam-column joints play an important role in the mechanical properties of moment-resisting frame structures and therefore require proper design. The aims of this paper are to predict the joint carrying capacity and crack development theoretically. Thus, a rational model needs to be developed. Based on these considerations, the softened strut-and-tie model is selected, introduced, and analyzed. Four adjustments are made, including modifications of the depth of the diagonal strut, the inclination angle of the diagonal compression strut, the smeared stress of mild steel bars embedded in concrete, and the softening coefficient. After that, the carrying capacity of the beam-column joint and crack development are predicted using the improved softened strut-and-tie model. Based on the test results, the improved softened strut-and-tie model can be used to predict the joint carrying capacity and crack development with sufficient accuracy.

  6. Improvement for Amelioration Inventory Model with Weibull Distribution

    Directory of Open Access Journals (Sweden)

    Han-Wen Tuan

    2017-01-01

Full Text Available Most inventory models deal with deteriorating items. On the contrary, only a few papers consider inventory systems in an amelioration environment. We study an amelioration inventory model with a Weibull distribution. However, there are some questionable results in a previous amelioration paper. We first point out those questionable results, which did not yield the optimal solution, and then provide some improvements. We provide a rigorous analytical treatment of the different cases depending on the size of the shape parameter. We present a detailed numerical example for different ranges of the shape parameter to illustrate that our solution method attains the optimal solution. We developed a new amelioration model and provided a detailed analytical procedure for finding the optimal solution. Our findings will help researchers develop their new inventory models.

  7. IMPROVEMENT OF MATHEMATICAL MODELS FOR ESTIMATION OF TRAIN DYNAMICS

    Directory of Open Access Journals (Sweden)

    L. V. Ursulyak

    2017-12-01

Full Text Available Purpose. Using scientific publications, the paper analyzes the mathematical models developed in Ukraine, the CIS countries and abroad for theoretical studies of train dynamics, and also shows the urgency of their further improvement. Methodology. The information base of the research comprised official full-text and abstract databases, scientific works of domestic and foreign scientists, professional periodicals, materials of scientific and practical conferences, and methodological materials of ministries and departments. Analysis of publications on existing mathematical models used to solve a wide range of problems associated with the study of train dynamics shows the expediency of their application. Findings. The results of these studies were used in: 1) the design of new types of draft gears and air distributors; 2) the development of methods for controlling the movement of conventional and connected trains; 3) the creation of appropriate process flow diagrams; 4) the development of energy-saving methods of train driving; 5) the revision of the Construction Codes and Regulations (SNiP II-39.76); 6) the selection of the parameters of the autonomous automatic control system, created at DNURT, for an auxiliary locomotive that is part of a connected train; 7) the creation of computer simulators for the training of locomotive drivers; 8) the assessment of the vehicle dynamic indices characterizing traffic safety. Scientists around the world conduct numerical experiments related to the estimation of train dynamics using mathematical models that need to be constantly improved. Originality. The authors presented the main theoretical postulates that allowed them to develop the existing mathematical models for solving problems related to train dynamics. The analysis of scientific articles published in Ukraine, the CIS countries and abroad allows us to determine the most relevant areas of application of mathematical models. Practical value. The practical value of the results obtained lies in the scientific validity

  8. Improvements to the RADIOM non-LTE model

    Science.gov (United States)

    Busquet, M.; Colombant, D.; Klapisch, M.; Fyfe, D.; Gardner, J.

    2009-12-01

In 1993, we proposed the RADIOM model [M. Busquet, Phys. Fluids 85 (1993) 4191] where an ionization temperature Tz is used to derive non-LTE properties from LTE data. Tz is obtained from an "extended Saha equation" where unbalanced transitions, like radiative decay, give the non-LTE behavior. Since then, major improvements have been made. Tz has been shown to be more than a heuristic value: it describes the actual distribution of excited and ionized states and can be understood as an "effective temperature". Therefore we complement the extended Saha equation by explicitly introducing auto-ionization/dielectronic capture. Also, we use the SCROLL model to benchmark the computed values of Tz.

  9. Improved object optimal synthetic description, modeling, learning, and discrimination by GEOGINE computational kernel

    Science.gov (United States)

    Fiorini, Rodolfo A.; Dacquino, Gianfranco

    2005-03-01

GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D tensor invariants for n-dimensional shape/texture optimal synthetic representation, description and learning, was presented at previous conferences. Improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space, together with a demo application, are presented here. Progressive automatic model generation is discussed. GEOGINE can be used as an efficient computational kernel for fast, reliable application development and delivery in advanced biomedical engineering, biometrics, intelligent computing, target recognition, content image retrieval, and data mining. Ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of the world object. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" to reliably compute optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated n-dimensional shape/texture optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour shape invariant characteristics. As a matter of fact, any highly sophisticated application needing effective, robust object geometric/colour invariant attribute capture and parameterization features, for reliable automated object learning and discrimination, can deeply benefit from GEOGINE's progressive automated model generation computational kernel performance. Main operational advantages over previous

  10. A strategy for improved computational efficiency of the method of anchored distributions

    Science.gov (United States)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
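The bundling idea, one forward-model run per group of similar parametrizations instead of one per sample, can be sketched as below. The quantile-based grouping, the Gaussian likelihood, and the toy forward model are all illustrative assumptions; MAD defines its bundles and likelihoods differently, and the original tutorial is in R rather than Python.

```python
import numpy as np

def bundled_likelihood(samples, forward_model, data, sigma=0.1, n_bundles=10):
    """Approximate per-sample likelihoods by bundling similar
    parametrizations: run the (expensive) forward model once per bundle
    representative instead of once per sample. Bundles here are simple
    quantile bins on the first parameter -- a stand-in for a proper
    similarity grouping."""
    order = np.argsort(samples[:, 0])
    bundles = np.array_split(order, n_bundles)
    lik = np.empty(len(samples))
    for b in bundles:
        rep = samples[b].mean(axis=0)      # bundle representative
        pred = forward_model(rep)          # one FM run per bundle
        ll = np.exp(-0.5 * ((pred - data) / sigma) ** 2).prod()
        lik[b] = ll                        # shared by the whole bundle
    return lik

# Toy forward model: two parameters mapped to two "measurements".
fm = lambda theta: np.array([theta[0], theta[0] + theta[1]])
rng = np.random.default_rng(2)
samples = rng.uniform(-1, 1, size=(200, 2))
data = np.array([0.2, 0.5])
lik = bundled_likelihood(samples, fm, data)
```

With 10 bundles, 200 samples cost only 10 forward-model runs; the price is the coarser, bundle-level likelihood the paper analyzes.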

  11. Tokunaga and Horton self-similarity for level set trees of Markov chains

    International Nuclear Information System (INIS)

    Zaliapin, Ilia; Kovchegov, Yevgeniy

    2012-01-01

    Highlights: ► Self-similar properties of the level set trees for Markov chains are studied. ► Tokunaga and Horton self-similarity are established for symmetric Markov chains and regular Brownian motion. ► Strong, distributional self-similarity is established for symmetric Markov chains with exponential jumps. ► It is conjectured that fractional Brownian motions are Tokunaga self-similar. - Abstract: The Horton and Tokunaga branching laws provide a convenient framework for studying self-similarity in random trees. The Horton self-similarity is a weaker property that addresses the principal branching in a tree; it is a counterpart of the power-law size distribution for elements of a branching system. The stronger Tokunaga self-similarity addresses so-called side branching. The Horton and Tokunaga self-similarity have been empirically established in numerous observed and modeled systems, and proven for two paradigmatic models: the critical Galton–Watson branching process with finite progeny and the finite-tree representation of a regular Brownian excursion. This study establishes the Tokunaga and Horton self-similarity for a tree representation of a finite symmetric homogeneous Markov chain. We also extend the concept of Horton and Tokunaga self-similarity to infinite trees and establish self-similarity for an infinite-tree representation of a regular Brownian motion. We conjecture that fractional Brownian motions are also Tokunaga and Horton self-similar, with self-similarity parameters depending on the Hurst exponent.

  12. Study on unsteady tip leakage vortex cavitation in an axial-flow pump using an improved filter-based model

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Desheng; Shi, Lei; Zhao, Ruijie; Shi, Weidong; Pan, Qiang [Jiangsu University, Zhenjiang (China); Esch, B. P. [Eindhoven University of Technology, Eindhoven (Netherlands)

    2017-02-15

The aim of the present investigation is to simulate and analyze the tip leakage flow structure and the instantaneous evolution of tip vortex cavitation in a scaled axial-flow pump model. An improved filter-based turbulence model with a density correction and a homogeneous cavitation model were used for this work. The results show that when entering the tip clearance, the backward flow separates from the blade tip near the pressure side, resulting in the generation of a corner vortex with a high magnitude of turbulence kinetic energy. Then, at the exit of the tip clearance, the leakage jets re-attach on the blade tip wall. Moreover, the maximum swirling strength method was successfully employed in identifying the tip leakage vortex (TLV) core and a counter-rotating induced vortex near the end-wall. The three-dimensional cavitation patterns and in-plane cavitation structures obtained by the improved numerical method agree well with the experimental results. At the sheet cavitation trailing edge in the tip region, the perpendicular cavitation cloud induced by the TLV sheds and migrates toward the pressure side of the neighboring blade. During its migration, it breaks down abruptly and generates a large number of small-scale cavities, leading to severe degradation of the pump performance, which is similar to the phenomenon observed by Tan et al.

  13. Improving Predictive Modeling in Pediatric Drug Development: Pharmacokinetics, Pharmacodynamics, and Mechanistic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Slikker, William; Young, John F.; Corley, Rick A.; Dorman, David C.; Conolly, Rory B.; Knudsen, Thomas; Erstad, Brian L.; Luecke, Richard H.; Faustman, Elaine M.; Timchalk, Chuck; Mattison, Donald R.

    2005-07-26

A workshop was conducted on November 18-19, 2004, to address the issue of improving predictive models for drug delivery to developing humans. Although considerable progress has been made for adult humans, large gaps remain for predicting pharmacokinetic/pharmacodynamic (PK/PD) outcome in children because most adult models have not been tested during development. The goals of the meeting included a description of when, during development, infants/children become adultlike in handling drugs. The issue of incorporating the most recent advances into the predictive models was also addressed: both the use of imaging approaches and genomic information were considered. Disease state, as exemplified by obesity, was addressed as a modifier of drug pharmacokinetics and pharmacodynamics during development. Issues addressed in this workshop should be considered in the development of new predictive and mechanistic models of drug kinetics and dynamics in the developing human.

  14. δ-Similar Elimination to Enhance Search Performance of Multiobjective Evolutionary Algorithms

    Science.gov (United States)

    Aguirre, Hernán; Sato, Masahiko; Tanaka, Kiyoshi

In this paper, we propose δ-similar elimination to improve the search performance of multiobjective evolutionary algorithms in combinatorial optimization problems. This method eliminates similar individuals in objective space in order to distribute selection fairly among the different regions of the instantaneous Pareto front. We investigate four elimination methods, analyzing their effects using NSGA-II. In addition, we compare the search performance of NSGA-II enhanced by our method and NSGA-II enhanced by controlled elitism.
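One plausible eliminating rule, greedily removing individuals within a Chebyshev distance δ of an already-kept individual in objective space, can be sketched as follows. The distance metric and greedy order are assumptions for illustration, since the paper compares four different methods.

```python
import numpy as np

def delta_similar_elimination(F, delta):
    """Greedily keep individuals whose objective vectors differ by more
    than delta (Chebyshev distance) from every individual kept so far.

    F: (n, m) array of objective values. Returns indices of survivors.
    This is one plausible eliminating rule, not the paper's definition.
    """
    kept = []
    for i, f in enumerate(F):
        if all(np.max(np.abs(f - F[j])) > delta for j in kept):
            kept.append(i)
    return kept

# Three near-duplicate points and one distant point in objective space.
F = np.array([[0.00, 1.00],
              [0.01, 1.01],
              [0.02, 0.99],
              [1.00, 0.00]])
survivors = delta_similar_elimination(F, delta=0.1)
```

Only one representative of the clustered region survives alongside the distant point, which is exactly the fairer spread of selection pressure across the front that the method aims for.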

  15. Typical parameters of the plasma chemical similarity in non-isothermal reactive plasmas

    International Nuclear Information System (INIS)

    Gundermann, S.; Jacobs, H.; Miethke, F.; Rutsher, A.; Wagner, H.E.

    1996-01-01

    The substance of physical similarity principles is contained in parameters which govern the comparison of different realizations of a model device. Because similarity parameters for non-isothermal plasma chemical reactors are unknown to a great extent, an analysis of relevant equations is given together with some experimental results. Modelling of the reactor and experimental results for the ozone synthesis are presented

  16. Scaling, Similarity, and the Fourth Paradigm for Hydrology

    Science.gov (United States)

    Peters-Lidard, Christa D.; Clark, Martyn; Samaniego, Luis; Verhoest, Niko E. C.; van Emmerik, Tim; Uijlenhoet, Remko; Achieng, Kevin; Franz, Trenton E.; Woods, Ross

    2017-01-01

    In this synthesis paper addressing hydrologic scaling and similarity, we posit that the search for universal laws of hydrology is hindered by our focus on computational simulation (the third paradigm), and assert that it is time for hydrology to embrace a fourth paradigm of data-intensive science. Advances in information-based hydrologic science, coupled with an explosion of hydrologic data and advances in parameter estimation and modelling, have laid the foundation for a data-driven framework for scrutinizing hydrological scaling and similarity hypotheses. We summarize important scaling and similarity concepts (hypotheses) that require testing, describe a mutual information framework for testing these hypotheses, describe boundary condition, state flux, and parameter data requirements across scales to support testing these hypotheses, and discuss some challenges to overcome while pursuing the fourth hydrological paradigm. We call upon the hydrologic sciences community to develop a focused effort towards adopting the fourth paradigm and apply this to outstanding challenges in scaling and similarity.
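The mutual information framework mentioned above can be illustrated with a simple plug-in estimator for paired discrete samples (a sketch only; the paper's actual framework, binning choices, and bias corrections are not reproduced here):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples.
    Real hydrologic analyses need careful discretization and
    finite-sample bias correction; this is the bare estimator."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        # p * log2( p(x,y) / (p(x) * p(y)) ), with counts rescaled by n
        mi += p * math.log2(p * n * n / (px[x] * py[y]))
    return mi

# Perfectly coupled binary series share 1 bit; a constant series shares 0.
a = [0, 1, 0, 1, 0, 1, 0, 1]
mi_coupled = mutual_information(a, a[:])
mi_constant = mutual_information(a, [0] * 8)
```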

  17. Applications of Location Similarity Measures and Conceptual Spaces to Event Coreference and Classification

    Science.gov (United States)

    McConky, Katie Theresa

    2013-01-01

    This work covers topics in event coreference and event classification from spoken conversation. Event coreference is the process of identifying descriptions of the same event across sentences, documents, or structured databases. Existing event coreference work focuses on sentence similarity models or feature based similarity models requiring slot…

  18. Assessment and improvement of condensation model in RELAP5/MOD3

    Energy Technology Data Exchange (ETDEWEB)

    Rho, Hui Cheon; Choi, Kee Yong; Park, Hyeon Sik; Kim, Sang Jae [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of); Lee, Sang Il [Korea Power Engineering Co., Inc., Seoul (Korea, Republic of)

    1997-07-15

    The objective of this research is to remove the uncertainty of the condensation model through the assessment and improvement of the various heat transfer correlations used in the RELAP5/MOD3 code. The condensation model of the standard RELAP5/MOD3 code is systematically arranged and analyzed. A condensation heat transfer database is constructed from the previous experimental data on various condensation phenomena. Based on the constructed database, the condensation models in the code are assessed and improved. An experiment on the reflux condensation in a tube of steam generator in the presence of noncondensable gases is planned to acquire the experimental data.

  19. Methods improvements incorporated into the SAPHIRE ASP models

    International Nuclear Information System (INIS)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.; Smith, C.L.; Rasmuson, D.M.

    1994-01-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methodology, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements

  20. Methods improvements incorporated into the SAPHIRE ASP models

    International Nuclear Information System (INIS)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.

    1995-01-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements

  1. Improving Project Management Using Formal Models and Architectures

    Science.gov (United States)

    Kahn, Theodore; Sturken, Ian

    2011-01-01

    This talk discusses the advantages formal modeling and architecture brings to project management. These emerging technologies have both great potential and challenges for improving information available for decision-making. The presentation covers standards, tools and cultural issues needing consideration, and includes lessons learned from projects the presenters have worked on.

  2. Improved variable reduction in partial least squares modelling by Global-Minimum Error Uninformative-Variable Elimination.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2017-08-22

    The calibration performance of Partial Least Squares regression (PLS) can be improved by eliminating uninformative variables. For PLS, many variable elimination methods have been developed. One is the Uninformative-Variable Elimination for PLS (UVE-PLS). However, the number of variables retained by UVE-PLS is usually still large. In UVE-PLS, variable elimination is repeated as long as the root mean squared error of cross validation (RMSECV) is decreasing. The set of variables in this first local minimum is retained. In this paper, a modification of UVE-PLS is proposed and investigated, in which UVE is repeated until no further reduction in variables is possible, followed by a search for the global RMSECV minimum. The method is called Global-Minimum Error Uninformative-Variable Elimination for PLS, denoted as GME-UVE-PLS or simply GME-UVE. After each iteration, the predictive ability of the PLS model, built with the remaining variable set, is assessed by RMSECV. The variable set with the global RMSECV minimum is then finally selected. The goal is to obtain smaller sets of variables with predictability similar to or better than that of models from the classical UVE-PLS method. The performance of the GME-UVE-PLS method is investigated using four data sets, i.e. a simulated set, NIR and NMR spectra, and a theoretical molecular descriptors set, resulting in twelve profile-response (X-y) calibrations. The selective and predictive performances of the models resulting from GME-UVE-PLS are statistically compared to those from UVE-PLS and 1-step UVE using one-sided paired t-tests. The results demonstrate that variable reduction with the proposed GME-UVE-PLS method usually eliminates significantly more variables than the classical UVE-PLS, while the predictive abilities of the resulting models are better. With GME-UVE-PLS, a lower number of uninformative variables, without a chemical meaning for the response, may be retained than with UVE-PLS. The selectivity of the classical UVE method
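The difference between the classical stopping rule and the proposed global search can be shown independently of PLS itself: given the RMSECV recorded after each elimination step, UVE-PLS stops at the first local minimum, whereas GME-UVE runs elimination to exhaustion and scans the whole trace (function and variable names here are illustrative, not from the paper):

```python
def first_local_minimum(rmsecv_trace):
    """Classical UVE-PLS stopping rule: eliminate variables as long as
    RMSECV decreases, and keep the set at the first local minimum."""
    for i in range(1, len(rmsecv_trace)):
        if rmsecv_trace[i] > rmsecv_trace[i - 1]:
            return i - 1
    return len(rmsecv_trace) - 1

def global_minimum(rmsecv_trace):
    """GME-UVE rule: pick the step with the globally smallest RMSECV."""
    return min(range(len(rmsecv_trace)), key=rmsecv_trace.__getitem__)

# A trace where the first dip is not the best achievable error:
trace = [0.90, 0.85, 0.88, 0.80, 0.75, 0.82]
classical_stop = first_local_minimum(trace)
global_best = global_minimum(trace)
```

On this synthetic trace the classical rule stops early, while the global search finds a later step with a lower cross-validation error, which is exactly the situation GME-UVE is designed to exploit.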

  3. The improvement of the heat transfer model for sodium-water reaction jet code

    International Nuclear Information System (INIS)

    Hashiguchi, Yoshirou; Yamamoto, Hajime; Kamoshida, Norio; Murata, Shuuichi

    2001-02-01

    For confirming a reasonable DBL (Design Base Leak) for the steam generator (SG), it is necessary to evaluate sodium-water reaction (SWR) phenomena in an actual steam generator realistically. The heat transfer model of the sodium-water reaction (SWR) jet code (LEAP-JET ver.1.40) was improved, and the code was applied to analysis of the water injection tests to confirm its validity. In the improved code, a heat transfer model between the fluid inside the heat exchange tube and the tube wall was introduced in place of the prior model, a heat capacity model that lumped together the heat capacities of the tube wall and the inside fluid. The new heat transfer model treats the fluid inside the heat exchange tube as either water or sodium, and typical heat transfer equations used in SG design were also introduced. Additional work was carried out to improve the stability of the calculation over long calculation times. Test calculations using the improved code (LEAP-JET ver.1.50) were carried out under the conditions of the SWAT-IR·Run-HT-2 test. The computed SWR jet behavior and the influence of the heat transfer model on the results were confirmed to be reasonable. The user's manual for the improved code (LEAP-JET ver.1.50) was also revised, with an additional I/O manual and explanations of the heat transfer model and new variable names. (author)

  4. Surgical Process Improvement: Impact of a Standardized Care Model With Electronic Decision Support to Improve Compliance With SCIP Inf-9.

    Science.gov (United States)

    Cook, David J; Thompson, Jeffrey E; Suri, Rakesh; Prinsen, Sharon K

    2014-01-01

    The absence of standardization in the surgical care process, exemplified in a "solution shop" model, can lead to unwarranted variation, increased cost, and reduced quality. A comprehensive effort was undertaken to improve quality of care around indwelling bladder catheter use following surgery by creating a "focused factory" model within the cardiac surgical practice. Baseline compliance with Surgical Care Improvement Inf-9, removal of urinary catheter by the end of surgical postoperative day 2, was determined. Comparison of baseline data to postintervention results showed clinically important reductions in the duration of indwelling bladder catheters as well as marked reduction in practice variation. Following the intervention, Surgical Care Improvement Inf-9 guidelines were met in 97% of patients. Although clinical quality improvement was notable, the process used to accomplish it (identification of patients suitable for standardized pathways, protocol application, and electronic systems to support the standardized practice model) has potentially greater relevance than the specific clinical results. © 2013 by the American College of Medical Quality.

  5. Adjusting the Stems Regional Forest Growth Model to Improve Local Predictions

    Science.gov (United States)

    W. Brad Smith

    1983-01-01

    A simple procedure using double sampling is described for adjusting growth in the STEMS regional forest growth model to compensate for subregional variations. Predictive accuracy of the STEMS model (a distance-independent, individual tree growth model for Lake States forests) was improved by using this procedure
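A common double-sampling correction of the kind described, multiplying regional model predictions by the observed-to-predicted ratio estimated on a locally remeasured subsample, can be sketched as follows (illustrative only; not the published STEMS adjustment procedure):

```python
def local_adjustment_factor(observed, predicted):
    """Ratio-of-means correction estimated from a locally remeasured
    subsample of trees: total observed growth over total predicted
    growth. A generic double-sampling sketch, not the STEMS formula."""
    return sum(observed) / sum(predicted)

def adjust_predictions(predictions, factor):
    """Scale the regional model's growth predictions to the subregion."""
    return [p * factor for p in predictions]

# Subsample where the regional model over-predicts growth:
obs = [1.0, 1.2, 0.9]     # remeasured growth on the subsample
pred = [1.25, 1.45, 1.05] # regional-model predictions for the same trees
factor = local_adjustment_factor(obs, pred)
adjusted = adjust_predictions(pred, factor)
```

By construction, the adjusted predictions reproduce the subsample's total observed growth, which is what anchors the regional model to local conditions.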

  6. Improving yaw dynamics by feedforward rear wheel steering

    NARCIS (Netherlands)

    Besselink, I.J.M.; Veldhuizen, T.J.; Nijmeijer, H.

    2008-01-01

    Active rear wheel steering can be applied to improve vehicle yaw dynamics. In this paper two possible control algorithms are discussed. The first method is a yaw rate feedback controller with a reference model, which has been reported in a similar form previously in literature. The second controller

  7. Coastal Improvements for Tide Models: The Impact of ALES Retracker

    Directory of Open Access Journals (Sweden)

    Gaia Piccioni

    2018-05-01

    Since the launch of the first altimetry satellites, ocean tide models have been improved dramatically for deep and shallow waters. However, issues are still found for areas of great interest for climate change investigations: the coastal regions. The purpose of this study is to analyze the influence of the ALES coastal retracker on tide modeling in these regions with respect to a standard open ocean retracker. The approach used to compute the tidal constituents is an updated and along-track version of the Empirical Ocean Tide model developed at DGFI-TUM. The major constituents are derived from a least-squares harmonic analysis of sea level residuals based on the FES2014 tide model. The results obtained with ALES are compared with the ones estimated with the standard product. A lower fitting error is found for the ALES solution, especially for distances closer than 20 km from the coast. In comparison with in situ data, the root mean squared error computed with ALES can reach an improvement larger than 2 cm at single locations, with an average impact of over 10% for tidal constituents K2, O1, and P1. For Q1, the improvement is over 25%. It was observed that improvements to the root-sum squares are larger for distances closer than 10 km to the coast, independently of the sea state. Finally, the performance of the solutions changes according to the satellite’s flight direction: for tracks approaching land from the open ocean, root mean square differences larger than 1 cm are found in comparison to tracks going from land to ocean.
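The least-squares harmonic analysis underlying such tide models can be sketched for a single constituent (a toy illustration; the EOT/FES-style models referenced above fit many constituents jointly to along-track sea-level residuals):

```python
import math

def fit_constituent(times_h, heights, period_h):
    """Least-squares fit of one tidal constituent
    h(t) = A*cos(w*t) + B*sin(w*t) via the 2x2 normal equations.
    Illustrative sketch only; real analyses solve for many
    constituents simultaneously."""
    w = 2.0 * math.pi / period_h
    scc = scs = sss = sc = ss = 0.0
    for t, h in zip(times_h, heights):
        c, s = math.cos(w * t), math.sin(w * t)
        scc += c * c; scs += c * s; sss += s * s
        sc += c * h; ss += s * h
    det = scc * sss - scs * scs
    return ((sc * sss - ss * scs) / det, (scc * ss - scs * sc) / det)

# Synthetic hourly record with a known M2-like (12.42 h) signal:
period = 12.42
times = [float(t) for t in range(200)]
heights = [0.5 * math.cos(2 * math.pi * t / period)
           + 0.2 * math.sin(2 * math.pi * t / period) for t in times]
A, B = fit_constituent(times, heights, period)
```

On noise-free data the fit recovers the injected amplitudes exactly; with real residuals the same normal equations yield the amplitude and phase estimates whose errors the study compares between retrackers.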

  8. Radiatively-driven winds: model improvements, ionization balance and the infrared spectrum

    International Nuclear Information System (INIS)

    Castor, J.I.

    1979-01-01

    Recent improvements to theoretical stellar wind models and the results of empirical modelling of the ionization balance and the infrared continuum are discussed. The model of a wind driven by radiation pressure in spectral lines is improved by accounting for overlap of the driving lines, dependence of ionization balance on density, and stellar rotation. These effects produce a softer velocity law than that given by Castor, Abbott and Klein (1975). The ionization balance in zeta Puppis is shown to agree with that estimated for an optically thick wind at a gas temperature of 60,000 K. The ionization model is not unique. The infrared continuum of zeta Pup measured by Barlow and Cohen is fitted to a cool model with a linear rise of velocity with radius; this fit is also not unique. It is concluded that one should try to find a model that fits several kinds of evidence simultaneously. (Auth.)

  9. Chirped self-similar solutions of a generalized nonlinear Schroedinger equation

    Energy Technology Data Exchange (ETDEWEB)

    Fei Jin-Xi [Lishui Univ., Zhejiang (China). College of Mathematics and Physics; Zheng Chun-Long [Shaoguan Univ., Guangdong (China). School of Physics and Electromechanical Engineering; Shanghai Univ. (China). Shanghai Inst. of Applied Mathematics and Mechanics

    2011-01-15

    An improved homogeneous balance principle and an F-expansion technique are used to construct exact chirped self-similar solutions to the generalized nonlinear Schroedinger equation with distributed dispersion, nonlinearity, and gain coefficients. Such solutions exist under certain conditions and impose constraints on the functions describing dispersion, nonlinearity, and distributed gain function. The results show that the chirp function is related only to the dispersion coefficient; however, it affects all of the system parameters, which influence the form of the wave amplitude. A few characteristic examples and some simple chirped self-similar waves are presented. (orig.)

  10. Reversing the similarity effect: The effect of presentation format.

    Science.gov (United States)

    Cataldo, Andrea M; Cohen, Andrew L

    2018-06-01

    A context effect is a change in preference that occurs when alternatives are added to a choice set. Models of preferential choice that account for context effects largely assume a within-dimension comparison process. It has been shown, however, that the format in which a choice set is presented can influence comparison strategies. That is, a by-alternative or by-dimension grouping of the dimension values encourage within-alternative or within-dimension comparisons, respectively. For example, one classic context effect, the compromise effect, is strengthened by a by-dimension presentation format. Extrapolation from this result suggests that a second context effect, the similarity effect, will actually reverse when stimuli are presented in a by-dimension format. In the current study, we presented participants with a series of apartment choice sets designed to elicit the similarity effect, with either a by-alternative or by-dimension presentation format. Participants in the by-alternative condition demonstrated a standard similarity effect; however, participants in the by-dimension condition demonstrated a strong reverse similarity effect. The present data can be accounted for by Multialternative Decision Field Theory (MDFT) and the Multiattribute Linear Ballistic Accumulator (MLBA), but not Elimination by Aspects (EBA). Indeed, when some weak assumptions of within-dimension processes are met, MDFT and the MLBA predict the reverse similarity effect. These modeling results suggest that the similarity effect is governed by either forgetting and inhibition (MDFT), or attention to positive or negative differences (MLBA). These results demonstrate that flexibility in the comparison process needs to be incorporated into theories of preferential choice. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Development of Improved Mechanistic Deterioration Models for Flexible Pavements

    DEFF Research Database (Denmark)

    Ullidtz, Per; Ertman, Hans Larsen

    1998-01-01

    The paper describes a pilot study in Denmark with the main objective of developing improved mechanistic deterioration models for flexible pavements based on an accelerated full scale test on an instrumented pavement in the Danish Road Testing Machine. The study was the first in the "International Pavement Subgrade Performance Study" sponsored by the Federal Highway Administration (FHWA), USA. The paper describes in detail the data analysis and the resulting models for rutting, roughness, and a model for the plastic strain in the subgrade. The reader will get an understanding of the work needed...

  12. Improving breast cancer survival analysis through competition-based multidimensional modeling.

    Directory of Open Access Journals (Sweden)

    Erhan Bilal

    Full Text Available Breast cancer is the most common malignancy in women and is responsible for hundreds of thousands of deaths annually. As with most cancers, it is a heterogeneous disease and different breast cancer subtypes are treated differently. Understanding the difference in prognosis for breast cancer based on its molecular and phenotypic features is one avenue for improving treatment by matching the proper treatment with molecular subtypes of the disease. In this work, we employed a competition-based approach to modeling breast cancer prognosis using large datasets containing genomic and clinical information and an online real-time leaderboard program used to speed feedback to the modeling team and to encourage each modeler to work towards achieving a higher ranked submission. We find that machine learning methods combined with molecular features selected based on expert prior knowledge can improve survival predictions compared to current best-in-class methodologies and that ensemble models trained across multiple user submissions systematically outperform individual models within the ensemble. We also find that model scores are highly consistent across multiple independent evaluations. This study serves as the pilot phase of a much larger competition open to the whole research community, with the goal of understanding general strategies for model optimization using clinical and molecular profiling data and providing an objective, transparent system for assessing prognostic models.

  13. Improvement of the model for surface process of tritium release from lithium oxide

    International Nuclear Information System (INIS)

    Yamaki, Daiju; Iwamoto, Akira; Jitsukawa, Shiro

    2000-01-01

    Among the various tritium transport processes in lithium ceramics, the importance and the detailed mechanism of surface reactions remain to be elucidated. A dynamic adsorption and desorption model for tritium desorption from lithium ceramics, especially Li2O, was constructed. From the experimental results, it was considered that both H2 and H2O are dissociatively adsorbed on Li2O and generate OH- on the surface. In the first model, developed in 1994, it was assumed that the dissociative adsorption of either H2 or H2O on Li2O generates two OH- ions on the surface. However, recent calculation results show that the generation of one OH- and one H- is more stable than that of two OH- ions by the dissociative adsorption of H2. Therefore, the treatment of H2 adsorption and desorption in the first model is revised, and the tritium release behavior from the Li2O surface is evaluated again using the improved model. The tritium residence time on the Li2O surface is calculated using the improved model, and the results are compared with the experimental results. The calculation results using the improved model agree better with the experimental results than those using the first model

  14. Policy modeling for energy efficiency improvement in US industry

    International Nuclear Information System (INIS)

    Worrell, Ernst; Price, Lynn; Ruth, Michael

    2001-01-01

    We are at the beginning of a process of evaluating and modeling the contribution of policies to improve energy efficiency. Three recent policy studies trying to assess the impact of energy efficiency policies in the United States are reviewed. The studies represent an important step in the analysis of climate change mitigation strategies. All studies model the estimated policy impact, rather than the policy itself. Often the policy impacts are based on assumptions, as the effects of a policy are not certain. Most models only incorporate economic (or price) tools, which recent studies have proven to be insufficient to estimate the impacts, costs and benefits of mitigation strategies. The reviewed studies are a first effort to capture the effects of non-price policies. The studies contribute to a better understanding of the role of policies in improving energy efficiency and mitigating climate change. All policy scenarios result in substantial energy savings compared to the baseline scenario used, as well as substantial net benefits to the U.S. economy

  15. Improved SAFARI-1 research reactor irradiation position modeling in OSCAR-3 code system

    International Nuclear Information System (INIS)

    Moloko, L. E.; Belal, M. G. A. H.

    2009-01-01

    The demand on the availability of irradiation positions in the SAFARI-1 reactor is continuously increasing due to the commercial pressure to produce isotopes more efficiently. This calls for calculational techniques and modeling methods to be improved regularly to optimize irradiation services. The irradiation position models are improved using the OSCAR-3 code system, and results are compared to experimental measurements. It is concluded that the irradiation position models are essential if realistic core follow and reload studies are to be performed and most importantly, for the realization of improved agreement between experimental data and calculated results. (authors)

  16. Modeling task-specific neuronal ensembles improves decoding of grasp

    Science.gov (United States)

    Smith, Ryan J.; Soares, Alcimar B.; Rouse, Adam G.; Schieber, Marc H.; Thakor, Nitish V.

    2018-06-01

    Objective. Dexterous movement involves the activation and coordination of networks of neuronal populations across multiple cortical regions. Attempts to model firing of individual neurons commonly treat the firing rate as directly modulating with motor behavior. However, motor behavior may additionally be associated with modulations in the activity and functional connectivity of neurons in a broader ensemble. Accounting for variations in neural ensemble connectivity may provide additional information about the behavior being performed. Approach. In this study, we examined neural ensemble activity in primary motor cortex (M1) and premotor cortex (PM) of two male rhesus monkeys during performance of a center-out reach, grasp and manipulate task. We constructed point process encoding models of neuronal firing that incorporated task-specific variations in the baseline firing rate as well as variations in functional connectivity with the neural ensemble. Models were evaluated both in terms of their encoding capabilities and their ability to properly classify the grasp being performed. Main results. Task-specific ensemble models correctly predicted the performed grasp with over 95% accuracy and were shown to outperform models of neuronal activity that assume only a variable baseline firing rate. Task-specific ensemble models exhibited superior decoding performance in 82% of units in both monkeys (p < 0.01). Inclusion of ensemble activity also broadly improved the ability of models to describe observed spiking. Encoding performance of task-specific ensemble models, measured by spike timing predictability, improved upon baseline models in 62% of units. Significance. These results suggest that additional discriminative information about motor behavior found in the variations in functional connectivity of neuronal ensembles located in motor-related cortical regions is relevant to decode complex tasks such as grasping objects, and may serve as the basis for more

  17. The use of discrete-event simulation modelling to improve radiation therapy planning processes.

    Science.gov (United States)

    Werker, Greg; Sauré, Antoine; French, John; Shechter, Steven

    2009-07-01

    The planning portion of the radiation therapy treatment process at the British Columbia Cancer Agency is efficient but nevertheless contains room for improvement. The purpose of this study is to show how a discrete-event simulation (DES) model can be used to represent this complex process and to suggest improvements that may reduce the planning time and ultimately reduce overall waiting times. A simulation model of the radiation therapy (RT) planning process was constructed using the Arena simulation software, representing the complexities of the system. Several types of inputs feed into the model; these inputs come from historical data, a staff survey, and interviews with planners. The simulation model was validated against historical data and then used to test various scenarios to identify and quantify potential improvements to the RT planning process. Simulation modelling is an attractive tool for describing complex systems, and can be used to identify improvements to the processes involved. It is possible to use this technique in the area of radiation therapy planning with the intent of reducing process times and subsequent delays for patient treatment. In this particular system, reducing the variability and length of oncologist-related delays contributes most to improving the planning time.
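The mechanism behind discrete-event simulation tools such as Arena is a time-ordered event queue: pop the earliest event, execute it, let it schedule follow-up events. A minimal sketch (the clinic's actual planning model, activities, and durations are not reproduced here):

```python
import heapq

def run_des(initial_events, horizon):
    """Minimal discrete-event loop. Each event is a tuple
    (time, name, schedule_next), where schedule_next(time) returns the
    follow-up events it triggers. A sketch of the DES mechanism, not
    the BC Cancer Agency model itself."""
    queue = list(initial_events)
    heapq.heapify(queue)
    log = []
    while queue:
        time, name, schedule_next = heapq.heappop(queue)
        if time > horizon:
            break
        log.append((time, name))
        for follow_up in schedule_next(time):
            heapq.heappush(queue, follow_up)
    return log

# Two hypothetical planning steps: contouring finishes, then a plan
# review starts 2 hours later.
def contour_done(t):
    return [(t + 2.0, "plan_review", lambda _t: [])]

log = run_des([(0.0, "contouring", contour_done)], horizon=10.0)
```

Replacing the fixed 2-hour delay with sampled distributions (and adding resources such as planners and oncologists) is what turns this skeleton into a process-improvement model of the kind the study describes.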

  18. Improved ensemble-mean forecast skills of ENSO events by a zero-mean stochastic model-error model of an intermediate coupled model

    Science.gov (United States)

    Zheng, F.; Zhu, J.

    2015-12-01

    To perform an ensemble-based ENSO probabilistic forecast, the crucial issue is to design a reliable ensemble prediction strategy that should include the major uncertainties of a forecast system. In this study, we developed a new general ensemble perturbation technique to improve the ensemble-mean predictive skill of forecasting ENSO using an intermediate coupled model (ICM). The model uncertainties are first estimated and analyzed from EnKF analysis results through assimilating observed SST. Then, based on the pre-analyzed properties of the model errors, a zero-mean stochastic model-error model is developed to mainly represent the model uncertainties induced by some important physical processes missed in the coupled model (i.e., stochastic atmospheric forcing/MJO, extra-tropical cooling and warming, Indian Ocean Dipole mode, etc.). Each member of an ensemble forecast is perturbed by the stochastic model-error model at each step during the 12-month forecast process, and the stochastic perturbations are added into the modeled physical fields to mimic the presence of these high-frequency stochastic noises and model biases and their effect on the predictability of the coupled system. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr retrospective forecast experiments. The two forecast schemes are differentiated by whether they considered the model stochastic perturbations, with both initialized by the ensemble-mean analysis states from EnKF. The comparison results suggest that the stochastic model-error perturbations have significant and positive impacts on improving the ensemble-mean prediction skills during the entire 12-month forecast process.
Because the nonlinear feature of the coupled model can induce the nonlinear growth of the added stochastic model errors with model integration, especially through the nonlinear heating mechanism with the vertical advection term of the model, the
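The perturbation strategy described, adding zero-mean stochastic model-error terms to every member at each integration step and then averaging, can be sketched generically (a toy scalar system with illustrative names, not the ICM/EnKF framework of the abstract):

```python
import random

def ensemble_mean_forecast(x0, n_members, n_steps, step, sigma, seed=0):
    """Integrate each ensemble member with zero-mean Gaussian
    perturbations added to the state at every step, then return the
    ensemble-mean trajectory. A generic sketch of the perturbation
    idea, not the coupled-model implementation."""
    rng = random.Random(seed)
    members = [x0] * n_members
    means = []
    for _ in range(n_steps):
        members = [step(x) + rng.gauss(0.0, sigma) for x in members]
        means.append(sum(members) / n_members)
    return means

# Toy damped dynamics over a 12-step "forecast": because the
# perturbations have zero mean, the ensemble mean stays close to the
# unperturbed trajectory while individual members spread out.
traj = ensemble_mean_forecast(1.0, n_members=200, n_steps=12,
                              step=lambda x: 0.9 * x, sigma=0.05)
```

In a nonlinear model the perturbations do not simply average out; their growth through the dynamics is what lets the ensemble sample forecast uncertainty, which is the effect the retrospective experiments quantify.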

  19. Design report on SCDAP/RELAP5 model improvements - debris bed and molten pool behavior

    International Nuclear Information System (INIS)

    Allison, C.M.; Rempe, J.L.; Chavez, S.A.

    1994-11-01

    The SCDAP/RELAP5/MOD3 computer code is designed to describe the overall reactor coolant system thermal-hydraulic response, core damage progression, and in combination with VICTORIA, fission product release and transport during severe accidents. Improvements for existing debris bed and molten pool models in the SCDAP/RELAP5/MOD3.1 code are described in this report. Model improvements to address (a) debris bed formation, heating, and melting; (b) molten pool formation and growth; and (c) molten pool crust failure are discussed. Relevant data, existing models, proposed modeling changes, and the anticipated impact of the changes are discussed. Recommendations for the assessment of improved models are provided

  20. Event Shape Sorting: selecting events with similar evolution

    Directory of Open Access Journals (Sweden)

    Tomášik Boris

    2017-01-01

    We present a novel method for the organisation of events. The method is based on comparing event-by-event histograms of a chosen quantity Q that is measured for each particle in every event. The events are organised in such a way that those with similar shapes of their Q-histograms end up placed close to each other. We apply the method to histograms of the azimuthal angle of the produced hadrons in ultrarelativistic nuclear collisions. By selecting events with a similar azimuthal shape of their hadron distribution, one chooses events which likely underwent a similar evolution from the initial state to freeze-out. Such events can more easily be compared to theoretical simulations where all conditions can be controlled. We illustrate the method on data simulated by the AMPT model.
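A much-simplified stand-in for Event Shape Sorting is to order events by a single shape statistic of their azimuthal histograms, e.g. a second-harmonic (elliptic) coefficient. The published method iteratively sorts the full histograms rather than one number; everything below is an illustrative sketch:

```python
import math

def q2(phis):
    """Magnitude of the second Fourier harmonic of an event's azimuthal
    angle distribution: a crude one-number proxy for histogram shape."""
    n = len(phis)
    cx = sum(math.cos(2 * p) for p in phis) / n
    sy = sum(math.sin(2 * p) for p in phis) / n
    return math.hypot(cx, sy)

def sort_events_by_shape(events):
    """Place events with similar azimuthal shape next to each other by
    sorting on q2. Illustrative only; Event Shape Sorting compares the
    full Q-histograms event by event."""
    return sorted(events, key=q2)

# An isotropic event versus a strongly elliptic one:
iso = [2 * math.pi * k / 100 for k in range(100)]
elliptic = [0.0] * 50 + [math.pi] * 50
ordered = sort_events_by_shape([elliptic, iso])
```

After sorting, neighbouring events have similar azimuthal shapes, so averaging within a window of neighbours selects events that plausibly shared a similar evolution.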

  1. Development of Improved Models and Designs for Coated-Particle Gas Reactor Fuels (I-NERI Annual Report)

    International Nuclear Information System (INIS)

    Petti, David Andrew; Maki, John Thomas; Languille, Alain; Martin, Philippe; Ballinger, Ronald

    2002-01-01

The objective of this INERI project is to develop improved fuel behavior models for gas reactor coated particle fuels and to develop improved coated-particle fuel designs that can be used reliably at very high burnups and potentially in fast gas-cooled reactors. Thermomechanical, thermophysical, and physicochemical material properties data were compiled by both the US and France, and preliminary assessments were conducted. Comparison between U.S. and European data revealed many similarities and a few important differences. In all cases, the data needed for accurate fuel performance modeling of coated particle fuel at high burnup were lacking. The development of the INEEL fuel performance model, PARFUME, continued from earlier efforts. The statistical model being used to simulate the detailed finite element calculations is being upgraded and improved to allow for changes in fuel design attributes (e.g. thickness of layers, dimensions of the kernel) as well as changes in important material properties, to increase the flexibility of the code. In addition, modeling of other potentially important failure modes, such as debonding and asphericity, was started. A paper on the status of the model was presented at the HTR-2002 meeting in Petten, Netherlands, in April 2002, and a paper on the statistical method was submitted to the Journal of Nuclear Materials in September 2002. Benchmarking of the model against Japanese data and an older DRAGON irradiation is planned. Preliminary calculations of the stresses in a coated particle have been performed by the CEA using the ATLAS finite element model. This model and the material properties and constitutive relationships will be incorporated into a more general software platform termed Pleiades. Pleiades will be able to analyze different fuel forms at different scales (from particle to fuel body) and also handle the statistical variability in coated particle fuel. Diffusion couple experiments to study Ag and Pd transport through SiC were ...

  2. Use of natural geochemical tracers to improve reservoir simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Huseby, O.; Chatzichristos, C.; Sagen, J.; Muller, J.; Kleven, R.; Bennett, B.; Larter, S.; Stubos, A.K.; Adler, P.M.

    2005-01-01

This article introduces a methodology for integrating geochemical data in reservoir simulations to improve hydrocarbon reservoir models. The method exploits routine measurements of naturally occurring inorganic ion concentrations in hydrocarbon reservoir production wells, and uses the ions as non-partitioning water tracers. The methodology is demonstrated on a North Sea field case, using the field's reservoir model together with geochemical information (SO{sub 4}{sup 2-}, Mg{sup 2+}, K{sup +}, Ba{sup 2+}, Sr{sup 2+}, Ca{sup 2+}, Cl{sup -} concentrations) from the field's producers. From the data set we show that some of the ions behave almost as ideal sea-water tracers, i.e. without sorption to the matrix, ion exchange with the matrix, or scale formation with other ions in the formation water. Moreover, the dataset shows that ion concentrations in pure formation water vary according to formation. This information can be used to allocate produced water to specific water-producing zones in commingled production. Based on an evaluation of the applicability of the available data, one inorganic component, SO{sub 4}{sup 2-}, is used as a natural seawater tracer. Introducing SO{sub 4}{sup 2-} as a natural tracer in a tracer simulation has revealed a potential for improvements of the reservoir model. By tracking the injected seawater it was possible to identify underestimated fault lengths in the reservoir model. The demonstration confirms that geochemical data are valuable additional information for reservoir characterization, and shows that integration of geochemical data into reservoir simulation procedures can improve reservoir simulation models. (author)

  3. How similar are recognition memory and inductive reasoning?

    Science.gov (United States)

    Hayes, Brett K; Heit, Evan

    2013-07-01

    Conventionally, memory and reasoning are seen as different types of cognitive activities driven by different processes. In two experiments, we challenged this view by examining the relationship between recognition memory and inductive reasoning involving multiple forms of similarity. A common study set (members of a conjunctive category) was followed by a test set containing old and new category members, as well as items that matched the study set on only one dimension. The study and test sets were presented under recognition or induction instructions. In Experiments 1 and 2, the inductive property being generalized was varied in order to direct attention to different dimensions of similarity. When there was no time pressure on decisions, patterns of positive responding were strongly affected by property type, indicating that different types of similarity were driving recognition and induction. By comparison, speeded judgments showed weaker property effects and could be explained by generalization based on overall similarity. An exemplar model, GEN-EX (GENeralization from EXamples), could account for both the induction and recognition data. These findings show that induction and recognition share core component processes, even when the tasks involve flexible forms of similarity.

  4. QUALITY IMPROVEMENT MODEL OF NURSING EDUCATION IN MUHAMMADIYAH UNIVERSITIES TOWARD COMPETITIVE ADVANTAGE

    Directory of Open Access Journals (Sweden)

    Abdul Aziz Alimul Hidayat

    2017-06-01

Introduction: The quality of most (90.6%) nursing education programmes in East Java was still low (BAN-PT, 2012), because the quality improvement process in nursing education was generally conducted only partially (random performance improvement). A possible solution is to identify a proper quality improvement model for nursing education oriented toward competitive advantage. Method: This research used a survey to gather the data. The research sample was 16 Muhammadiyah Universities chosen using simple random sampling. The data were collected with questionnaires of 174 questions and a documentation study, and analysed with the Partial Least Square (PLS) technique. Result: The profile of nursing education departments at Muhammadiyah Universities in Indonesia showed about 10 years since establishment, accreditation level B, and on average more than three competing universities in the same city/regency. The quality improvement model analysis showed that quality improvement toward competitive advantage was directly affected by the focus on learning and the operational process through human resources management improvement; the information system also directly affected quality improvement, as well as the quality process components: leadership, human resources, focus on learning, and operational process. Human resources improvement was directly influenced by proper strategic planning, and strategic planning in turn by leadership. Thus, in improving the quality of nursing education, the leadership role of the department, a proper information system, and human resources management improvement must be implemented. Conclusion: The quality improvement model in nursing education was directly determined by the learning and operational process through human resources management, along with the information system, strategic planning, and leadership. The research findings could be developed into quality ...

  5. Endocrinology Telehealth Consultation Improved Glycemic Control Similar to Face-to-Face Visits in Veterans.

    Science.gov (United States)

    Liu, Winnie; Saxon, David R; McNair, Bryan; Sanagorski, Rebecca; Rasouli, Neda

    2016-09-01

Rates of diabetes for veterans who receive health care through the Veterans Health Administration are higher than rates in the general population. Furthermore, many veterans live in rural locations, far from Veterans Affairs (VA) hospitals, thus limiting their ability to readily seek face-to-face endocrinology care for diabetes. Telehealth (TH) technologies present an opportunity to improve access to specialty diabetes care for such patients; however, there is a lack of evidence regarding the ability of TH to improve glycemic control in comparison to traditional face-to-face consultations. This was a retrospective cohort study of all new endocrinology diabetes consultations at the Denver VA Medical Center over a 1-year period. A total of 189 patients were included in the analysis. In all, 85 patients had received face-to-face (FTF) endocrinology consultation for diabetes and 104 patients had received TH consultation. Subjects were mostly males (94.7%) and the mean age was 62.8 ± 10.1 years. HbA1c improved from 9.76% (9.40% to 10.11%) to 8.55% (8.20% to 8.91%) (P ...). Endocrinology TH consultations improved short-term glycemic control as effectively as traditional FTF visits in a veteran population with diabetes. © 2016 Diabetes Technology Society.

  6. Noise suppression for dual-energy CT via penalized weighted least-square optimization with similarity-based regularization

    Energy Technology Data Exchange (ETDEWEB)

    Harms, Joseph; Wang, Tonghe; Petrongolo, Michael; Zhu, Lei, E-mail: leizhu@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, The George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Niu, Tianye [Sir Run Run Shaw Hospital, Zhejiang University School of Medicine (China); Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang, 310016 (China)

    2016-05-15

Purpose: Dual-energy CT (DECT) expands applications of CT imaging through its capability to decompose CT images into material images. However, decomposition via direct matrix inversion leads to large noise amplification and limits quantitative use of DECT. The authors' group has previously developed a noise suppression algorithm via penalized weighted least-square optimization with edge-preservation regularization (PWLS-EPR). In this paper, the authors improve the method's performance using the same framework of penalized weighted least-square optimization but with similarity-based regularization (PWLS-SBR), which substantially enhances the quality of decomposed images by retaining a more uniform noise power spectrum (NPS). Methods: The design of PWLS-SBR is based on the fact that averaging pixels of similar materials gives a low-noise image. For each pixel, the authors calculate the similarity to other pixels in its neighborhood by comparing CT values. Using an empirical Gaussian model, the authors assign a high/low similarity value to a neighboring pixel if its CT value is close to/far from the CT value of the pixel of interest. These similarity values are organized in matrix form, such that multiplying the similarity matrix with the image vector reduces image noise. The similarity matrices are calculated on both high- and low-energy CT images and averaged. In PWLS-SBR, the authors include a regularization term to minimize the L-2 norm of the difference between the images without and with noise suppression via similarity matrix multiplication. By using all pixel information of the initial CT images rather than just those lying on or near edges, PWLS-SBR is superior to the previously developed PWLS-EPR, as supported by comparison studies on phantoms and a head-and-neck patient.
Results: On the line-pair slice of the Catphan©600 phantom, PWLS-SBR outperforms PWLS-EPR and retains a spatial resolution of 8 lp/cm, comparable to the original CT images, even at 90% noise reduction.
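The similarity-weighting idea can be sketched in isolation from the full PWLS framework: each pixel is replaced by a neighbourhood average whose weights follow an empirical Gaussian on the CT-value difference, so pixels of similar material are averaged while edges are preserved. The 1-D treatment and the `radius` and `sigma` values below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def similarity_smooth(image, radius=3, sigma=20.0):
    """Illustrative similarity-matrix smoothing: each pixel becomes a
    weighted average of its neighbours, with weights from an empirical
    Gaussian on the CT-value difference (row-normalised, so the operation
    is equivalent to multiplying a row-stochastic similarity matrix with
    the image vector)."""
    x = np.asarray(image, dtype=float)
    n = x.size
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        w = np.exp(-((x[lo:hi] - x[i]) ** 2) / (2.0 * sigma ** 2))
        out[i] = np.sum(w * x[lo:hi]) / np.sum(w)  # row-normalised weights
    return out
```

On a noisy piecewise-constant signal the weights across a strong edge are essentially zero, so the two plateaus are smoothed independently and the edge survives.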

  7. Image magnification based on similarity analogy

    International Nuclear Information System (INIS)

    Chen Zuoping; Ye Zhenglin; Wang Shuxun; Peng Guohua

    2009-01-01

To address the high time complexity of the decoding phase in traditional image enlargement methods based on fractal coding, a novel image magnification algorithm is proposed in this paper which has the advantage of iteration-free decoding, by exploiting the similarity analogy between an image and its zoom-out and zoom-in. A new pixel selection technique is also presented to further improve the performance of the proposed method. Furthermore, by combining some existing fractal zooming techniques, an efficient image magnification algorithm is obtained, which can provide image quality as good as the state of the art while greatly decreasing the time complexity of the decoding phase.

  8. Study for the design method of multi-agent diagnostic system to improve diagnostic performance for similar abnormality

    International Nuclear Information System (INIS)

    Minowa, Hirotsugu; Gofuku, Akio

    2014-01-01

Accidents at industrial plants cause large human, economic, and social losses. Recently, diagnostic methods using machine learning techniques such as support vector machines have been studied, with the expectation of detecting the occurrence of an abnormality in a plant early and correctly. It has been reported that such diagnostic machines achieve high accuracy in diagnosing the operating state of an industrial plant when a single abnormality occurs. However, each diagnostic machine in a multi-agent diagnostic system may misdiagnose similar abnormalities as the same abnormality as the number of abnormalities to diagnose increases. Consequently, a single diagnostic machine may show higher diagnostic performance than a multi-agent diagnostic system, because decision-making that takes misdiagnosis into account is difficult. Therefore, we study a design method for multi-agent diagnostic systems that diagnoses similar abnormalities correctly. The method aims at the automatic generation of a diagnostic system in which the generation process and the location of diagnostic machines are optimized to correctly diagnose similar abnormalities, which are evaluated from the similarity of process signals by statistical methods. This paper explains our design method and reports the results of evaluating our method as applied to process data of the fast-breeder reactor Monju.

  9. Improving Rice Modeling Success Rate with Ternary Non-structural Fertilizer Response Model.

    Science.gov (United States)

    Li, Juan; Zhang, Mingqing; Chen, Fang; Yao, Baoquan

    2018-06-13

Fertilizer response modelling is an important technical approach to realizing quantitative fertilization on rice. With the goal of solving the problems of the low success rate of the ternary quadratic polynomial model (TPFM) and expanding the model's applicability, this paper established a ternary non-structural fertilizer response model (TNFM) based on experimental results from N-, P- and K-fertilized rice fields. Our results showed that the TNFM significantly improved the modelling success rate by addressing the problems arising from the structural bias and multicollinearity of the TPFM. The results from 88 rice field trials in China indicated that the proportion of typical TNFMs that satisfy the general fertilizer response law of plant nutrition was 40.9%, while the analogous proportion of TPFMs was only 26.1%. The recommended fertilization showed a significant positive linear correlation between the two models, and the parameters N{sub 0}, P{sub 0} and K{sub 0}, which estimate the soil nutrient supply equivalents, can be used as better indicators of yield potential in plots where no N, P or K fertilizer was applied. The theoretical analysis showed that the new model has a higher fitting accuracy and a wider application range.
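For reference, the baseline TPFM that the paper improves on is an ordinary least-squares fit of a ternary quadratic polynomial in the three nutrient rates. The TNFM's non-structural form is not given in this abstract, so the sketch below shows only the conventional baseline model.

```python
import numpy as np

def fit_tpfm(N, P, K, yield_):
    """Least-squares fit of the ternary quadratic polynomial fertilizer
    response model (TPFM):
        y = b0 + b1*N + b2*P + b3*K + b4*N^2 + b5*P^2 + b6*K^2
              + b7*N*P + b8*N*K + b9*P*K
    Returns the coefficient vector (b0..b9)."""
    N, P, K, y = map(np.asarray, (N, P, K, yield_))
    X = np.column_stack([np.ones_like(N), N, P, K,
                         N**2, P**2, K**2, N*P, N*K, P*K])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

With a full factorial design (e.g. three levels per nutrient) the design matrix is full rank, and the fit recovers the generating coefficients exactly on noise-free data.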

  10. Improvement of airfoil trailing edge bluntness noise model

    Directory of Open Access Journals (Sweden)

    Wei Jun Zhu

    2016-02-01

In this article, airfoil trailing edge bluntness noise is investigated using both computational aero-acoustic and semi-empirical approaches. For engineering purposes, some of the most commonly used prediction tools for trailing edge noise are based on semi-empirical approaches, for example, the Brooks, Pope, and Marcolini airfoil noise prediction model (NASA Reference Publication 1218, 1989). It was found in a previous study that the Brooks, Pope, and Marcolini model tends to over-predict noise at high frequencies, and it was observed that this was caused by the model's inability to accurately predict noise from blunt trailing edges. For a more physical understanding of bluntness noise generation, in this study we also use an advanced in-house-developed high-order computational aero-acoustic technique to investigate the details associated with trailing edge bluntness noise. The results from the numerical model form the basis for an improved Brooks, Pope, and Marcolini trailing edge bluntness noise model.

  11. A TWO-STEP CLASSIFICATION APPROACH TO DISTINGUISHING SIMILAR OBJECTS IN MOBILE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    H. He

    2017-09-01

Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of the object, local features recording the fractional details are more discriminative and are applicable to object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped as one category in the first-step classification because of their mutual similarity compared with trees and vehicles. A finer classification of the lamp posts, street lights and traffic signs, based on the result of the first step, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step approach.
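The two-step scheme can be sketched with nearest-centroid classifiers standing in for the paper's bag-of-features classifiers: a coarse step separates a pole-like super-class (lamp post, street light, traffic sign) from trees and vehicles, and a fine step then distinguishes the similar pole-like objects. The class names and the feature vectors are illustrative.

```python
import numpy as np

class TwoStepClassifier:
    """Sketch of two-step classification: coarse super-class first,
    then a finer classifier only among the mutually similar classes."""
    def fit(self, features, labels,
            pole_like=('lamp_post', 'street_light', 'traffic_sign')):
        self.pole_like = set(pole_like)
        # step 1: merge the similar classes into one 'pole_like' super-class
        coarse = [l if l not in self.pole_like else 'pole_like' for l in labels]
        self.step1 = self._centroids(features, coarse)
        # step 2: fine classifier trained only on the pole-like samples
        fine_idx = [i for i, l in enumerate(labels) if l in self.pole_like]
        self.step2 = self._centroids([features[i] for i in fine_idx],
                                     [labels[i] for i in fine_idx])
        return self

    @staticmethod
    def _centroids(feats, labs):
        feats = np.asarray(feats, float)
        return {l: feats[[i for i, x in enumerate(labs) if x == l]].mean(axis=0)
                for l in set(labs)}

    def predict(self, f):
        f = np.asarray(f, float)
        nearest = lambda cents: min(cents, key=lambda l: np.sum((cents[l] - f) ** 2))
        label = nearest(self.step1)
        return nearest(self.step2) if label == 'pole_like' else label
```

Only samples routed into the pole-like super-class reach the finer second-step classifier, which mirrors the structure described in the abstract.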

  12. Improving Permafrost Hydrology Prediction Through Data-Model Integration

    Science.gov (United States)

    Wilson, C. J.; Andresen, C. G.; Atchley, A. L.; Bolton, W. R.; Busey, R.; Coon, E.; Charsley-Groffman, L.

    2017-12-01

The CMIP5 Earth System Models were unable to adequately predict the fate of the 16 GT of permafrost carbon in a warming climate due to poor representation of Arctic ecosystem processes. The DOE Office of Science Next Generation Ecosystem Experiment (NGEE-Arctic) project aims to reduce uncertainty in the Arctic carbon cycle and its impact on the Earth's climate system through improved representation of the coupled physical, chemical and biological processes that drive how much buried carbon will be converted to CO2 and CH4, how fast this will happen, which form will dominate, and the degree to which increased plant productivity will offset increased soil carbon emissions. These processes fundamentally depend on the permafrost thaw rate and its influence on surface and subsurface hydrology through thermal erosion, land subsidence and changes to groundwater flow pathways as soil, bedrock and alluvial pore ice and massive ground ice melt. LANL and its NGEE colleagues are co-developing data and models to better understand controls on permafrost degradation and improve prediction of the evolution of permafrost and its impact on Arctic hydrology. The LANL Advanced Terrestrial Simulator (ATS) was built using a state-of-the-art HPC software framework to enable the first fully coupled 3-dimensional surface-subsurface thermal-hydrology and land surface deformation simulations of the evolution of the physical Arctic environment. Here we show how field data, including hydrology, snow, vegetation, geochemistry and soil properties, are informing the development and application of the ATS to improve understanding of controls on permafrost stability and permafrost hydrology. The ATS is being used to inform parameterizations of complex coupled physical, ecological and biogeochemical processes for implementation in the DOE ACME land model, to better predict the role of changing Arctic hydrology in the global climate system. LA-UR-17-26566.

  13. Managing health care decisions and improvement through simulation modeling.

    Science.gov (United States)

    Forsberg, Helena Hvitfeldt; Aronsson, Håkan; Keller, Christina; Lindblad, Staffan

    2011-01-01

    Simulation modeling is a way to test changes in a computerized environment to give ideas for improvements before implementation. This article reviews research literature on simulation modeling as support for health care decision making. The aim is to investigate the experience and potential value of such decision support and quality of articles retrieved. A literature search was conducted, and the selection criteria yielded 59 articles derived from diverse applications and methods. Most met the stated research-quality criteria. This review identified how simulation can facilitate decision making and that it may induce learning. Furthermore, simulation offers immediate feedback about proposed changes, allows analysis of scenarios, and promotes communication on building a shared system view and understanding of how a complex system works. However, only 14 of the 59 articles reported on implementation experiences, including how decision making was supported. On the basis of these articles, we proposed steps essential for the success of simulation projects, not just in the computer, but also in clinical reality. We also presented a novel concept combining simulation modeling with the established plan-do-study-act cycle for improvement. Future scientific inquiries concerning implementation, impact, and the value for health care management are needed to realize the full potential of simulation modeling.

  14. Flooding Experiments and Modeling for Improved Reactor Safety

    International Nuclear Information System (INIS)

Solmos, M.; Hogan, K.J.; Vierow, K.

    2008-01-01

Countercurrent two-phase flow and 'flooding' phenomena in light water reactor systems are being investigated experimentally and analytically to improve the safety of current and future reactors. The aspects to be clarified are the effects of condensation and tube inclination on flooding in large-diameter tubes. The current project aims to improve the level of understanding of flooding mechanisms and to develop an analysis model for more accurate evaluations of flooding in the pressurizer surge line of a Pressurized Water Reactor (PWR). Interest in flooding has recently increased because Countercurrent Flow Limitation (CCFL) in the AP600 pressurizer surge line can affect the vessel refill rate following a small break LOCA, and because analysis of hypothetical severe accidents with the current flooding models in reactor safety codes shows that these models represent the largest uncertainty in analysis of steam generator tube creep rupture. During a hypothetical station blackout without auxiliary feedwater recovery, should the hot leg become voided, the pressurizer liquid will drain to the hot leg and flooding may occur in the surge line. The flooding model heavily influences the pressurizer emptying rate and the potential for surge line structural failure due to overheating and creep rupture. The air-water test results in vertical tubes are presented in this paper along with a semi-empirical correlation for the onset of flooding. The unique aspects of the study include careful experimentation on large-diameter tubes and an integrated program in which air-water testing provides benchmark knowledge and visualization data from which to conduct steam-water testing.

  15. Audio Query by Example Using Similarity Measures between Probability Density Functions of Features

    Directory of Open Access Journals (Sweden)

    Marko Helén

    2010-01-01

This paper proposes a query-by-example system for generic audio. We estimate the similarity of the example signal and the samples in the queried database by calculating the distance between the probability density functions (pdfs) of their frame-wise acoustic features. Since the features are continuous-valued, we propose to model them using Gaussian mixture models (GMMs) or hidden Markov models (HMMs). The models parametrize each sample efficiently and retain sufficient information for similarity measurement. To measure the distance between the models, we apply a novel Euclidean distance, approximations of the Kullback-Leibler divergence, and a cross-likelihood ratio test. The performance of the measures was tested in simulations where audio samples are automatically retrieved from a general audio database, based on the estimated similarity to a user-provided example. The simulations show that the distance between probability density functions is an accurate measure of similarity. Measures based on GMMs or HMMs are shown to produce better results than those of existing methods based on simpler statistics or histograms of the features. Good performance with low computational cost is obtained with the proposed Euclidean distance.
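For single diagonal Gaussians the Kullback-Leibler divergence used as a pdf distance has a closed form, which illustrates the idea; for the paper's GMMs and HMMs the divergence has no closed form and must be approximated, so a single Gaussian per sample stands in here purely for illustration.

```python
import numpy as np

def kl_diag_gauss(mu0, var0, mu1, var1):
    """Closed-form KL divergence KL(p0 || p1) between two diagonal
    Gaussians with means mu0, mu1 and variances var0, var1:
    0.5 * sum( log(var1/var0) + (var0 + (mu0-mu1)^2)/var1 - 1 )."""
    mu0, var0, mu1, var1 = map(np.asarray, (mu0, var0, mu1, var1))
    return 0.5 * np.sum(np.log(var1 / var0)
                        + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def symmetric_kl(mu0, var0, mu1, var1):
    # symmetrised divergence, usable as a retrieval distance
    return (kl_diag_gauss(mu0, var0, mu1, var1)
            + kl_diag_gauss(mu1, var1, mu0, var0))
```

Identical models give distance zero, and the distance grows with the separation between feature distributions, which is exactly the property a similarity-based retrieval ranking needs.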

  16. SimilarityExplorer: A visual inter-comparison tool for multifaceted climate data

    Science.gov (United States)

    J. Poco; A. Dasgupta; Y. Wei; W. Hargrove; C. Schwalm; R. Cook; E. Bertini; C. Silva

    2014-01-01

    Inter-comparison and similarity analysis to gauge consensus among multiple simulation models is a critical visualization problem for understanding climate change patterns. Climate models, specifically, Terrestrial Biosphere Models (TBM) represent time and space variable ecosystem processes, for example, simulations of photosynthesis and respiration, using algorithms...

  17. Improvement of a near wake model for trailing vorticity

    DEFF Research Database (Denmark)

    Pirrung, Georg; Hansen, Morten Hartvig; Aagaard Madsen, Helge

    2014-01-01

A near wake model, originally proposed by Beddoes, is further developed. The purpose of the model is to account for the radially dependent time constants of the fast aerodynamic response and to provide a tip loss correction. It is based on lifting line theory and models the downwash due to roughly the first 90 degrees of rotation. This restriction of the model to the near wake allows for using a computationally efficient indicial function algorithm. The aim of this study is to improve the accuracy of the downwash close to the root and tip of the blade and to decrease the sensitivity of the model to temporal discretization, both regarding numerical stability and quality of the results. The modified near wake model is coupled to an aerodynamics model, which consists of a blade element momentum model with dynamic inflow for the far wake and a 2D shed vorticity model that simulates the unsteady buildup ...
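A generic indicial-function update of the kind such near wake models rely on can be sketched as follows: rather than re-summing the whole trailed-vorticity history each step, the downwash is carried forward with an exponential decay per azimuthal step and fed with the new circulation increment. The decay constant `phi` and influence coefficient `a` are placeholders, not the model's calibrated values.

```python
import numpy as np

def indicial_downwash(dgamma, dbeta, phi=0.5, a=1.0):
    """Illustrative indicial-function recursion: the downwash w decays
    exponentially over each azimuthal step dbeta and is driven by the
    trailed-circulation increments dgamma. This is the O(1)-per-step
    update that makes restricting the model to the near wake cheap."""
    w = 0.0
    history = []
    for dg in dgamma:
        decay = np.exp(-dbeta / phi)
        w = w * decay + a * dg * (1.0 - decay)
        history.append(w)
    return np.array(history)
```

For a constant circulation increment the recursion rises monotonically to the steady-state value `a * dgamma`, mimicking the unsteady buildup of induction.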

  18. Using isotopes to improve impact and hydrological predictions of land-surface schemes in global climate models

    International Nuclear Information System (INIS)

    McGuffie, K.; Henderson-Sellers, A.

    2002-01-01

Global climate model (GCM) predictions of the impact of large-scale land-use change date back to 1984, as do the earliest isotopic studies of large-basin hydrology. Despite this coincidence in interest and geography, with both papers focused on the Amazon, there have been few studies that have tried to exploit isotopic information with the goal of improving climate model simulations of the land surface. In this paper we analyze isotopic results from the IAEA global database specifically with the goal of identifying signatures of potential value for improving global and regional climate model simulations of the land surface. Evaluation of climate model predictions of the impacts of deforestation of the Amazon has been shown to be significant by recent results indicating impacts occurring distant from the Amazon, i.e. tele-connections causing climate change elsewhere around the globe. It is suggested that these could be similar in magnitude and extent to the global impacts of ENSO events. Validation of GCM predictions associated with Amazonian deforestation is increasingly urgently required because of the additional effects of other aspects of climate change, particularly synergies between forest removal and greenhouse gas increases, especially CO{sub 2}. Here we examine three decades of deuterium excess distributions across the Amazon and use the results to evaluate the relative importance of the fractionating (partial evaporation) and non-fractionating (transpiration) processes. These results illuminate GCM scenarios of importance to the regional climate and hydrology: (i) the possible impact of increased stomatal resistance in the rainforest caused by higher levels of atmospheric CO{sub 2} [4]; and (ii) the consequences of the combined effects of deforestation and global warming on the region's climate and hydrology.
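The deuterium excess used here has the standard Dansgaard definition, d = delta-D - 8 * delta-18O (in permil); transpiration returns water essentially unfractionated while partial evaporation shifts d, which is what makes the quantity useful for separating the two fluxes.

```python
def deuterium_excess(delta_d, delta_18o):
    """Standard deuterium-excess definition: d = dD - 8 * d18O, permil.
    Values near 10 permil are characteristic of the global meteoric
    water line; departures flag kinetic (partial-evaporation) effects."""
    return delta_d - 8.0 * delta_18o
```

For example, a sample with delta-D = -40 permil and delta-18O = -6.25 permil lies exactly on the global meteoric water line (d = 10 permil).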

  19. Improved Hydrology over Peatlands in a Global Land Modeling System

    Science.gov (United States)

    Bechtold, M.; Delannoy, G.; Reichle, R.; Koster, R.; Mahanama, S.; Roose, Dirk

    2018-01-01

Peatlands of the Northern Hemisphere represent an important carbon pool that mainly accumulated since the last ice age under permanently wet conditions in specific geological and climatic settings. The carbon balance of peatlands is closely coupled to water table dynamics. Consequently, the future carbon balance over peatlands is strongly dependent on how hydrology in peatlands will react to changing boundary conditions, e.g. due to climate change or regional water level drawdown of connected aquifers or streams. Global land surface modeling over organic-rich regions can provide valuable global-scale insights on where and how peatlands are in transition due to changing boundary conditions. However, current global land surface models are not able to reproduce typical hydrological dynamics in peatlands well. We implemented specific structural and parametric changes to account for key hydrological characteristics of peatlands in NASA's GEOS-5 Catchment Land Surface Model (CLSM, Koster et al. 2000). The main modifications pertain to the modeling of partial inundation and the definition of peatland-specific runoff and evapotranspiration schemes. We ran a set of simulations on a high-performance cluster using different CLSM configurations and validated the results with a newly compiled global in-situ dataset of water table depths in peatlands. The results demonstrate that an update of soil hydraulic properties for peat soils alone does not improve the performance of CLSM over peatlands. However, structural model changes for peatlands are able to improve the skill metrics for water table depth. The validation results for the water table depth indicate a reduction of the bias from 2.5 to 0.2 m, an improvement of the temporal correlation coefficient from 0.5 to 0.65, and from 0.4 to 0.55 for the anomalies. Our validation dataset includes both bogs (rain-fed) and fens (ground and/or surface water influence) and reveals that the metrics improved less for fens.
In ...
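The skill metrics quoted above (bias, temporal correlation, and anomaly correlation) can be computed as below; the anomaly series is obtained by removing a repeating climatology from both time series, assumed monthly here.

```python
import numpy as np

def skill_metrics(model, obs, period=12):
    """Bias, temporal correlation r, and anomaly correlation r_anom
    between a model and an observed time series. `period` is the assumed
    length of the repeating climatology (e.g. 12 for monthly data)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = np.mean(model - obs)
    r = np.corrcoef(model, obs)[0, 1]

    def anomalies(x):
        # subtract the mean seasonal cycle from the series
        clim = np.array([x[i::period].mean() for i in range(period)])
        return x - np.tile(clim, len(x) // period + 1)[: len(x)]

    r_anom = np.corrcoef(anomalies(model), anomalies(obs))[0, 1]
    return bias, r, r_anom
```

A model series that tracks the observations up to a constant offset scores zero on neither bias nor skill: the bias equals the offset while both correlations remain perfect.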

  20. Application of random number generators in genetic algorithms to improve rainfall-runoff modelling

    Science.gov (United States)

    Chlumecký, Martin; Buchtele, Josef; Richta, Karel

    2017-10-01

    The efficient calibration of rainfall-runoff models is a difficult task, even for experienced hydrologists, so fast, high-quality model calibration is a valuable improvement. This paper describes a novel methodology and software for optimising rainfall-runoff modelling using a genetic algorithm (GA) with a newly designed random number generator (HRNG), which is the core of the optimisation. The GA estimates model parameters using evolutionary principles, which requires a high-quality random number generator. The new HRNG generates random numbers based on hydrological information and provides better numbers than pure software generators. The GA enhances model calibration considerably, and the goal is to optimise the calibration of the model with a minimum of user interaction. This article focuses on improving the internal structure of the GA, which is shielded from the user. Our results indicate that the HRNG provides a stable trend in the output quality of the model across various configurations of the GA. In contrast to previous research, the HRNG speeds up model calibration and improves rainfall-runoff modelling.

  1. Improvement of a Robotic Manipulator Model Based on Multivariate Residual Modeling

    Directory of Open Access Journals (Sweden)

    Serge Gale

    2017-07-01

    Full Text Available A new method is presented for extending a dynamic model of a six-degrees-of-freedom robotic manipulator. A non-linear multivariate calibration of input–output training data from several typical motion trajectories is carried out with the aim of predicting the model's systematic output error at time (t + 1) from known input references up to and including time (t). A new partial least squares regression (PLSR) based method, nominal PLSR with interactions, was developed and used to handle unmodelled non-linearities. The performance of the new method is compared with least squares (LS). Different cross-validation schemes were compared in order to assess the sampling of the state space based on conventional trajectories. The method developed in the paper can be used as a fault-monitoring mechanism and an early-warning system for sensor failure. The results show that the suggested method improves the trajectory-tracking performance of the robotic manipulator by extending the initial dynamic model of the manipulator.

  2. A Novel Approach to Semantic Similarity Measurement Based on a Weighted Concept Lattice: Exemplifying Geo-Information

    Directory of Open Access Journals (Sweden)

    Jia Xiao

    2017-11-01

    Full Text Available The measurement of semantic similarity has been widely recognized as playing a fundamental and key role in information science and information systems. Although various models have been proposed to measure semantic similarity, these models cannot effectively quantify the weights of the relevant factors that impact the judgement of semantic similarity, such as the attributes of concepts, the application context, and the concept hierarchy. In this paper, we propose a novel approach that comprehensively considers the effects of these factors on semantic similarity judgement, which we name semantic similarity measurement based on a weighted concept lattice (SSMWCL). A feature model and a network model are integrated in SSMWCL. Based on the feature model, the combined weight of each attribute of the concepts is calculated by merging its information entropy and inclusion-degree importance in a specific application context. By establishing the weighted concept lattice, the relative hierarchical depths of the concepts being compared are computed according to the principle of the network model. The integration of the feature model and the network model enables SSMWCL to take account of differences between concepts more comprehensively in semantic similarity measurement. Additionally, a workflow of SSMWCL is designed to demonstrate these procedures, and a case study on geo-information is conducted to assess the approach.

  3. Developing models of how cognitive improvements change functioning: Mediation, moderation and moderated mediation

    Science.gov (United States)

    Wykes, Til; Reeder, Clare; Huddy, Vyv; Taylor, Rumina; Wood, Helen; Ghirasim, Natalia; Kontis, Dimitrios; Landau, Sabine

    2012-01-01

    Background Cognitive remediation (CRT) affects functioning, but the extent and type of cognitive improvements necessary are unknown. Aim To develop and test models of how cognitive improvement transfers to work behaviour, using data from a current service. Method Participants (N = 49) with a support worker and a paid or voluntary job were offered CRT in a Phase 2 single-group design with three assessments: baseline, post-therapy and follow-up. Working memory, cognitive flexibility, planning and work outcomes were assessed. Results Three models were tested (mediation — cognitive improvements drive functioning improvement; moderation — post-treatment cognitive level affects the impact of CRT on functioning; moderated mediation — cognition drives functioning improvements only after a certain level is achieved). There was evidence of mediation (planning improvement was associated with improved work quality). There was no evidence that cognitive flexibility (total Wisconsin Card Sorting Test errors) and working memory (Wechsler Adult Intelligence Scale III digit span) mediated work functioning, despite significant effects. There was some evidence of moderated mediation for planning improvement if participants had poorer memory and/or made fewer WCST errors. The total CRT effect on work quality was d = 0.55, but the indirect (planning-mediated) CRT effect was d = 0.082. Conclusion Planning improvements led to better work quality but accounted for only a small proportion of the total effect on work outcome. Other specific and non-specific effects of CRT and the work programme are likely to account for some of the remaining effect. This is the first time such complex models have been tested, and future Phase 3 studies need to further test mediation and moderated mediation models. PMID:22503640
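
    The mediation logic described above (an indirect effect computed as the product of the X→M and M→Y path coefficients) can be sketched with ordinary least squares; the synthetic data and effect sizes below are hypothetical illustrations, not values from the study:

```python
import numpy as np

def simple_mediation(x, m, y):
    """Product-of-coefficients mediation: a is the X->M slope, b is the
    M->Y slope controlling for X; the indirect effect is a * b."""
    a = np.polyfit(x, m, 1)[0]                      # path X -> M
    X = np.column_stack([np.ones_like(x), x, m])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)   # Y on X and M jointly
    direct, b = coefs[1], coefs[2]
    total = np.polyfit(x, y, 1)[0]                  # path X -> Y alone
    return {"indirect": a * b, "direct": direct, "total": total}

# Hypothetical synthetic data: x = cognitive improvement, m = planning score,
# y = work quality. True indirect effect = 0.5 * 0.8 = 0.4.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
m = 0.5 * x + 0.05 * rng.normal(size=500)
y = 0.8 * m + 0.2 * x + 0.05 * rng.normal(size=500)
effects = simple_mediation(x, m, y)
```

    For OLS regressions that all include intercepts, total = direct + indirect holds exactly, which is the decomposition the abstract reports (total d = 0.55 versus indirect d = 0.082).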

  4. Prioritization of candidate disease genes by topological similarity between disease and protein diffusion profiles.

    Science.gov (United States)

    Zhu, Jie; Qin, Yufang; Liu, Taigang; Wang, Jun; Zheng, Xiaoqi

    2013-01-01

    Identification of gene-phenotype relationships is a fundamental challenge in clinical research. Based on the observation that genes causing the same or similar phenotypes tend to lie close to each other in the protein-protein interaction network, many network-based approaches have been proposed, based on different underlying models. A recent comparative study showed that diffusion-based methods achieve state-of-the-art predictive performance. In this paper, a new diffusion-based method is proposed to prioritize candidate disease genes. The diffusion profile of a disease is defined as the stationary distribution over candidate genes of a random walk with restart in which similarities between phenotypes are incorporated. Candidate disease genes are then prioritized by comparing their diffusion profiles with that of the disease. The effectiveness of our method was demonstrated through leave-one-out cross-validation against control genes from artificial linkage intervals and randomly chosen genes. A comparative study showed that our method achieves improved performance compared with some classical diffusion-based methods. To further illustrate the method, we used our algorithm to predict new causal genes for 16 multifactorial diseases, including prostate cancer and Alzheimer's disease, and the top predictions were in good agreement with literature reports. Our study indicates that the integration of multiple information sources, especially phenotype similarity profile data, and the introduction of a global similarity measure between disease and gene diffusion profiles are helpful for prioritizing candidate disease genes. Programs and data are available upon request.
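
    As an illustration of the diffusion-profile idea, the following sketch computes the stationary distribution of a random walk with restart on a toy network (the graph, restart probability, and seed node are hypothetical, and the paper's phenotype-similarity weighting is omitted):

```python
import numpy as np

def diffusion_profile(adj, seed, restart=0.3, tol=1e-10, max_iter=1000):
    """Stationary distribution of a random walk with restart on a graph.
    adj: symmetric adjacency matrix; seed: restart distribution."""
    # Column-normalize the adjacency matrix to get transition probabilities.
    W = adj / adj.sum(axis=0, keepdims=True)
    p = seed.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * W @ p + restart * seed
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p_next

# Toy 4-node path graph; the walk restarts at node 0 (the "disease seed").
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
seed = np.array([1.0, 0.0, 0.0, 0.0])
profile = diffusion_profile(adj, seed)
```

    Candidate genes would then be ranked by the similarity (e.g. correlation) between their own diffusion profiles and the disease profile.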

  5. Multi-scale structural similarity index for motion detection

    Directory of Open Access Journals (Sweden)

    M. Abdel-Salam Nasr

    2017-07-01

    Full Text Available One of the most recent approaches to measuring image quality is the structural similarity index (SSI). This paper presents a novel algorithm for motion detection in videos based on a multi-scale structural similarity index (MS-SSIM). The MS-SSIM approach models image luminance, contrast and structure at multiple scales. MS-SSIM yields much better performance than the single-scale SSI approach, but at the cost of a relatively lower processing speed. The major advantages of the presented algorithm are its higher detection accuracy and its quasi real-time processing speed.
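
    A minimal sketch of the multi-scale idea follows, using a global (non-windowed) SSIM at each scale for brevity; the constants and scale count are illustrative, not the paper's settings:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    # Single-scale SSIM computed over the whole image (no sliding window).
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def downsample2(img):
    # 2x2 block averaging; assumes even height and width.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def ms_ssim(x, y, scales=3):
    # Geometric mean of per-scale SSIM scores (equal scale weights assumed;
    # per-scale scores are assumed positive, as for similar frames).
    score = 1.0
    for _ in range(scales):
        score *= ssim_global(x, y)
        x, y = downsample2(x), downsample2(y)
    return score ** (1.0 / scales)
```

    For motion detection, consecutive frames would be compared and a low MS-SSIM score flagged as motion.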

  6. Prediction of earth rotation parameters based on improved weighted least squares and autoregressive model

    Directory of Open Access Journals (Sweden)

    Sun Zhangzhen

    2012-08-01

    Full Text Available In this paper, an improved weighted least squares (WLS) method, together with an autoregressive (AR) model, is proposed to improve the prediction accuracy of earth rotation parameters (ERP). Four weighting schemes are developed, and the optimal power e for determining the weight elements is studied. The results show that the improved WLS-AR model can improve ERP prediction accuracy effectively, and that different weighting schemes should be chosen for different ERP prediction intervals.
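
    The core of a weighted least squares fit with a power-law weighting scheme can be sketched as follows; the series, design matrix, and exponent e = 2 are hypothetical, and the paper's four weighting schemes are not reproduced here:

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Solve the normal equations (X^T W X) beta = X^T W y
    for a diagonal weight matrix W with diagonal w."""
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

# Hypothetical example: weight recent epochs more heavily with w_i ~ i^e,
# echoing the tunable power e; the "ERP" series here is a noiseless line.
t = np.arange(1, 51, dtype=float)
y = 2.0 + 0.5 * t
X = np.column_stack([np.ones_like(t), t])
w = t ** 2.0  # e = 2: later epochs dominate the fit
beta = weighted_least_squares(X, y, w)
```

    With noiseless data any weighting recovers the true coefficients; with real ERP series, the choice of e trades off responsiveness to recent data against noise suppression.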

  7. Improved lumped models for transient combined convective and radiative cooling of multi-layer composite slabs

    International Nuclear Information System (INIS)

    An Chen; Su Jian

    2011-01-01

    Improved lumped parameter models were developed for the transient heat conduction in multi-layer composite slabs subjected to combined convective and radiative cooling. The improved lumped models were obtained through two-point Hermite approximations for integrals. Transient combined convective and radiative cooling of three-layer composite slabs was analyzed to illustrate the applicability of the proposed lumped models, with respect to different values of the Biot numbers, the radiation-conduction parameter, the dimensionless thermal contact resistances, the dimensionless thickness, and the dimensionless thermal conductivity. It was shown by comparison with the numerical solution of the original distributed parameter model that the higher-order lumped model (H1,1/H0,0 approximation) yielded a significant improvement in average temperature prediction over the classical lumped model. In addition, the higher-order (H1,1/H0,0) model was applied to analyze the transient heat conduction problem of steel-concrete-steel sandwich plates. - Highlights: → Improved lumped models for convective-radiative cooling of multi-layer slabs were developed. → Two-point Hermite approximations for integrals were employed. → Significant improvement over the classical lumped model was achieved. → The model can be applied at high Biot numbers and high radiation-conduction parameters. → Transient heat conduction in steel-concrete-steel sandwich pipes was analyzed as an example.

  8. Recent Improvements to the Calibration Models for RXTE/PCA

    Science.gov (United States)

    Jahoda, K.

    2008-01-01

    We are updating the calibration of the PCA to correct for slow variations, primarily in energy to channel relationship. We have also improved the physical model in the vicinity of the Xe K-edge, which should increase the reliability of continuum fits above 20 keV. The improvements to the matrix are especially important to simultaneous observations, where the PCA is often used to constrain the continuum while other higher resolution spectrometers are used to study the shape of lines and edges associated with Iron.

  9. A Modified LS+AR Model to Improve the Accuracy of the Short-term Polar Motion Prediction

    Science.gov (United States)

    Wang, Z. W.; Wang, Q. X.; Ding, Y. Q.; Zhang, J. J.; Liu, S. S.

    2017-03-01

    There are two problems with the LS (Least Squares) + AR (AutoRegressive) model in polar motion forecasting: the residuals of the LS fit are reasonable within the fitting interval but poor under LS extrapolation; and the LS fitting residual sequence is non-linear, so it is unsuitable to build the AR model for the residuals to be forecast from the residual sequence before the forecast epoch. In this paper, we address these two problems in two steps. First, constraints are added at the two endpoints of the LS fitting data to fix them on the LS fitting curve, so that the fitted values near the two endpoints are very close to the observations. Secondly, we select the interpolation residual sequence of an inward LS fitting curve, which has a variation trend similar to that of the LS extrapolation residual sequence, as the modeling object of the AR residual forecast. Worked examples show that this solution effectively improves the short-term polar motion prediction accuracy of the LS+AR model. In addition, comparisons with the RLS (Robustified Least Squares) + AR, RLS + ARIMA (AutoRegressive Integrated Moving Average), and LS + ANN (Artificial Neural Network) forecast models confirm the feasibility and effectiveness of the solution for polar motion forecasting. The results, especially for polar motion forecasts at 1-10 days, show that the forecast accuracy of the proposed model reaches the level of the best published results.
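
    A minimal LS+AR sketch, assuming a linear trend plus an annual harmonic for the LS part and an AR(1) model on the fit residuals; the paper's endpoint constraints and inward-fit residual selection are omitted:

```python
import numpy as np

def ls_plus_ar1(series, t, horizon):
    """Fit a linear + annual-harmonic LS model, fit an AR(1) to the LS
    residuals, and combine both to extrapolate `horizon` steps ahead."""
    def design(tt):
        w = 2 * np.pi / 365.25  # annual period in days (assumed)
        return np.column_stack([np.ones_like(tt), tt,
                                np.sin(w * tt), np.cos(w * tt)])
    beta, *_ = np.linalg.lstsq(design(t), series, rcond=None)
    resid = series - design(t) @ beta
    # AR(1) coefficient from lag-1 regression of the residuals.
    phi = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])
    steps = np.arange(1, horizon + 1)
    ar_forecast = resid[-1] * phi ** steps
    return design(t[-1] + steps) @ beta + ar_forecast

# Hypothetical daily series with trend, annual term and small noise.
rng = np.random.default_rng(0)
t = np.arange(400.0)
def truth(tt):
    return 2.0 + 0.01 * tt + 0.3 * np.sin(2 * np.pi * tt / 365.25)
series = truth(t) + 0.01 * rng.normal(size=t.size)
forecast = ls_plus_ar1(series, t, horizon=10)
```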

  10. Modeling Recognition Memory Using the Similarity Structure of Natural Input

    Science.gov (United States)

    Lacroix, Joyca P. W.; Murre, Jaap M. J.; Postma, Eric O.; van den Herik, H. Jaap

    2006-01-01

    The natural input memory (NAM) model is a new model for recognition memory that operates on natural visual input. A biologically informed perceptual preprocessing method takes local samples (eye fixations) from a natural image and translates these into a feature-vector representation. During recognition, the model compares incoming preprocessed…

  11. Personality traits across countries: Support for similarities rather than differences.

    Science.gov (United States)

    Kajonius, Petri; Mac Giolla, Erik

    2017-01-01

    In the current climate of migration and globalization, the personality characteristics of individuals from different countries have received growing interest. Previous research has established reliable differences in personality traits across countries. The present study extends this research by examining 30 personality traits in 22 countries, based on an online survey in English with large national samples (total N = 130,602). The instrument used was a comprehensive, open-source measure of the Five Factor Model (FFM), the IPIP-NEO-120. We postulated that differences in personality traits between countries would be small, labeling this the Similarities Hypothesis. We found support for it in three stages. First, similarities across countries were observed in the model fits for each of the five personality trait structures. Second, within-country sex differences for the five personality traits showed similar patterns across countries. Finally, the overall contribution of country to personality traits was less than 2%. In other words, the relationship between a country and an individual's personality traits, however interesting, is small. We conclude that the most parsimonious explanation for the current and past findings is a cross-country personality Similarities Hypothesis.

  12. K-Line Patterns’ Predictive Power Analysis Using the Methods of Similarity Match and Clustering

    Directory of Open Access Journals (Sweden)

    Lv Tao

    2017-01-01

    Full Text Available Stock price prediction based on K-line patterns is the essence of candlestick technical analysis. However, there is some dispute in academia over whether K-line patterns have predictive power. To help resolve the debate, this paper uses the data mining methods of pattern recognition, pattern clustering, and pattern knowledge mining to study the predictive power of K-line patterns. A similarity match model and a nearest-neighbor clustering algorithm are proposed for solving the problems of similarity matching and clustering of K-line series, respectively. The experiment tests the predictive power of the Three Inside Up and Three Inside Down patterns on a test dataset of K-line series data for the Shanghai 180 index component stocks over the latest 10 years. Experimental results show that (1) the predictive power of a pattern varies a great deal across different shapes, and (2) each of the existing K-line patterns requires further classification based on shape features to improve prediction performance.
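
    One simple way to make K-line series comparable for similarity matching is to normalize each candle by its open price so that patterns are scale-free; the feature choice and the example patterns below are hypothetical, not the paper's similarity match model:

```python
import numpy as np

def kline_features(ohlc):
    """Normalize each candle by its open so patterns are scale-free.
    ohlc: (n, 4) array of open, high, low, close per candle."""
    o = ohlc[:, :1]
    return (ohlc - o) / o  # relative body/shadow geometry per candle

def pattern_distance(a, b):
    """Euclidean distance between two equal-length K-line series
    in normalized feature space (smaller = more similar)."""
    return np.linalg.norm(kline_features(a) - kline_features(b))

# Two hypothetical 3-candle patterns: the second is the first scaled by 10x,
# so their normalized distance is ~0, while the reversed pattern is far away.
p1 = np.array([[10.0, 10.5,  9.8, 10.4],
               [10.4, 10.6, 10.1, 10.2],
               [10.2, 10.9, 10.2, 10.8]])
p2 = 10.0 * p1
```

    A nearest-neighbor clustering step would then group series whose pairwise `pattern_distance` falls below a chosen threshold.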

  13. Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency

    Science.gov (United States)

    Aikens, Kurt; Craft, Kyle; Redman, Andrew

    2015-11-01

    The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its potential to reduce simulation costs. This assumption is not generally applicable to wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero-pressure-gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall-model methodology and to experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed, as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  14. Analyzing and leveraging self-similarity for variable resolution atmospheric models

    Science.gov (United States)

    O'Brien, Travis; Collins, William

    2015-04-01

    Variable resolution modeling techniques are rapidly becoming a popular strategy for achieving high resolution in a global atmospheric model without the computational cost of global high resolution. However, recent studies have demonstrated a variety of resolution-dependent, and seemingly artificial, features. We argue that the scaling properties of the atmosphere are key to understanding how the statistics of an atmospheric model should change with resolution, and we provide two examples. In the first, we show that the scaling properties of the cloud number distribution define how the ratio of resolved to unresolved clouds should increase with resolution. We show that the loss of resolved clouds in the high-resolution region of variable resolution simulations with the Community Atmosphere Model version 4 (CAM4) is an artifact of the model's treatment of condensed water (an artifact that is significantly reduced in CAM5). In the second example, we show that the scaling properties of the horizontal velocity field, combined with the incompressibility assumption, necessarily result in an intensification of vertical mass flux as resolution increases. We show that such an increase is present in a wide variety of models, including CAM and the regional climate models of the ENSEMBLES intercomparison. We present theoretical arguments linking this increase to the intensification of precipitation with increasing resolution.

  15. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    Science.gov (United States)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on sequential sampling is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process for the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficients of an aircraft are provided to demonstrate the approximation capability of the proposed approach, compared against three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.

  16. Improved longitudinal gray and white matter atrophy assessment via application of a 4-dimensional hidden Markov random field model.

    Science.gov (United States)

    Dwyer, Michael G; Bergsland, Niels; Zivadinov, Robert

    2014-04-15

    SIENA and similar techniques have demonstrated the utility of performing "direct" measurements as opposed to post-hoc comparison of cross-sectional data for the measurement of whole brain (WB) atrophy over time. However, gray matter (GM) and white matter (WM) atrophy are now widely recognized as important components of neurological disease progression, and are being actively evaluated as secondary endpoints in clinical trials. Direct measures of GM/WM change with advantages similar to SIENA have been lacking. We created a robust and easily-implemented method for direct longitudinal analysis of GM/WM atrophy, SIENAX multi-time-point (SIENAX-MTP). We built on the basic halfway-registration and mask composition components of SIENA to improve the raw output of FMRIB's FAST tissue segmentation tool. In addition, we created LFAST, a modified version of FAST incorporating a 4th dimension in its hidden Markov random field model in order to directly represent time. The method was validated by scan-rescan, simulation, comparison with SIENA, and two clinical effect size comparisons. All validation approaches demonstrated improved longitudinal precision with the proposed SIENAX-MTP method compared to SIENAX. For GM, simulation showed better correlation with experimental volume changes (r=0.992 vs. 0.941), scan-rescan showed lower standard deviations (3.8% vs. 8.4%), correlation with SIENA was more robust (r=0.70 vs. 0.53), and effect sizes were improved by up to 68%. Statistical power estimates indicated a potential drop of 55% in the number of subjects required to detect the same treatment effect with SIENAX-MTP vs. SIENAX. The proposed direct GM/WM method significantly improves on the standard SIENAX technique by trading a small amount of bias for a large reduction in variance, and may provide more precise data and additional statistical power in longitudinal studies. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. MODIS Data Assimilation in the CROPGRO model for improving soybean yield estimations

    Science.gov (United States)

    Richetti, J.; Monsivais-Huertero, A.; Ahmad, I.; Judge, J.

    2017-12-01

    Soybean is one of the main agricultural commodities in the world, so better estimates of its production are important. Improving soybean crop models in Brazil is crucial for a better understanding of the soybean market and for enhancing decision making: Brazil is the second-largest soybean producer in the world, and the state of Parana accounts for almost 20% of that production and would by itself be the fourth-largest soybean producer in the world. Data assimilation techniques provide a method to improve the spatio-temporal continuity of crop estimates through the integration of remotely sensed observations into crop growth models. This study aims to use MODIS EVI to improve DSSAT-CROPGRO soybean yield estimates for the state of Parana, southern Brazil. The method uses an ensemble Kalman filter to assimilate the combined MODIS Terra and Aqua products (MOD13Q1 and MYD13Q1) into the CROPGRO model, improving agricultural production estimates by updating light interception data over time. Results will be validated against monitored commercial farms for the period 2013-2014.
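
    The analysis step of a stochastic ensemble Kalman filter, the general mechanism used here to nudge modeled state toward MODIS observations, can be sketched as follows; the one-dimensional state, identity observation operator, and noise levels are illustrative assumptions, not the study's configuration:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_var, rng):
    """Stochastic EnKF analysis step: move each ensemble member toward a
    perturbed observation, weighted by the ensemble-estimated Kalman gain."""
    n = ensemble.shape[1]                      # ensemble size
    Hx = obs_operator(ensemble)                # predicted observations
    dx = ensemble - ensemble.mean(axis=1, keepdims=True)
    dy = Hx - Hx.mean(axis=1, keepdims=True)
    Pxy = dx @ dy.T / (n - 1)                  # state-obs covariance
    Pyy = dy @ dy.T / (n - 1) + obs_var * np.eye(Hx.shape[0])
    K = Pxy @ np.linalg.inv(Pyy)               # Kalman gain
    perturbed = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), Hx.shape)
    return ensemble + K @ (perturbed - Hx)

# Toy one-dimensional state (e.g. a canopy variable) observed directly;
# the identity operator stands in for the state-to-EVI relationship.
rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, size=(1, 200))   # prior ensemble
obs = np.array([2.0])                         # "observed EVI"
posterior = enkf_update(prior, obs, lambda e: e, obs_var=0.1, rng=rng)
```

    The posterior ensemble mean moves toward the observation, and its spread shrinks, which is the behavior the assimilation relies on when correcting CROPGRO's light interception state.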

  18. The Agriculture Model Intercomparison and Improvement Project (AgMIP) (Invited)

    Science.gov (United States)

    Rosenzweig, C.

    2010-12-01

    The Agricultural Model Intercomparison and Improvement Project (AgMIP) is a distributed climate-scenario simulation exercise for historical model intercomparison and future climate change conditions with participation of multiple crop and world agricultural trade modeling groups around the world. The goals of AgMIP are to improve substantially the characterization of risk of hunger and world food security due to climate change and to enhance adaptation capacity in both developing and developed countries. Historical period results will spur model improvement and interaction among major modeling groups, while future period results will lead directly to tests of adaptation and mitigation strategies across a range of scales. AgMIP will consist of a multi-scale impact assessment utilizing the latest methods for climate and agricultural scenario generation. Scenarios and modeling protocols will be distributed on the web, and multi-model results will be collated and analyzed to ensure the widest possible coverage of agricultural crops and regions. AgMIP will place regional changes in agricultural production in a global context that reflects new trading opportunities, imbalances, and shortages in world markets resulting from climate change and other driving forces for food supply. Such projections are essential inputs from the Vulnerability, Impacts, and Adaptation (VIA) research community to the Intergovernmental Panel on Climate Change Fifth Assessment (AR5), now underway, and the UN Framework Convention on Climate Change. They will set the context for local-scale vulnerability and adaptation studies, supply test scenarios for national-scale development of trade policy instruments, provide critical information on changing supply and demand for water resources, and elucidate interactive effects of climate change and land use change. AgMIP will not only provide crucially-needed new global estimates of how climate change will affect food supply and hunger in the

  19. Effects of Self-Similar Collisions in the Theory of Pressure Broadening and Shift

    International Nuclear Information System (INIS)

    Kharintsev, S.S.; Salakhov, M.Kh.

    1999-01-01

    In the present paper, a self-similar collision model is developed in terms of fractional Brownian motion. Within this framework, collisions are assumed to have a non-Markovian character, so possible collisional memory effects are taken into account. Applying the self-similar collision model to the motion of the radiator, together with the Anderson-Talman phase-shift theory of collisional broadening, a general formula for the correlation function in the impact limit is derived. (author)

  20. An Improved MUSIC Model for Gibbsite Surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Scott C.; Bickmore, Barry R.; Tadanier, Christopher J.; Rosso, Kevin M.

    2004-06-01

    Here we use gibbsite as a model system with which to test a recently published bond-valence method for predicting intrinsic pKa values of surface functional groups on oxides. At issue is whether the method is adequate when valence parameters for the functional groups are derived from ab initio structure optimization of vacuum-terminated surfaces. If not, ab initio molecular dynamics (AIMD) simulations of solvated surfaces (which are much more computationally expensive) will have to be used. To decide this, we evaluated the extant gibbsite potentiometric titration data for which some estimate of edge and basal surface area was available. Applying BET and recently developed atomic force microscopy methods, we found that most of these data sets were flawed, in that their surface area estimates were probably wrong; similarly, there may have been problems with many of the titration procedures. However, one data set was adequate on both counts, and we applied our method of intrinsic surface pKa prediction to fitting a MUSIC model to these data with considerable success: several features of the titration data were predicted well. However, the model fit was certainly not perfect, and we experienced some difficulties optimizing highly charged, vacuum-terminated surfaces. We therefore conclude that AIMD simulations of solvated surfaces are probably needed to adequately predict intrinsic pKa values for surface functional groups.