WorldWideScience

Sample records for model similar improvements

  1. Modeling of similar economies

    Directory of Open Access Journals (Sweden)

    Sergey B. Kuznetsov

    2017-06-01

    Full Text Available Objective: to obtain dimensionless criteria (economic indices) characterizing the national economy and not depending on its size. Methods: mathematical modeling, the theory of dimensions, and processing of statistical data. Results: based on differential equations describing the national economy with account of the resistance of the economic environment, two dimensionless criteria are obtained which allow comparing economies regardless of their sizes. Using the theory of dimensions, we show that the obtained indices are not accidental. We demonstrate the application of the obtained dimensionless criteria to analyzing the behavior of certain countries' economies. Scientific novelty: dimensionless criteria (economic indices) are obtained which allow comparing economies regardless of their sizes and analyzing dynamic changes in the economies over time. Practical significance: the obtained results can be used for dynamic and comparative analysis of different countries' economies regardless of their sizes.

  2. Self-similar cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Chao, W Z [Cambridge Univ. (UK). Dept. of Applied Mathematics and Theoretical Physics

    1981-07-01

    The kinematics and dynamics of self-similar cosmological models are discussed. The degrees of freedom of the solutions of Einstein's equations for different types of models are listed. The relation between kinematic quantities and the classifications of the self-similarity group is examined. All dust local rotational symmetry models have been found.

  3. A COMPARISON OF SEMANTIC SIMILARITY MODELS IN EVALUATING CONCEPT SIMILARITY

    Directory of Open Access Journals (Sweden)

    Q. X. Xu

    2012-08-01

    Full Text Available Semantic similarities are important in concept definition, recognition, categorization, interpretation, and integration. Many semantic similarity models have been established to evaluate the semantic similarities of objects and/or concepts. To find out the suitability and performance of different models in evaluating concept similarities, we compare four main types of models in this paper: the geometric model, the feature model, the network model, and the transformational model. Fundamental principles and main characteristics of these models are first introduced and compared. Land use and land cover concepts of NLCD92 are employed as examples in the case study. The results demonstrate that correlations between these models are very high, possibly because all these models are designed to simulate the similarity judgement of the human mind.
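As an illustration of how two of the model types compared here differ in practice, below is a minimal sketch (not from the paper; function names and parameter values are illustrative) of their most common instantiations: cosine similarity for the geometric model and the Tversky ratio for the feature model.

```python
import math

def cosine_similarity(a, b):
    """Geometric model: similarity as angular closeness in feature space."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def tversky_similarity(a, b, alpha=0.5, beta=0.5):
    """Feature model (Tversky): weighs shared against distinctive features.
    With alpha = beta = 0.5 this reduces to the Dice coefficient."""
    a, b = set(a), set(b)
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))
```

The geometric model is symmetric by construction, while the Tversky ratio becomes asymmetric as soon as `alpha != beta`, which is one of the behavioural differences such comparisons probe.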

  4. Continuous Improvement and Collaborative Improvement: Similarities and Differences

    DEFF Research Database (Denmark)

    Middel, Rick; Boer, Harry; Fisscher, Olaf

    2006-01-01

    the similarities and differences between key components of continuous and collaborative improvement by assessing what is specific for continuous improvement, what for collaborative improvement, and where the two areas of application meet and overlap. The main conclusions are that there are many more similarities...... between continuous and collaborative improvement. The main differences relate to the role of hierarchy/market, trust, power and commitment to collaboration, all of which are related to differences between the settings in which continuous and collaborative improvement unfold....

  5. Lagrangian-similarity diffusion-deposition model

    International Nuclear Information System (INIS)

    Horst, T.W.

    1979-01-01

    A Lagrangian-similarity diffusion model has been incorporated into the surface-depletion deposition model. This model predicts vertical concentration profiles far downwind of the source that agree with those of a one-dimensional gradient-transfer model.

  6. Notions of similarity for systems biology models.

    Science.gov (United States)

    Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knüpfer, Christian; Liebermeister, Wolfram; Waltemath, Dagmar

    2018-01-01

    Systems biology models are rapidly increasing in complexity, size and numbers. When building large models, researchers rely on software tools for the retrieval, comparison, combination and merging of models, as well as for version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of 'similarity' may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here we survey existing methods for the comparison of models, introduce quantitative measures for model similarity, and discuss potential applications of combined similarity measures. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on a combination of different model aspects. The six aspects that we define as potentially relevant for similarity are underlying encoding, references to biological entities, quantitative behaviour, qualitative behaviour, mathematical equations and parameters, and network structure. We argue that future similarity measures will benefit from combining these model aspects in flexible, problem-specific ways to mimic users' intuition about model similarity, and to support complex model searches in databases. © The Author 2016. Published by Oxford University Press.
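A combined similarity measure of the kind argued for here can be sketched as a weighted average over per-aspect scores. The aspect names below follow the abstract, but the weights and the averaging scheme are illustrative assumptions, not values from the paper.

```python
def combined_similarity(aspect_scores, weights):
    """Combine per-aspect similarity scores (each in [0, 1]) into a single
    score, using a normalized weighted average. Both arguments are dicts
    keyed by aspect name; weights express problem-specific priorities."""
    total_weight = sum(weights.values())
    return sum(weights[a] * aspect_scores[a] for a in weights) / total_weight

# Hypothetical per-aspect scores for two models being compared:
scores = {"encoding": 0.9, "biological_entities": 0.7,
          "equations_and_parameters": 0.5, "network_structure": 0.6}
# A search that cares mostly about annotations might weight aspects as:
weights = {"encoding": 0.5, "biological_entities": 2.0,
           "equations_and_parameters": 1.0, "network_structure": 1.0}
overall = combined_similarity(scores, weights)
```

Making the weights user-adjustable is one direct way to "mimic users' intuition" in a database search, as the abstract suggests.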

  7. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  8. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar; Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knuepfer, Christian; Liebermeister, Wolfram

    2016-01-01

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  9. Improved cosine similarity measures of simplified neutrosophic setsfor medical diagnoses

    OpenAIRE

    Jun Ye

    2014-01-01

    In pattern recognition and medical diagnosis, the similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposed improved cosine similarity measures of SNSs based on the cosine function, including single-valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Then, weighted cosine similarity measures of SNSs were introduced...
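A hedged sketch of a cosine-function-based measure for single-valued neutrosophic sets follows. Each element is a (T, I, F) triple of truth, indeterminacy and falsity memberships in [0, 1]; the max-difference form used here is one published variant of such measures, and the exact functional form should be checked against the paper itself.

```python
import math

def svns_cosine_similarity(A, B):
    """Sketch of an improved cosine similarity for single-valued
    neutrosophic sets A and B, given as equal-length lists of
    (T, I, F) triples. For each element, take the largest membership
    difference and map it through cos(pi * d / 2), so identical
    elements score 1 and maximally different elements score 0;
    average over all elements."""
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb) in zip(A, B):
        d = max(abs(ta - tb), abs(ia - ib), abs(fa - fb))
        total += math.cos(math.pi * d / 2.0)
    return total / len(A)
```

Unlike a plain vector-space cosine, this form cannot be undefined for zero vectors, which is one of the disadvantages such improved measures aim to remove.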

  10. On self-similar Tolman models

    International Nuclear Information System (INIS)

    Maharaj, S.D.

    1988-01-01

    The self-similar spherically symmetric solutions of the Einstein field equations for the case of dust are identified. These form a subclass of the Tolman models. These self-similar models contain the solution recently presented by Chi [J. Math. Phys. 28, 1539 (1987)], thereby refuting the claim of having found a new solution to the Einstein field equations.

  11. Similarity and Modeling in Science and Engineering

    CERN Document Server

    Kuneš, Josef

    2012-01-01

    The present text sets itself apart from other titles on the subject in that it addresses means and methodologies rather than a narrow, specific-task-oriented approach. Concepts and the developments that evolved to meet the changing needs of applications are addressed. This approach provides the reader with a general tool-box to apply to their specific needs. Two important tools are presented: dimensional analysis and the similarity analysis methods. The fundamental point of view, enabling one to sort all models, is that of information flux between a model and an original, expressed by similarity and abstraction. Each chapter includes original examples and applications. In this respect, the models can be divided into several groups. The following models are dealt with separately by chapter: mathematical and physical models, physical analogues, deterministic, stochastic, and cybernetic computer models. The mathematical models are divided into asymptotic and phenomenological models. The phenomenological m...

  12. A Novel Hybrid Similarity Calculation Model

    Directory of Open Access Journals (Sweden)

    Xiaoping Fan

    2017-01-01

    Full Text Available This paper addresses the problems of similarity calculation in traditional recommendation algorithms based on nearest-neighbor collaborative filtering, especially their failure to describe dynamic user preference. Proceeding from the perspective of solving the problem of user interest drift, a new hybrid similarity calculation model is proposed in this paper. This model consists of two parts: on the one hand, the model uses function fitting to describe users' rating behaviors and their rating preferences; on the other hand, it employs the Random Forest algorithm to take user attribute features into account. Furthermore, the paper combines the two parts to build a new hybrid similarity calculation model for user recommendation. Experimental results show that, for data sets of different sizes, the model's prediction precision is higher than that of the traditional recommendation algorithms.

  13. Continuous Improvement and Collaborative Improvement: Similarities and Differences

    NARCIS (Netherlands)

    Middel, H.G.A.; Boer, Harm; Fisscher, O.A.M.

    2006-01-01

    A substantial body of theoretical and practical knowledge has been developed on continuous improvement. However, there is still a considerable lack of empirically grounded contributions and theories on collaborative improvement, that is, continuous improvement in an inter-organizational setting. The

  14. Improved personalized recommendation based on a similarity network

    Science.gov (United States)

    Wang, Ximeng; Liu, Yun; Xiong, Fei

    2016-08-01

    A recommender system helps individual users find preferred items rapidly and has attracted extensive attention in recent years. Many successful recommendation algorithms are designed on bipartite networks, such as network-based inference or heat conduction. However, most of these algorithms define the resource-allocation methods for an average allocation. That is not reasonable, because average allocation cannot reflect user choice preferences or the influence between users, which leads to a series of non-personalized recommendation results. We propose a personalized recommendation approach that combines the similarity function and the bipartite network to generate a similarity network that improves the resource-allocation process. Our model introduces user influence into the recommender system and states that user influence can make the resource-allocation process more reasonable. We use four different metrics to evaluate our algorithms on three benchmark data sets. Experimental results show that the improved recommendation on a similarity network can obtain better accuracy and diversity than some competing approaches.
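For context, the standard resource-allocation step of network-based inference that this line of work modifies can be sketched as below. The toy ratings data is made up, and the paper's similarity-network weighting of the allocation is not shown.

```python
def network_based_inference(ratings, target_user):
    """Sketch of plain resource allocation (network-based inference) on a
    user-item bipartite graph. `ratings` maps user -> set of items.
    Returns the target user's unseen items, ranked by allocated resource."""
    items = set().union(*ratings.values())
    # Step 1: each item the target user collected holds one unit of
    # resource, spread equally among the users who collected that item.
    user_resource = {u: 0.0 for u in ratings}
    for item in ratings[target_user]:
        holders = [u for u in ratings if item in ratings[u]]
        for u in holders:
            user_resource[u] += 1.0 / len(holders)
    # Step 2: each user spreads their resource equally over their items.
    item_score = {i: 0.0 for i in items}
    for u, resource in user_resource.items():
        for item in ratings[u]:
            item_score[item] += resource / len(ratings[u])
    # Rank items the target user has not collected yet.
    return sorted((i for i in items if i not in ratings[target_user]),
                  key=lambda i: -item_score[i])

recommendations = network_based_inference(
    {"u1": {"a", "b"}, "u2": {"b", "c"}}, "u1")
```

Both allocation steps divide equally by node degree; replacing that equal split with similarity-weighted shares is the kind of personalization the abstract describes.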

  15. Improved collaborative filtering recommendation algorithm of similarity measure

    Science.gov (United States)

    Zhang, Baofu; Yuan, Baoping

    2017-05-01

    The collaborative filtering recommendation algorithm is one of the most widely used recommendation algorithms in personalized recommender systems. The key is to find the nearest-neighbor set of the active user by using a similarity measure. However, traditional similarity measures mainly focus on the similarity of users' common rating items but ignore the relationship between the common rating items and all items the user rates. Moreover, because the rating matrix is very sparse, the traditional collaborative filtering recommendation algorithm is not highly efficient. In order to obtain better accuracy, based on the consideration of common preference between users, the difference of rating scale, and the scores of common items, this paper presents an improved similarity measure method; based on this method, a collaborative filtering recommendation algorithm based on similarity improvement is proposed. Experimental results show that the algorithm can effectively improve the quality of recommendation and thus alleviate the impact of data sparseness.
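One common way to realize the idea described here (not necessarily this paper's exact formula) is to weight the Pearson correlation over co-rated items by the fraction of items the two users rated in common, so that a high correlation computed from very few shared items is discounted.

```python
import math

def improved_similarity(ratings_u, ratings_v):
    """Sketch: Pearson correlation over co-rated items, multiplied by a
    Jaccard factor (shared items / all items either user rated).
    ratings_u / ratings_v: dicts mapping item -> rating."""
    common = set(ratings_u) & set(ratings_v)
    if len(common) < 2:
        return 0.0  # too little overlap to estimate a correlation
    mean_u = sum(ratings_u[i] for i in common) / len(common)
    mean_v = sum(ratings_v[i] for i in common) / len(common)
    num = sum((ratings_u[i] - mean_u) * (ratings_v[i] - mean_v)
              for i in common)
    den = (math.sqrt(sum((ratings_u[i] - mean_u) ** 2 for i in common)) *
           math.sqrt(sum((ratings_v[i] - mean_v) ** 2 for i in common)))
    pearson = num / den if den else 0.0
    jaccard = len(common) / len(set(ratings_u) | set(ratings_v))
    return jaccard * pearson
```

The Jaccard factor captures the "relationship between the common rating items and all items the user rates" that the abstract says traditional measures ignore.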

  16. Modeling Timbre Similarity of Short Music Clips.

    Science.gov (United States)

    Siedenburg, Kai; Müllensiefen, Daniel

    2017-01-01

    There is evidence from a number of recent studies that most listeners are able to extract information related to song identity, emotion, or genre from music excerpts with durations in the range of tenths of seconds. Because of these very short durations, timbre, as a multifaceted auditory attribute, appears to be a plausible candidate for the type of feature that listeners make use of when processing short music excerpts. However, the importance of timbre in listening tasks that involve short excerpts has not yet been demonstrated empirically. Hence, the goal of this study was to develop a method that allows exploring the degree to which similarity judgments of short music clips can be modeled with low-level acoustic features related to timbre. We utilized the similarity data from two large samples of participants: Sample I was obtained via an online survey, used 16 clips of 400 ms length, and contained responses of 137,339 participants. Sample II was collected in a lab environment, used 16 clips of 800 ms length, and contained responses from 648 participants. Our model used two sets of audio features, which included commonly used timbre descriptors and the well-known Mel-frequency cepstral coefficients, as well as their temporal derivatives. In order to predict pairwise similarities, the resulting distances between clips in terms of their audio features were used as predictor variables with partial least-squares regression. We found that a sparse selection of three to seven features from both descriptor sets (mainly encoding the coarse shape of the spectrum as well as spectrotemporal variability) best predicted similarities across the two sets of sounds. Notably, the inclusion of non-acoustic predictors of musical genre and record release date allowed much better generalization performance and explained up to 50% of shared variance (R²) between observations and model predictions. Overall, the results of this study empirically demonstrate that both acoustic features related...
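The regression setup described above, in which per-feature distances between clip pairs serve as predictors of judged similarity, can be sketched as follows; the clip identifiers and feature values are invented for illustration, and the regression step itself (partial least squares) is not shown.

```python
import itertools

def pairwise_feature_distances(features):
    """Build the predictor table for a distance-based similarity model.
    `features` maps clip_id -> list of per-descriptor values (e.g. timbre
    descriptors or MFCCs). For every unordered clip pair, return the
    absolute per-descriptor differences; these rows would then be
    regressed (e.g. with PLS) against judged pairwise similarities."""
    rows = {}
    for a, b in itertools.combinations(sorted(features), 2):
        rows[(a, b)] = [abs(x - y)
                        for x, y in zip(features[a], features[b])]
    return rows

predictors = pairwise_feature_distances(
    {"clip1": [0.2, 1.5], "clip2": [0.7, 1.0], "clip3": [0.2, 0.5]})
```

With 16 clips, as in both samples, this yields 120 pairs, each described by one distance per audio feature.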

  17. Inter Genre Similarity Modelling For Automatic Music Genre Classification

    OpenAIRE

    Bagci, Ulas; Erzin, Engin

    2009-01-01

    Music genre classification is an essential tool for music information retrieval systems and it has been finding critical applications in various media platforms. Two important problems of the automatic music genre classification are feature extraction and classifier design. This paper investigates inter-genre similarity modelling (IGS) to improve the performance of automatic music genre classification. Inter-genre similarity information is extracted over the mis-classified feature population....

  18. Face and body recognition show similar improvement during childhood.

    Science.gov (United States)

    Bank, Samantha; Rhodes, Gillian; Read, Ainsley; Jeffery, Linda

    2015-09-01

    Adults are proficient in extracting identity cues from faces. This proficiency develops slowly during childhood, with performance not reaching adult levels until adolescence. Bodies are similar to faces in that they convey identity cues and rely on specialized perceptual mechanisms. However, it is currently unclear whether body recognition mirrors the slow development of face recognition during childhood. Recent evidence suggests that body recognition develops faster than face recognition. Here we measured body and face recognition in 6- and 10-year-old children and adults to determine whether these two skills show different amounts of improvement during childhood. We found no evidence that they do. Face and body recognition showed similar improvement with age, and children, like adults, were better at recognizing faces than bodies. These results suggest that the mechanisms of face and body memory mature at a similar rate or that improvement of more general cognitive and perceptual skills underlies improvement of both face and body recognition. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Similarity search of business process models

    NARCIS (Netherlands)

    Dumas, M.; García-Bañuelos, L.; Dijkman, R.M.

    2009-01-01

    Similarity search is a general class of problems in which a given object, called a query object, is compared against a collection of objects in order to retrieve those that most closely resemble the query object. This paper reviews recent work on an instance of this class of problems, where the

  20. Measuring similarity between business process models

    NARCIS (Netherlands)

    Dongen, van B.F.; Dijkman, R.M.; Mendling, J.

    2007-01-01

    Quality aspects become increasingly important when business process modeling is used in a large-scale enterprise setting. In order to facilitate a storage without redundancy and an efficient retrieval of relevant process models in model databases it is required to develop a theoretical understanding

  1. Quasi-Similarity Model of Synthetic Jets

    Czech Academy of Sciences Publication Activity Database

    Tesař, Václav; Kordík, Jozef

    2009-01-01

    Vol. 149, No. 2 (2009), pp. 255-265 ISSN 0924-4247 R&D Projects: GA AV ČR IAA200760705; GA ČR GA101/07/1499 Institutional research plan: CEZ:AV0Z20760514 Keywords: jets * synthetic jets * similarity solution Subject RIV: BK - Fluid Dynamics Impact factor: 1.674, year: 2009 http://www.sciencedirect.com

  2. Improving Language Production Using Subtitled Similar Task Videos

    Science.gov (United States)

    Arslanyilmaz, Abdurrahman; Pedersen, Susan

    2010-01-01

    This study examines the effects of subtitled similar task videos on language production by nonnative speakers (NNSs) in an online task-based language learning (TBLL) environment. Ten NNS-NNS dyads collaboratively completed four communicative tasks, using an online TBLL environment specifically designed for this study and a chat tool in…

  3. Agile rediscovering values: Similarities to continuous improvement strategies

    Science.gov (United States)

    Díaz de Mera, P.; Arenas, J. M.; González, C.

    2012-04-01

    Research in the late 1980s on technological companies that develop products of high innovation value, with sufficient speed and flexibility to adapt quickly to changing market conditions, gave rise to the new set of methodologies known as the Agile Management Approach. In the current changing economic scenario, we considered it very interesting to study the similarities of these Agile methodologies with other practices whose effectiveness has been amply demonstrated in both the West and Japan. Strategies such as Kaizen, Lean, World Class Manufacturing, Concurrent Engineering, etc., are analyzed to check the values they have in common with the Agile approach.

  4. Similar words analysis based on POS-CBOW language model

    Directory of Open Access Journals (Sweden)

    Dongru RUAN

    2015-10-01

    Full Text Available Similar words analysis is one of the important aspects in the field of natural language processing, and it has important research and application value in text classification, machine translation and information recommendation. Focusing on the features of Sina Weibo's short texts, this paper presents a language model named POS-CBOW, a continuous bag-of-words language model with a filtering layer and a part-of-speech tagging layer. The proposed approach can adjust word vectors' similarity according to the cosine similarity and the word vectors' part-of-speech metrics. It can also filter the similar-words set on the basis of the statistical analysis model. The experimental results show that the similar words analysis algorithm based on the proposed POS-CBOW language model is better than that based on the traditional CBOW language model.
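A minimal sketch of a POS-aware similarity adjustment of the kind described follows. The penalty factor for mismatched part-of-speech tags is an assumption for illustration; the exact mechanism in POS-CBOW may differ.

```python
import math

def pos_adjusted_similarity(vec_a, vec_b, pos_a, pos_b, penalty=0.5):
    """Sketch: cosine similarity between two word vectors, down-weighted
    by a hypothetical `penalty` factor when the words' part-of-speech
    tags differ, so that e.g. a noun and a verb with similar vectors
    rank below two nouns with similar vectors."""
    dot = sum(x * y for x, y in zip(vec_a, vec_b))
    norm = (math.sqrt(sum(x * x for x in vec_a)) *
            math.sqrt(sum(x * x for x in vec_b)))
    cosine = dot / norm if norm else 0.0
    return cosine if pos_a == pos_b else penalty * cosine
```

A similar-words query would then rank candidates by this adjusted score instead of raw cosine similarity.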

  5. Self-Similar Symmetry Model and Cosmic Microwave Background

    Directory of Open Access Journals (Sweden)

    Tomohide Sonoda

    2016-05-01

    Full Text Available In this paper, we present the self-similar symmetry (SSS model that describes the hierarchical structure of the universe. The model is based on the concept of self-similarity, which explains the symmetry of the cosmic microwave background (CMB. The approximate length and time scales of the six hierarchies of the universe (grand unification, electroweak unification, the atom, the pulsar, the solar system, and the galactic system) are derived from the SSS model. In addition, the model implies that the electron mass and gravitational constant could vary with the CMB radiation temperature.

  6. Simulation and similarity using models to understand the world

    CERN Document Server

    Weisberg, Michael

    2013-01-01

    In the 1950s, John Reber convinced many Californians that the best way to solve the state's water shortage problem was to dam up the San Francisco Bay. Against massive political pressure, Reber's opponents persuaded lawmakers that doing so would lead to disaster. They did this not by empirical measurement alone, but also through the construction of a model. Simulation and Similarity explains why this was a good strategy while simultaneously providing an account of modeling and idealization in modern scientific practice. Michael Weisberg focuses on concrete, mathematical, and computational models in his consideration of the nature of models, the practice of modeling, and nature of the relationship between models and real-world phenomena. In addition to a careful analysis of physical, computational, and mathematical models, Simulation and Similarity offers a novel account of the model/world relationship. Breaking with the dominant tradition, which favors the analysis of this relation through logical notions suc...

  7. Perception of similarity: a model for social network dynamics

    International Nuclear Information System (INIS)

    Javarone, Marco Alberto; Armano, Giuliano

    2013-01-01

    Some properties of social networks (e.g., the mixing patterns and the community structure) appear deeply influenced by the individual perception of people. In this work we map behaviors by considering similarity and popularity of people, also assuming that each person has his/her proper perception and interpretation of similarity. Although investigated in different ways (depending on the specific scientific framework), from a computational perspective similarity is typically calculated as a distance measure. In accordance with this view, to represent social network dynamics we developed an agent-based model on top of a hyperbolic space on which individual distance measures are calculated. Simulations, performed in accordance with the proposed model, generate small-world networks that exhibit a community structure. We deem this model to be valuable for analyzing the relevant properties of real social networks. (paper)

  8. Brazilian N2 laser similar to imported models

    International Nuclear Information System (INIS)

    Santos, P.A.M. dos; Tavares Junior, A.D.; Silva Reis, H. da; Tagliaferri, A.A.; Massone, C.A.

    1981-09-01

    The development of a high-power N2 laser, similar to imported models but built entirely with Brazilian materials, is described. The prototype shows a pulse repetition rate that varies from 1 to 50 per second and has a peak power of 500 kW. (Author) [pt

  9. Bianchi VI0 and III models: self-similar approach

    International Nuclear Information System (INIS)

    Belinchon, Jose Antonio

    2009-01-01

    We study several cosmological models with Bianchi VI0 and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive for the equation of state. We also study a perfect fluid model with time-varying constants, G and Λ. As in other studied models, we find that the behaviours of G and Λ are related. If G behaves as a growing time function, then Λ is a positive decreasing time function; but if G is decreasing, then Λ is negative. We end by studying a massive cosmic string model, putting special emphasis on calculating the numerical values of the equations of state. We show that there is no self-similar solution for a string model with time-varying constants.

  10. Morphological similarities between DBM and a microeconomic model of sprawl

    Science.gov (United States)

    Caruso, Geoffrey; Vuidel, Gilles; Cavailhès, Jean; Frankhauser, Pierre; Peeters, Dominique; Thomas, Isabelle

    2011-03-01

    We present a model that simulates the growth of a metropolitan area on a 2D lattice. The model is dynamic and based on microeconomics. Households show preferences for nearby open spaces and neighbourhood density. They compete on the land market. They travel along a road network to access the CBD. A planner ensures the connectedness and maintenance of the road network. The spatial pattern of houses, green spaces and the road network self-organises, emerging from agents' individualistic decisions. We perform several simulations and vary residential preferences. Our results show morphologies and transition phases that are similar to those of Dielectric Breakdown Models (DBM). Such similarities were observed earlier by other authors, but we show here that this can be deduced from the functioning of the land market and thus explicitly connected to urban economic theory.

  11. Numerical study of similarity in prototype and model pumped turbines

    International Nuclear Information System (INIS)

    Li, Z J; Wang, Z W; Bi, H L

    2014-01-01

    A similarity study of prototype and model pumped turbines is performed by numerical simulation, and the partial discharge case is analysed in detail. It is found that in the RSI (rotor-stator interaction) region, where the flow is convectively accelerated with minor flow separation, a high level of similarity in flow patterns and pressure fluctuation appears, with the relative pressure fluctuation amplitude of the model turbine slightly higher than that of the prototype turbine. As for the condition in the runner, where the flow is convectively accelerated with severe separation, similarity fades substantially due to the different topology of flow separation and vortex formation brought about by the distinctive Reynolds numbers of the two turbines. In the draft tube, where the flow is diffusively decelerated, similarity becomes debilitated owing to different vortex rope formation impacted by the Reynolds number. It is noted that the pressure fluctuation amplitude and characteristic frequency of the model turbine are larger than those of the prototype turbine. The differences in pressure fluctuation characteristics are discussed theoretically through the dimensionless Navier-Stokes equation. The above conclusions are all based on simulation without regard to the penstock response and resonance.

  12. APPLICABILITY OF SIMILARITY CONDITIONS TO ANALOGUE MODELLING OF TECTONIC STRUCTURES

    Directory of Open Access Journals (Sweden)

    Mikhail A. Goncharov

    2010-01-01

    Full Text Available The publication is aimed at comparing the concepts of V.V. Belousov and M.V. Gzovsky, outstanding researchers who established the fundamentals of tectonophysics in Russia, specifically similarity conditions as applied to tectonophysical modelling. Quotations from their publications illustrate the differences in their views. In this respect, we can reckon V.V. Belousov as a 'realist' as he supported 'the liberal point of view' [Methods of modelling…, 1988, p. 21–22], whereas M.V. Gzovsky can be regarded as an 'idealist' as he believed that similarity conditions should be mandatorily applied to ensure correctness of physical modelling of tectonic deformations and structures [Gzovsky, 1975, pp. 88 and 94]. The objectives of the present publication are (1) to be another reminder about the desirability of compliance with similarity conditions in experimental tectonics; (2) to point out difficulties in ensuring such compliance; (3) to give examples which bring out the fact that similarity conditions are often met per se, i.e. automatically observed; and (4) to show that modelling can be simplified in some cases without compromising quantitative estimations of the parameters of structure formation. (1) Physical modelling of tectonic deformations and structures should be conducted, if possible, in compliance with conditions of geometric and physical similarity between experimental models and the corresponding natural objects. In any case, a researcher should have a clear vision of the conditions applicable to each particular experiment. (2) Application of similarity conditions is often challenging due to unavoidable difficulties caused by the following: (a) imperfection of experimental equipment and technologies (Fig. 1 to 3); (b) uncertainties in estimating the parameters of formation of natural structures, including the main ones: structure size (Fig. 4), time of formation (Fig. 5), and deformation properties of the medium wherein such structures are formed, including, first of all, viscosity (Fig. 6)
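As a concrete example of a similarity condition in analogue modelling, a commonly used scaling for slow, gravity-driven viscous deformation requires the viscosity ratio between model and nature to equal the product of the density, length and time ratios (taking laboratory gravity equal to natural gravity). This is a standard textbook relation offered for illustration, not a formula taken from the publication.

```python
def required_model_viscosity(mu_nature, rho_ratio, length_ratio, time_ratio):
    """Sketch of a dynamic similarity condition for gravity-driven
    viscous (creeping) flow: stress scales as rho * g * L, and viscous
    stress as mu / t, so mu_model / mu_nature = rho_r * L_r * t_r
    (with g_r = 1). Returns the viscosity the model material must have."""
    return mu_nature * rho_ratio * length_ratio * time_ratio

# Illustrative numbers: a rock viscosity of 1e19 Pa*s, a model material
# half as dense, 1 cm in the lab representing 10 km in nature
# (L_r = 1e-6), and 1 hour representing ~100 kyr (t_r ~ 1e-9).
mu_model = required_model_viscosity(1e19, 0.5, 1e-6, 1e-9)
```

Such a calculation is exactly how compliance with similarity conditions constrains the choice of model materials (e.g. silicone putty rather than rock).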

  13. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    Energy Technology Data Exchange (ETDEWEB)

    Ovacik, Meric A. [Chemical and Biochemical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States); Androulakis, Ioannis P., E-mail: yannis@rci.rutgers.edu [Chemical and Biochemical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States); Biomedical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States)

    2013-09-15

    Pathway-based information has become an important source of information both for establishing evolutionary relationships and for understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison to inform evolutionary relationships, and comparison to extrapolate species differences for applications such as drug and toxicity testing. Cross-species comparison of metabolic pathways is complex, as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for enzyme sequence similarity in addition to reaction alignment. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing the conservation of the glycolysis and citrate cycle pathways. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism, as well as a more complete study involving 36 pathways common to all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway-specific cross-species similarities and differences based on NCBI taxonomy.
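
    The idea of refining a reaction-overlap score with enzyme sequence similarity can be illustrated with a toy sketch. All data here are invented (hypothetical EC numbers, identity values, and an arbitrary 50/50 weighting); the paper's actual scoring scheme is not reproduced:

    ```python
    def reaction_alignment(pathway_a, pathway_b):
        """Jaccard overlap of two pathways viewed as sets of reactions."""
        a, b = set(pathway_a), set(pathway_b)
        return len(a & b) / len(a | b)

    def combined_similarity(pathway_a, pathway_b, seq_identity, weight=0.5):
        """Blend reaction alignment with mean enzyme sequence identity
        over the shared reactions (weight is a hypothetical 50/50 mix)."""
        shared = set(pathway_a) & set(pathway_b)
        alignment = reaction_alignment(pathway_a, pathway_b)
        if not shared:
            return (1 - weight) * alignment
        mean_identity = sum(seq_identity[r] for r in shared) / len(shared)
        return (1 - weight) * alignment + weight * mean_identity

    # Toy glycolysis fragments as EC-number lists for two species, with
    # invented orthologue sequence identities for the shared reactions.
    human = ["2.7.1.1", "5.3.1.9", "2.7.1.11"]
    yeast = ["2.7.1.1", "5.3.1.9", "2.7.1.90"]
    identity = {"2.7.1.1": 0.62, "5.3.1.9": 0.55}
    score = combined_similarity(human, yeast, identity)
    ```

    Two species sharing the same reactions but with divergent enzyme sequences thus score lower than species sharing both reactions and near-identical enzymes, which is the refinement the abstract describes.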

  14. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    International Nuclear Information System (INIS)

    Ovacik, Meric A.; Androulakis, Ioannis P.

    2013-01-01

    Pathway-based information has become an important source of information both for establishing evolutionary relationships and for understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison to inform evolutionary relationships, and comparison to extrapolate species differences for applications such as drug and toxicity testing. Cross-species comparison of metabolic pathways is complex, as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for enzyme sequence similarity in addition to reaction alignment. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing the conservation of the glycolysis and citrate cycle pathways. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism, as well as a more complete study involving 36 pathways common to all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway-specific cross-species similarities and differences based on NCBI taxonomy.

  15. The continuous similarity model of bulk soil-water evaporation

    Science.gov (United States)

    Clapp, R. B.

    1983-01-01

    The continuous similarity model of evaporation is described. In it, evaporation is conceptualized as a two-stage process. For an initially moist soil, evaporation is first climate limited, but later it becomes soil limited. During the latter stage, the evaporation rate is termed evaporability, and mathematically it is inversely proportional to the evaporation deficit. A functional approximation of the moisture distribution within the soil column is also included in the model. The model was tested using data from four experiments conducted near Phoenix, Arizona, and there was excellent agreement between the simulated and observed evaporation. The model also predicted the time of transition to the soil-limited stage reasonably well. For one of the experiments, a third stage of evaporation, when vapor diffusion predominates, was observed. The occurrence of this stage was related to the decrease in moisture at the surface of the soil. The continuous similarity model does not account for vapor flow. The results show that climate, through the potential evaporation rate, has a strong influence on the time of transition to the soil-limited stage. After this transition, however, bulk evaporation is independent of climate until the effects of vapor flow within the soil predominate.

  16. Exploiting similarity in turbulent shear flows for turbulence modeling

    Science.gov (United States)

    Robinson, David F.; Harris, Julius E.; Hassan, H. A.

    1992-01-01

    It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set and comparison with measured shear stress and velocity profiles yield the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for such an approach is that if we can correctly model the flow over a flat plate and its far wake, then we have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence. This is because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.

  17. Exploiting similarity in turbulent shear flows for turbulence modeling

    Science.gov (United States)

    Robinson, David F.; Harris, Julius E.; Hassan, H. A.

    1992-12-01

    It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set and comparison with measured shear stress and velocity profiles yield the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for such an approach is that if we can correctly model the flow over a flat plate and its far wake, then we have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence. This is because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.

  18. Self-similar two-particle separation model

    DEFF Research Database (Denmark)

    Lüthi, Beat; Berg, Jacob; Ott, Søren

    2007-01-01

    We present a new stochastic model for relative two-particle separation in turbulence. Inspired by material line stretching, we suggest that a similar process also occurs beyond the viscous range, with time scaling according to the longitudinal second-order structure function S2(r), e.g., in the inertial range as epsilon^(-1/3) r^(2/3). Particle separation is modeled as a Gaussian process without invoking information on Eulerian acceleration statistics or on the precise shapes of Eulerian velocity distribution functions. The time scale is a function of S2(r) and thus of the Lagrangian evolving separation. The model predictions agree with numerical and experimental results for various initial particle separations. We present model results for fixed-time and fixed-scale statistics. We find that for the Richardson-Obukhov law, i.e., <r^2> = g epsilon t^3, to hold and to also be observed in experiments, high Reynolds…

  19. Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.

    Science.gov (United States)

    Ye, Jun

    2015-03-01

    In pattern recognition and medical diagnosis, similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposed improved cosine similarity measures of SNSs based on the cosine function, including single-valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Then, weighted cosine similarity measures of SNSs were introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures was proposed to solve medical diagnosis problems with simplified neutrosophic information. The improved cosine similarity measures between SNSs were compared with existing cosine similarity measures of SNSs by numerical examples to demonstrate their effectiveness and rationality in overcoming some shortcomings of the existing measures in certain cases. In the medical diagnosis method, a proper diagnosis is found via the cosine similarity measures between the symptoms and the considered diseases, both represented by SNSs. The method based on the improved cosine similarity measures was then applied to two medical diagnosis problems to show its applications and effectiveness. Both numerical examples demonstrated that the improved cosine similarity measures of SNSs based on the cosine function can overcome the shortcomings of the existing cosine similarity measures between two vectors in some cases. In the two medical diagnosis problems, the diagnoses obtained with the various similarity measures of SNSs were identical, demonstrating the effectiveness and rationality of the proposed diagnosis method.
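    A minimal sketch of a cosine-function similarity measure for single-valued neutrosophic sets, in the spirit described above. This uses the max-of-component-differences form; the paper also defines averaged, weighted, and interval-valued variants, and its exact normalisation may differ:

    ```python
    import math

    def improved_cosine_similarity(a, b):
        """Cosine-function similarity between two single-valued
        neutrosophic sets, given as equal-length lists of
        (truth, indeterminacy, falsity) triples in [0, 1].
        Uses the maximum component difference per element."""
        total = 0.0
        for (ta, ia, fa), (tb, ib, fb) in zip(a, b):
            diff = max(abs(ta - tb), abs(ia - ib), abs(fa - fb))
            total += math.cos(math.pi / 2 * diff)
        return total / len(a)
    ```

    Identical sets score exactly 1.0 and fully opposed elements score 0, avoiding the degenerate cases of the classical vector-cosine measure that motivated the improvement.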

  20. MAC/FAC: A Model of Similarity-Based Retrieval

    Science.gov (United States)

    1994-10-01

    MAC/FAC: A Model of Similarity-Based Retrieval. Kenneth D. Forbus, Dedre Gentner, and Keith Law. The Institute for the Learning Sciences, Northwestern University, Technical Report #59, October 1994. [The record text is OCR residue from the scanned report cover and a retrieval-results table scoring "Sour Grapes" against analog and literal matches such as "The Taming of the Shrew" and "Merry Wives"; no abstract is recoverable.]
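
    MAC/FAC's two-stage scheme (MAC: a cheap content-vector match over all of memory; FAC: a more expensive structural match over the few survivors) can be sketched as follows. The toy vectors and relation sets here are stand-ins; the actual report uses content vectors derived from structured representations and a structure-mapping engine for the FAC stage:

    ```python
    def dot(u, v):
        """Sparse dot product of two {term: weight} vectors."""
        return sum(u.get(k, 0.0) * w for k, w in v.items())

    def mac_fac(probe_vec, probe_struct, memory, k=2):
        """MAC: score every item by a cheap dot product, keep the top k.
        FAC: re-rank the survivors by a (here, naive) structural overlap."""
        survivors = sorted(memory, key=lambda m: dot(probe_vec, m["vec"]),
                           reverse=True)[:k]
        return max(survivors, key=lambda m: len(probe_struct & m["struct"]))["name"]

    # Toy memory: a surface-similar literal match and a structurally
    # similar analog (relations are illustrative placeholders).
    memory = [
        {"name": "literal", "vec": {"fox": 1, "grapes": 1},
         "struct": {("desire", "fox", "grapes")}},
        {"name": "analog", "vec": {"critic": 1, "prize": 1},
         "struct": {("desire", "x", "y"), ("fail", "x"), ("disparage", "x", "y")}},
        {"name": "unrelated", "vec": {"ship": 1}, "struct": set()},
    ]
    probe_vec = {"fox": 1, "grapes": 1}
    probe_struct = {("desire", "x", "y"), ("fail", "x"), ("disparage", "x", "y")}
    best = mac_fac(probe_vec, probe_struct, memory)
    ```

    The cheap first stage keeps retrieval tractable over large memories, while the second stage lets structural similarity override mere surface overlap among the candidates it receives.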

  1. Face Recognition Performance Improvement using a Similarity Score of Feature Vectors based on Probabilistic Histograms

    Directory of Open Access Journals (Sweden)

    SRIKOTE, G.

    2016-08-01

    Full Text Available This paper proposes an improved face recognition algorithm to identify mismatched face pairs in cases of incorrect decisions. The primary feature of this method is to deploy a similarity score with respect to Gaussian components between two previously unseen faces. Unlike conventional vector distance measurement, our algorithm also considers the plot of the summation of the similarity index versus face feature vector distance. A mixture of Gaussian models of labeled faces is also widely applicable to different biometric system parameters. Comparative evaluations show that the efficiency of the proposed algorithm is superior to that of the conventional algorithm by an average accuracy of up to 1.15% and 16.87% when compared with 3x3 Multi-Region Histogram (MRH) direct-bag-of-features and Principal Component Analysis (PCA)-based face recognition systems, respectively. The experimental results show that similarity score consideration is more discriminative for face recognition than feature distance. Experimental results on the Labeled Faces in the Wild (LFW) data set demonstrate that our algorithm is suitable for real-world probe-to-gallery identification in face recognition systems. Moreover, the proposed method can also be applied to other recognition systems, where it likewise improves recognition scores.

  2. Vere-Jones' self-similar branching model

    International Nuclear Information System (INIS)

    Saichev, A.; Sornette, D.

    2005-01-01

    Motivated by its potential application to earthquake statistics as well as by its intrinsic interest in the theory of branching processes, we study the exactly self-similar branching process introduced recently by Vere-Jones. This model extends the ETAS class of conditional self-excited branching point-processes of triggered seismicity by removing the problematic need for a minimum (as well as maximum) earthquake size. To make the theory convergent without the need for the usual ultraviolet and infrared cutoffs, the distribution of magnitudes m' of first-generation daughters of a mother of magnitude m has two branches, m' < m with exponent β−d and m' > m with exponent β+d, where β and d are two positive parameters. We investigate the condition and nature of the subcritical, critical, and supercritical regimes in this and in an extended version interpolating smoothly between several models. We predict that the distribution of magnitudes of events triggered by a mother of magnitude m over all generations also has two branches, m' < m with exponent β−h and m' > m with exponent β+h, with h=d√(1-s), where s is the fraction of triggered events. This corresponds to a renormalization of the exponent d into h by the hierarchy of successive generations of triggered events. For a significant part of the parameter space, the distribution of magnitudes over a full catalog summed over an average steady flow of spontaneous sources (immigrants) reproduces the distribution of the spontaneous sources with a single branch and is blind to the exponents β, d of the distribution of triggered events. Since the distribution of earthquake magnitudes is usually obtained with catalogs including many sequences, we conclude that the two branches of the distribution of aftershocks are not directly observable and the model is compatible with real seismic catalogs. In summary, the exactly self-similar Vere-Jones model provides an attractive new approach to modeling triggered seismicity, which alleviates delicate questions on the role of

  3. A self-similar magnetohydrodynamic model for ball lightnings

    International Nuclear Information System (INIS)

    Tsui, K. H.

    2006-01-01

    Ball lightning is modeled by magnetohydrodynamic (MHD) equations in two-dimensional spherical geometry with azimuthal symmetry. Dynamic evolutions in the radial direction are described by the self-similar evolution function y(t). The plasma pressure, mass density, and magnetic fields are solved in terms of the radial label η. This model gives spherical MHD plasmoids with axisymmetric force-free magnetic field, and spherically symmetric plasma pressure and mass density, which self-consistently determine the polytropic index γ. The spatially oscillating nature of the radial and meridional field structures indicates embedded regions of closed field lines. These regions are named secondary plasmoids, whereas the overall self-similar spherical structure is named the primary plasmoid. According to this model, the time evolution function allows the primary plasmoid to expand outward in two modes. The corresponding ejection of the embedded secondary plasmoids results in ball lightning, offering an answer as to how they come into being. The first is an accelerated expanding mode. This mode appears to fit plasmoids ejected from thundercloud tops with acceleration to the ionosphere, as seen in high-altitude atmospheric observations of sprites and blue jets. It also appears to account for midair high-speed ball lightning overtaking airplanes, and for ground-level high-speed energetic ball lightning. The second is a decelerated expanding mode, which appears to be compatible with slowly moving ball lightning seen near ground level. The inverse of this second mode corresponds to an accelerated inward collapse, which could bring ball lightning to an end, sometimes with a cracking sound.

  4. Automated Student Model Improvement

    Science.gov (United States)

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  5. Towards Modelling Variation in Music as Foundation for Similarity

    NARCIS (Netherlands)

    Volk, A.; de Haas, W.B.; van Kranenburg, P.; Cambouropoulos, E.; Tsougras, C.; Mavromatis, P.; Pastiadis, K.

    2012-01-01

    This paper investigates the concept of variation in music from the perspective of music similarity. Music similarity is a central concept in Music Information Retrieval (MIR); however, there exists no comprehensive approach to music similarity yet. As a consequence, MIR faces the challenge of how to

  6. Model-observer similarity, error modeling and social learning in rhesus macaques.

    Directory of Open Access Journals (Sweden)

    Elisabetta Monfardini

    Full Text Available Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended on the contrary to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, as preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.

  7. QSAR models based on quantum topological molecular similarity.

    Science.gov (United States)

    Popelier, P L A; Smith, P J

    2006-07-01

    A new method called quantum topological molecular similarity (QTMS) was fairly recently proposed [J. Chem. Inf. Comp. Sc., 41, 2001, 764] to construct a variety of medicinal, ecological and physical organic QSAR/QSPRs. The QTMS method uses quantum chemical topology (QCT) to define electronic descriptors drawn from modern ab initio wave functions of geometry-optimised molecules. It was shown that the current abundance of computing power can be utilised to inject realistic descriptors into QSAR/QSPRs. In this article we study seven datasets of medicinal interest: the dissociation constants (pKa) for a set of substituted imidazolines, the pKa of imidazoles, the ability of a set of indole derivatives to displace [3H]flunitrazepam from binding to bovine cortical membranes, the influenza inhibition constants for a set of benzimidazoles, the interaction constants for a set of amides and the enzyme liver alcohol dehydrogenase, the natriuretic activity of sulphonamide carbonic anhydrase inhibitors, and the toxicity of a series of benzyl alcohols. A partial least squares analysis in conjunction with a genetic algorithm delivered excellent models. They are also able to highlight the active site of the ligand, or the part of the molecule whose structure determines the activity. The advantages and limitations of QTMS are discussed.

  8. Visual reconciliation of alternative similarity spaces in climate modeling

    Science.gov (United States)

    J Poco; A Dasgupta; Y Wei; William Hargrove; C.R. Schwalm; D.N. Huntzinger; R Cook; E Bertini; C.T. Silva

    2015-01-01

    Visual data analysis often requires grouping of data objects based on their similarity. In many application domains researchers use algorithms and techniques like clustering and multidimensional scaling to extract groupings from data. While extracting these groups using a single similarity criterion is relatively straightforward, comparing alternative criteria poses...

  9. Self-similar solution for coupled thermal electromagnetic model ...

    African Journals Online (AJOL)

    An investigation into the existence and uniqueness of self-similar solutions for the coupled Maxwell and Pennes bio-heat equations has been carried out. Criteria for the existence and uniqueness of a self-similar solution are given in the subsequent theorems. Journal of the Nigerian Association of Mathematical Physics ...

  10. A Model-Based Approach to Constructing Music Similarity Functions

    Science.gov (United States)

    West, Kris; Lamere, Paul

    2006-12-01

    Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.

  11. A Model-Based Approach to Constructing Music Similarity Functions

    Directory of Open Access Journals (Sweden)

    Lamere Paul

    2007-01-01

    Full Text Available Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.
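
    The projection of audio features into a cultural-label space, as described above, can be sketched as follows. The per-genre prototype "classifier" here is a hypothetical stand-in for the trained models the authors use, and the two-dimensional features are invented:

    ```python
    import math

    def label_profile(features, prototypes):
        """Softmax over negative squared distances to per-genre prototype
        feature vectors: a toy stand-in for a trained classifier that
        projects audio features into a cultural-label space."""
        scores = {g: -sum((f - p) ** 2 for f, p in zip(features, proto))
                  for g, proto in prototypes.items()}
        z = sum(math.exp(s) for s in scores.values())
        return {g: math.exp(s) / z for g, s in scores.items()}

    def model_based_distance(feat_a, feat_b, prototypes):
        """Distance measured in label-probability space rather than in
        the raw feature space."""
        pa = label_profile(feat_a, prototypes)
        pb = label_profile(feat_b, prototypes)
        return math.sqrt(sum((pa[g] - pb[g]) ** 2 for g in prototypes))

    genres = {"rock": (0.0, 0.0), "jazz": (1.0, 1.0)}  # hypothetical prototypes
    ```

    Two tracks on the same side of the label boundary end up close in the intermediate space even if their raw features differ, which is how the cultural labels reshape the similarity function.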

  12. Modeling Recognition Memory Using the Similarity Structure of Natural Input

    Science.gov (United States)

    Lacroix, Joyca P. W.; Murre, Jaap M. J.; Postma, Eric O.; van den Herik, H. Jaap

    2006-01-01

    The natural input memory (NIM) model is a new model for recognition memory that operates on natural visual input. A biologically informed perceptual preprocessing method takes local samples (eye fixations) from a natural image and translates these into a feature-vector representation. During recognition, the model compares incoming preprocessed…

  13. Improved steamflood analytical model

    Energy Technology Data Exchange (ETDEWEB)

    Chandra, S.; Mamora, D.D. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Texas A and M Univ., TX (United States)

    2005-11-01

    Predicting the performance of steam flooding can help in the proper execution of enhanced oil recovery (EOR) processes. The Jones model is often used for analytical steam flooding performance prediction, but it does not accurately predict oil production peaks. In this study, an improved steam flood model was developed by modifying 2 of the 3 components of the capture factor in the Jones model. The modifications were based on simulation results from a Society of Petroleum Engineers (SPE) comparative project case model. The production performance of a 5-spot steamflood pattern unit was simulated and compared with results obtained from the Jones model. Three reservoir types were simulated through the use of 3-D Cartesian black oil models. In order to correlate the simulation and the Jones analytical model results for the start and height of the production peak, the dimensionless steam zone size was modified to account for a decrease in oil viscosity during steam flooding and its dependence on the steam injection rate. In addition, the dimensionless volume of displaced oil produced was modified from its square-root format to an exponential form. The modified model improved results for production performance by up to 20 years of simulated steam flooding, compared to the Jones model. Results agreed with simulation results for 13 different cases, including 3 different sets of reservoir and fluid properties. Reservoir engineers will benefit from the improved accuracy of the model. Oil displacement calculations were based on methods proposed in earlier research, in which the oil displacement rate is a function of cumulative oil steam ratio. The cumulative oil steam ratio is a function of overall thermal efficiency. Capture factor component formulae were presented, as well as charts of oil production rates and cumulative oil-steam ratios for various reservoirs. 13 refs., 4 tabs., 29 figs.

  14. The predictive model on the user reaction time using the information similarity

    International Nuclear Information System (INIS)

    Lee, Sung Jin; Heo, Gyun Young; Chang, Soon Heung

    2005-01-01

    Human performance is frequently degraded because people forget. Memory is one of the brain processes that are important when trying to understand how people process information. Although a large number of studies have been made on human performance, little is known about the similarity effect in human performance. The purpose of this paper is to propose and validate a quantitative, predictive model of human response time in the user interface based on the concept of similarity. However, it is not easy to explain human performance with similarity or information amount alone. We are confronted by two difficulties: making a quantitative model of human response time with similarity, and validating the proposed model by experimental work. We made the quantitative model based on Hick's law and the law of practice. In addition, we validated the model under various experimental conditions by measuring participants' response time in a computer-based display environment. Experimental results reveal that human performance is improved by similarity in the user interface. We think that the proposed model is useful for the user interface design and evaluation phases.
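
    A hedged sketch of such a model: Hick's law contributes a response-time term growing with the information content of the choice, and the law of practice a term decaying with repetition. The constants and the additive combination are illustrative only; the paper's similarity term is not reproduced here:

    ```python
    import math

    def response_time(n_alternatives, trials, a=0.2, b=0.15, c=0.6, alpha=0.4):
        """Response time = Hick's-law term + power-law-of-practice term.

        a, b: Hick's law intercept and slope; c, alpha: practice
        constants. All values are invented for illustration."""
        hick = a + b * math.log2(n_alternatives + 1)   # choice information
        practice = c * trials ** (-alpha)              # speed-up with repetition
        return hick + practice
    ```

    With such a form, more alternatives slow the response while repeated exposure speeds it up; a similarity effect would enter by effectively reducing the information content of a familiar-looking display.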

  15. Modeling recognition memory using the similarity structure of natural input

    NARCIS (Netherlands)

    Lacroix, J.P.W.; Murre, J.M.J.; Postma, E.O.; van den Herik, H.J.

    2006-01-01

    The natural input memory (NIM) model is a new model for recognition memory that operates on natural visual input. A biologically informed perceptual preprocessing method takes local samples (eye fixations) from a natural image and translates these into a feature-vector representation. During

  16. Approximate self-similarity in models of geological folding

    NARCIS (Netherlands)

    Budd, C.J.; Peletier, M.A.

    2000-01-01

    We propose a model for the folding of rock under the compression of tectonic plates. This models an elastic rock layer imbedded in a viscous foundation by a fourth-order parabolic equation with a nonlinear constraint. The large-time behavior of solutions of this problem is examined and found to be

  17. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    Science.gov (United States)

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for the management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. It covers the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, extending seqdb_demo to store sequence similarity search results, and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
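
    The library-subsetting idea can be sketched with an in-memory SQLite table. This is a toy stand-in for the unit's seqdb_demo database (whose real schema, with annotation and taxonomy tables, is richer), with invented accessions and sequence fragments:

    ```python
    import sqlite3

    # Minimal protein table standing in for a seqdb_demo-style schema.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, seq TEXT)")
    conn.executemany("INSERT INTO protein VALUES (?, ?, ?)", [
        ("P1", "Homo sapiens", "MKTAYIAKQR"),
        ("P2", "Homo sapiens", "MVLSPADKTN"),
        ("P3", "Escherichia coli", "MSKGEELFTG"),
    ])

    # Build a taxon-restricted library subset before a similarity search,
    # so search statistics are computed only over likely homologs.
    subset = conn.execute(
        "SELECT acc, seq FROM protein WHERE taxon = ?", ("Homo sapiens",)
    ).fetchall()
    ```

    The selected rows would then be exported as a FASTA-style library for the similarity search tool, which is the workflow the unit automates.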

  18. Similarity solutions for systems arising from an Aedes aegypti model

    Science.gov (United States)

    Freire, Igor Leite; Torrisi, Mariano

    2014-04-01

In a recent paper a new model for Aedes aegypti mosquito dispersal dynamics was proposed and its Lie point symmetries were investigated. According to the group classification carried out there, the maximal symmetry Lie algebra of the nonlinear cases is reached whenever the advection term vanishes. In this work we analyze the family of systems obtained when the wind effects on the proposed model are neglected. Wide new classes of solutions to the systems under consideration are obtained.

  19. Predictive analytics technology review: Similarity-based modeling and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, James; Doan, Don; Gandhi, Devang; Nieman, Bill

    2010-09-15

Over 11 years ago, SmartSignal introduced Predictive Analytics for eliminating equipment failures, using its patented SBM technology. SmartSignal continues to lead the market and, in 2010, went one step further and introduced Predictive Diagnostics. Now, SmartSignal is combining Predictive Diagnostics with RCM methodology and industry expertise. FMEA logic reengineers maintenance work management, eliminates unneeded inspections, and focuses efforts on the real issues. This integrated solution significantly lowers maintenance costs, protects against critical asset failures, improves commercial availability, and reduces work orders by 20-40%.

  20. Similarity and accuracy of mental models formed during nursing handovers: A concept mapping approach.

    Science.gov (United States)

    Drach-Zahavy, Anat; Broyer, Chaya; Dagan, Efrat

    2017-09-01

Shared mental models are crucial for constructing mutual understanding of the patient's condition during a clinical handover. Yet scant research, if any, has empirically explored the mental models of the parties involved in a clinical handover. This study aimed to examine the similarities among the mental models of incoming and outgoing nurses, and to test their accuracy by comparing them with the mental models of expert nurses. A cross-sectional study explored nurses' mental models via the concept mapping technique across 40 clinical handovers. Data were collected via concept mapping of the incoming, outgoing, and expert nurses' mental models (a total of 120 concept maps). Similarity and accuracy indexes for concepts and associations were calculated to compare the different maps. About one fifth of the concepts emerged in both outgoing and incoming nurses' concept maps (concept similarity=23%±10.6). Concept accuracy indexes were 35%±18.8 for incoming and 62%±19.6 for outgoing nurses' maps. Although incoming nurses absorbed fewer concepts and associations (23% and 12%, respectively), they partially closed the gap (35% and 22%, respectively) relative to expert nurses' maps. The correlations between concept similarities and incoming as well as outgoing nurses' concept accuracy were significant (r=0.43, p<0.01; r=0.68, p<0.01, respectively). Finally, in 90% of the maps, outgoing nurses added information concerning the processes enacted during the shift, beyond the expert nurses' gold standard. Two seemingly contradictory processes in the handover were identified: "information loss", captured by the low similarity indexes among the mental models of incoming and outgoing nurses; and "information restoration", based on the accuracy indexes of the mental models of the incoming nurses. Based on mental model theory, we propose possible explanations for these processes and derive implications for how to improve a clinical handover.

  1. Similarities between obesity in pets and children: the addiction model.

    Science.gov (United States)

    Pretlow, Robert A; Corbee, Ronald J

    2016-09-01

    Obesity in pets is a frustrating, major health problem. Obesity in human children is similar. Prevailing theories accounting for the rising obesity rates - for example, poor nutrition and sedentary activity - are being challenged. Obesity interventions in both pets and children have produced modest short-term but poor long-term results. New strategies are needed. A novel theory posits that obesity in pets and children is due to 'treats' and excessive meal amounts given by the 'pet-parent' and child-parent to obtain affection from the pet/child, which enables 'eating addiction' in the pet/child and results in parental 'co-dependence'. Pet-parents and child-parents may even become hostage to the treats/food to avoid the ire of the pet/child. Eating addiction in the pet/child also may be brought about by emotional factors such as stress, independent of parental co-dependence. An applicable treatment for child obesity has been trialled using classic addiction withdrawal/abstinence techniques, as well as behavioural addiction methods, with significant results. Both the child and the parent progress through withdrawal from specific 'problem foods', next from snacking (non-specific foods) and finally from excessive portions at meals (gradual reductions). This approach should adapt well for pets and pet-parents. Pet obesity is more 'pure' than child obesity, in that contributing factors and treatment points are essentially under the control of the pet-parent. Pet obesity might thus serve as an ideal test bed for the treatment and prevention of child obesity, with focus primarily on parental behaviours. Sharing information between the fields of pet and child obesity would be mutually beneficial.

  2. A little similarity goes a long way: the effects of peripheral but self-revealing similarities on improving and sustaining interracial relationships.

    Science.gov (United States)

    West, Tessa V; Magee, Joe C; Gordon, Sarah H; Gullett, Lindy

    2014-07-01

Integrating theory on close relationships and intergroup relations, we construct a manipulation of similarity that we demonstrate can improve interracial interactions across different settings. We find that manipulating perceptions of similarity on self-revealing attributes that are peripheral to the interaction improves interactions in cross-race dyads and racially diverse task groups. In a getting-acquainted context, we demonstrate that the belief that one's different-race partner is similar to oneself on self-revealing, peripheral attributes leads to less anticipatory anxiety than the belief that one's partner is similar on peripheral, non-self-revealing attributes. In another dyadic context, we explore the range of benefits that perceptions of peripheral, self-revealing similarity can bring to different-race interaction partners and find (a) less anxiety during interaction, (b) greater interest in sustained contact with one's partner, and (c) stronger accuracy in perceptions of one's partner's relationship intentions. By contrast, participants in same-race interactions were largely unaffected by these manipulations of perceived similarity. Our final experiment shows that among small task groups composed of racially diverse individuals, those whose members perceive peripheral, self-revealing similarity outperform those who perceive dissimilarity. Implications for using this approach to improve interracial interactions across different goal-driven contexts are discussed.

  3. Lower- Versus Higher-Income Populations In The Alternative Quality Contract: Improved Quality And Similar Spending.

    Science.gov (United States)

    Song, Zirui; Rose, Sherri; Chernew, Michael E; Safran, Dana Gelb

    2017-01-01

As population-based payment models become increasingly common, it is crucial to understand how such payment models affect health disparities. We evaluated health care quality and spending among enrollees in areas with lower versus higher socioeconomic status in Massachusetts before and after providers entered into the Alternative Quality Contract, a two-sided population-based payment model with substantial incentives tied to quality. We compared changes in process measures, outcome measures, and spending between enrollees in areas with lower and higher socioeconomic status from 2006 to 2012 (outcome measures were measured after the intervention only). Quality improved for all enrollees in the Alternative Quality Contract after their provider organizations entered the contract. Process measures improved 1.2 percentage points per year more among enrollees in areas with lower socioeconomic status than among those in areas with higher socioeconomic status. Outcome measure improvement was no different between the subgroups; neither were changes in spending. Larger or comparable improvements in quality among enrollees in areas with lower socioeconomic status suggest a potential narrowing of disparities. Strong pay-for-performance incentives within a population-based payment model could encourage providers to focus on improving quality for more disadvantaged populations.

  4. Improving Zernike moments comparison for optimal similarity and rotation angle retrieval.

    Science.gov (United States)

    Revaud, Jérôme; Lavoué, Guillaume; Baskurt, Atilla

    2009-04-01

    Zernike moments constitute a powerful shape descriptor in terms of robustness and description capability. However the classical way of comparing two Zernike descriptors only takes into account the magnitude of the moments and loses the phase information. The novelty of our approach is to take advantage of the phase information in the comparison process while still preserving the invariance to rotation. This new Zernike comparator provides a more accurate similarity measure together with the optimal rotation angle between the patterns, while keeping the same complexity as the classical approach. This angle information is particularly of interest for many applications, including 3D scene understanding through images. Experiments demonstrate that our comparator outperforms the classical one in terms of similarity measure. In particular the robustness of the retrieval against noise and geometric deformation is greatly improved. Moreover, the rotation angle estimation is also more accurate than state-of-the-art algorithms.
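The phase-aware idea above can be sketched compactly: rotating a pattern by an angle theta multiplies the moment of angular repetition m by exp(-i*m*theta), so a rotation-invariant similarity can keep the phase by searching for the theta that best aligns the two descriptors. The flat m-to-moment layout and the brute-force angle search below are illustrative simplifications, not the paper's closed-form comparator.

```python
import cmath
import math

def phase_similarity(m1, m2, steps=3600):
    """Phase-aware comparison of two Zernike descriptors.

    m1, m2 map the angular repetition m to a complex moment A_m
    (a simplified, hypothetical layout; real descriptors are indexed
    by order and repetition (n, m)). Rotating a pattern by theta
    multiplies A_m by exp(-i*m*theta), so we search for the rotation
    that best aligns the two descriptors.
    Returns (best_correlation, best_angle_in_radians).
    """
    best_s, best_theta = -float("inf"), 0.0
    for k in range(steps):
        theta = 2.0 * math.pi * k / steps
        # real part of the correlation between m1 and m2 rotated by theta
        s = sum((m1[m] * (m2[m] * cmath.exp(-1j * m * theta)).conjugate()).real
                for m in m1)
        if s > best_s:
            best_s, best_theta = s, theta
    return best_s, best_theta
```

The returned angle doubles as the optimal rotation estimate between the patterns, which is the extra information the classical magnitude-only comparison discards.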

  5. Improvement of training set structure in fusion data cleaning using Time-Domain Global Similarity method

    International Nuclear Information System (INIS)

    Liu, J.; Lan, T.; Qin, H.

    2017-01-01

Traditional data cleaning identifies dirty data by classifying original data sequences, which is a class-imbalanced problem since the proportion of incorrect data is much smaller than the proportion of correct data for most diagnostic systems in Magnetic Confinement Fusion (MCF) devices. When machine learning algorithms classify diagnostic data based on a class-imbalanced training set, most classifiers are biased towards the majority class and show very poor classification rates on the minority class. By transforming the direct classification of original data sequences into a classification of the physical similarity between data sequences, this paper investigates the class-balancing effect of the Time-Domain Global Similarity (TDGS) method on training set structure. Meanwhile, the impact of the improved training set structure on the data cleaning performance of the TDGS method is demonstrated with an application example in the EAST POlarimetry-INTerferometry (POINT) system.

  6. Patient Similarity in Prediction Models Based on Health Data: A Scoping Review

    Science.gov (United States)

    Sharafoddini, Anis; Dubin, Joel A

    2017-01-01

    data, wavelet transform and term frequency-inverse document frequency methods were employed to extract predictors. Selecting predictors with potential to highlight special cases and defining new patient similarity metrics were among the gaps identified in the existing literature that provide starting points for future work. Patient status prediction models based on patient similarity and health data offer exciting potential for personalizing and ultimately improving health care, leading to better patient outcomes. PMID:28258046

  7. An improved DPSO with mutation based on similarity algorithm for optimization of transmission lines loading

    International Nuclear Information System (INIS)

    Shayeghi, H.; Mahdavi, M.; Bagheri, A.

    2010-01-01

The static transmission network expansion planning (STNEP) problem plays a principal role in power system planning and should be evaluated carefully. Various methods have been presented to solve the STNEP problem, but only one of them considers the lines' adequacy rate at the end of the planning horizon, optimizing the problem by discrete particle swarm optimization (DPSO). DPSO is a population-based intelligence algorithm that performs well on large-scale, discrete and non-linear optimization problems like STNEP. However, as the algorithm runs, the particles become more and more similar and cluster around the best particle in the swarm, which makes the swarm converge prematurely to a local solution. To overcome these drawbacks while considering the lines' adequacy rate, in this paper expansion planning is implemented by merging a line-loading parameter into the STNEP and inserting investment cost into the fitness function constraints, using an improved DPSO algorithm. The proposed improved DPSO introduces a new conception, collectivity, based on the similarity between each particle and the current global best particle in the swarm, which prevents the premature convergence of DPSO around a local solution. The proposed method has been tested on Garver's network and a real transmission network in Iran, and compared with the DPSO-based method for solving the TNEP problem. The results show that, by preventing premature convergence, the proposed method considerably increases network adequacy at almost the same expansion cost. The convergence curves of both methods also show that the proposed algorithm solves the STNEP problem more precisely than the DPSO approach.
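A minimal sketch of the similarity-driven diversity mechanism the abstract describes: measure how close each binary particle is to the current global best, and mutate particles that have clustered onto it. The bit-fraction similarity, threshold, and mutation rate below are assumptions for illustration, not the paper's exact operators.

```python
import random

def similarity(particle, gbest):
    """Fraction of matching bits between a particle and the global best."""
    return sum(p == g for p, g in zip(particle, gbest)) / len(particle)

def mutate_if_similar(particle, gbest, threshold=0.9, rate=0.2, rng=None):
    """Flip random bits when the particle has clustered onto gbest,
    restoring swarm diversity (threshold and rate are illustrative)."""
    rng = rng or random.Random()
    if similarity(particle, gbest) >= threshold:
        return [1 - b if rng.random() < rate else b for b in particle]
    return particle
```

Applied each iteration before the velocity update, such a check keeps the swarm from collapsing onto a single local solution.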

  8. Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure

    Directory of Open Access Journals (Sweden)

    Wen Zhang

    2016-01-01

Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) was proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is usually criticized for low discriminative power in representing documents, although its representative quality has been validated as good. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. Firstly, we survey existing linear algebra methods for LSI, including both SVD-based and non-SVD-based methods. Secondly, we propose SVD on clusters for LSI and theoretically explain that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved. Moreover, we develop updating processes to fold new documents and terms into a matrix decomposed by SVD on clusters. Thirdly, two corpora, a Chinese corpus and an English corpus, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of the interdocument similarity measure in comparison with other SVD-based LSI methods.
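For context, the traditional lexical matching that LSI and SVD on clusters aim to improve can be sketched as plain tf-idf vectors compared by cosine similarity: two documents score zero unless they share literal terms, which is exactly the polysemy/homonymy weakness the SVD-based methods address. This baseline sketch (not the paper's SVD-on-clusters method) might look like:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse tf-idf vectors for tokenized documents; this is the
    lexical-matching baseline that LSI refines with SVD."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency per term
    n = len(docs)
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Documents with no overlapping vocabulary always score 0.0 here, even when they discuss the same topic in synonyms; projecting the vectors into a low-rank SVD space is what lets LSI-style methods recover such latent similarity.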

  9. Collaborative Filtering Recommendation Based on Trust Model with Fused Similar Factor

    Directory of Open Access Journals (Sweden)

    Ye Li

    2017-01-01

Recommender systems benefit e-commerce sites by providing customers with product information and recommendations, and they are currently widely used in many fields. In an era of information explosion, the key challenge for a recommender system is to obtain valid information from the tremendous amount available and produce high-quality recommendations. However, when facing large amounts of information, the traditional collaborative filtering algorithm usually suffers a high degree of sparseness, which ultimately leads to low-accuracy recommendations. To tackle this issue, we propose a novel algorithm named Collaborative Filtering Recommendation Based on Trust Model with Fused Similar Factor, which is based on the trust model combined with user similarity. The novel algorithm takes into account the degree of interest overlap between two users and outperforms the recommendation based on the trust model alone in terms of Precision, Recall, Diversity and Coverage. Additionally, the proposed model can effectively improve the efficiency of the collaborative filtering algorithm and achieve high performance.
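The fusion of trust and similarity the abstract describes can be illustrated with a simple weighted-average predictor; the linear blend and the alpha weight below are illustrative assumptions rather than the paper's exact formulation:

```python
def fused_weight(sim, trust, alpha=0.5):
    """Blend user-user similarity with a trust score (alpha illustrative)."""
    return alpha * sim + (1 - alpha) * trust

def predict_rating(target, item, ratings, sims, trusts, alpha=0.5):
    """Weighted average of neighbours' ratings for `item`, with weights
    fusing similarity and trust. A minimal sketch: `ratings` maps user ->
    {item: rating}; `sims`/`trusts` map neighbour -> score for `target`."""
    num = den = 0.0
    for user, user_ratings in ratings.items():
        if user == target or item not in user_ratings:
            continue
        w = fused_weight(sims.get(user, 0.0), trusts.get(user, 0.0), alpha)
        num += w * user_ratings[item]
        den += abs(w)
    return num / den if den else None
```

Because trust can be propagated between users who rated no common items, the fused weight stays defined even where pure similarity-based collaborative filtering fails from sparsity.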

  10. Similarity-based search of model organism, disease and drug effect phenotypes

    KAUST Repository

    Hoehndorf, Robert; Gruenberger, Michael; Gkoutos, Georgios V; Schofield, Paul N

    2015-01-01

    Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions

  11. Embedding Term Similarity and Inverse Document Frequency into a Logical Model of Information Retrieval.

    Science.gov (United States)

    Losada, David E.; Barreiro, Alvaro

    2003-01-01

    Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…

  12. Similarity Assessment of Land Surface Model Outputs in the North American Land Data Assimilation System

    Science.gov (United States)

    Kumar, Sujay V.; Wang, Shugong; Mocko, David M.; Peters-Lidard, Christa D.; Xia, Youlong

    2017-11-01

Multimodel ensembles are often used to produce ensemble mean estimates that tend to have increased simulation skill over any individual model output. If multimodel outputs are too similar, an individual land surface model (LSM) would add little additional information to the multimodel ensemble, whereas if the models are too dissimilar, it may be indicative of systematic errors in their formulations or configurations. The article presents a formal similarity assessment of the North American Land Data Assimilation System (NLDAS) multimodel ensemble outputs to assess their utility to the ensemble, using a confirmatory factor analysis. Outputs from four NLDAS Phase 2 models currently running in operations at NOAA/NCEP and four new/upgraded models that are under consideration for the next phase of NLDAS are employed in this study. The results show that the runoff estimates from the LSMs were most dissimilar, whereas the models showed greater similarity for root zone soil moisture, snow water equivalent, and terrestrial water storage. Generally, the NLDAS operational models showed weaker association with the common factor of the ensemble and the newer versions of the LSMs showed stronger association with the common factor, with model similarity increasing at longer time scales. Trade-offs between the similarity metrics and accuracy measures indicated that the NLDAS operational models demonstrate a larger span in the similarity-accuracy space compared to the new LSMs. The results of the article indicate that simultaneous consideration of model similarity and accuracy at the relevant time scales is necessary in the development of multimodel ensembles.

  13. A new k-epsilon model consistent with Monin-Obukhov similarity theory

    DEFF Research Database (Denmark)

    van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.

    2017-01-01

A new k-ε model is introduced that is consistent with Monin-Obukhov similarity theory (MOST). The proposed k-ε model is compared with another k-ε model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-ε model is not consistent with ...

  14. Attitude Similarity and Therapist Credibility as Predictors of Attitude Change and Improvement in Psychotherapy

    Science.gov (United States)

    Beutler, Larry E.; And Others

    1975-01-01

    This study attempts to (1) assess the effects of therapist credibility and patient-therapist similarity on interpersonal persuasion; and (2) to further assess the relationship between patient attitude change and psychotherapy outcome. (HMV)

  15. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection.

    Science.gov (United States)

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing

    2015-01-01

Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used to reduce the dimensionality of features and speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measures of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. Hence, we first design a similarity measure between the context information to take word co-occurrences and phrase chunks around the features into account. Then we introduce the similarity of context information into the importance measure of the features to substitute for document and term frequency. We thus propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.

  16. Similarity-based search of model organism, disease and drug effect phenotypes

    KAUST Repository

    Hoehndorf, Robert

    2015-02-19

    Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions, druggable therapeutic targets, and determination of pathogenicity. Results: We have developed PhenomeNET 2, a system that enables similarity-based searches over a large repository of phenotypes in real-time. It can be used to identify strains of model organisms that are phenotypically similar to human patients, diseases that are phenotypically similar to model organism phenotypes, or drug effect profiles that are similar to the phenotypes observed in a patient or model organism. PhenomeNET 2 is available at http://aber-owl.net/phenomenet. Conclusions: Phenotype-similarity searches can provide a powerful tool for the discovery and investigation of molecular mechanisms underlying an observed phenotypic manifestation. PhenomeNET 2 facilitates user-defined similarity searches and allows researchers to analyze their data within a large repository of human, mouse and rat phenotypes.

  17. A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2016-06-01

This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods that can only measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
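Under the definitions above, a minimal unweighted sketch of DistMC and SimMC (brute-force nearest neighbours, uniform weights; the paper's distance-weighted strategy is omitted) might look like:

```python
import math

def dist_model_to_cloud(model_pts, cloud):
    """DistMC sketch: mean nearest-neighbour distance from points sampled
    on the model surface to the point cloud (uniform weights assumed)."""
    return sum(min(math.dist(p, q) for q in cloud)
               for p in model_pts) / len(model_pts)

def sim_mc(surface_area, model_pts, cloud, eps=1e-9):
    """SimMC: ratio of model surface area to DistMC; eps guards against
    division by zero when the cloud lies exactly on the model surface."""
    return surface_area / (dist_model_to_cloud(model_pts, cloud) + eps)
```

A cloud that hugs the model surface yields a small DistMC and hence a large SimMC; in practice the brute-force inner loop would be replaced by a spatial index such as a k-d tree.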

  18. Effect of similar elements on improving glass-forming ability of La-Ce-based alloys

    International Nuclear Information System (INIS)

    Zhang Tao; Li Ran; Pang Shujie

    2009-01-01

To date the effect of unlike component elements on the glass-forming ability (GFA) of alloys has been studied extensively, and it is generally recognized that the main constituent elements of alloys with high GFA usually differ greatly in atomic size and atomic interaction (large negative heat of mixing). In our recent work, a series of rare earth metal-based alloy compositions with superior GFA were found through the approach of coexisting similar constituent elements. The quinary (La0.5Ce0.5)65Al10(Co0.6Cu0.4)25 bulk metallic glass (BMG) was synthesized by tilt-pour casting in rod form with a diameter up to 32 mm, a glass-forming ability significantly higher than that of ternary Ln-Al-TM alloys (Ln = La or Ce; TM = Co or Cu), whose critical diameters for glass formation are several millimeters. We suggest that the strong frustration of crystallization achieved by utilizing the coexistence of La-Ce and Co-Cu to complicate the competing crystalline phases helps construct BMG components with superior GFA. The results of our present work indicate that similar elements (elements with similar atomic size and chemical properties) have a significant effect on the GFA of alloys.

  19. Advanced Models and Algorithms for Self-Similar IP Network Traffic Simulation and Performance Analysis

    Science.gov (United States)

    Radev, Dimitar; Lokshina, Izabella

    2010-11-01

The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequential generators and fixed-length sequence generators, and efficient algorithms that are used to simulate self-similar behavior of IP network traffic data, are developed and applied. Numerical examples are provided, and simulation results are obtained and analyzed.
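One standard way to simulate such self-similar traffic is to superpose many ON/OFF sources whose ON and OFF period lengths are heavy-tailed; the aggregate is asymptotically self-similar, unlike Poisson traffic. The sketch below uses this classical construction with illustrative parameters; it is not taken from the paper's specific generators.

```python
import random

def pareto_on_off_traffic(n_slots, n_sources=50, alpha=1.5, seed=42):
    """Aggregate traffic (packets per time slot) from ON/OFF sources with
    Pareto-distributed period lengths (1 < alpha < 2 gives heavy tails
    and hence asymptotic self-similarity of the superposition)."""
    rng = random.Random(seed)
    traffic = [0] * n_slots
    for _ in range(n_sources):
        t, on = 0, rng.random() < 0.5   # random initial phase
        while t < n_slots:
            period = max(1, int(rng.paretovariate(alpha)))
            if on:
                for k in range(t, min(t + period, n_slots)):
                    traffic[k] += 1     # one packet per slot while ON
            t += period
            on = not on
    return traffic
```

Estimating the Hurst parameter of the resulting series (e.g. by variance-time or R/S analysis) should yield H noticeably above the 0.5 expected of Poisson-like traffic.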

  20. A study on the quantitative model of human response time using the amount and the similarity of information

    International Nuclear Information System (INIS)

    Lee, Sung Jin

    2006-02-01

The mental capacity to retain or recall information, or memory, is related to human performance during the processing of information. Although a large number of studies have been carried out on human performance, little is known about the similarity effect. The purpose of this study was to propose and validate a quantitative, predictive model of human response time in the user interface, built on the basic concepts of information amount, similarity and degree of practice. Human performance is difficult to explain by similarity or information amount alone, and two difficulties arose: constructing a quantitative model of human response time and validating the proposed model experimentally. A quantitative model based on Hick's law, the law of practice and similarity theory was developed. The model was validated under various experimental conditions by measuring the participants' response time in a computer-based display environment. Human performance in the user interface improved with the degree of similarity and practice. We also found that human performance degraded with age. The proposed model may be useful for training operators who will handle such interfaces and for predicting human performance as system designs change.
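A toy version of such a model can be written directly from the three ingredients named in the abstract: Hick's law for information amount, the power law of practice, and a similarity discount. The functional form and all coefficients below are illustrative assumptions, not the study's fitted model.

```python
import math

def response_time(n_items, n_trials, similarity=0.0, a=0.2, b=0.15, r=0.3):
    """Hypothetical response-time model (seconds).

    Hick's law: base time grows with the information amount log2(n+1).
    Law of practice: time falls as a power of the number of trials.
    Similarity discount in [0, 1): more similar interface elements are
    assumed to speed responses, per the study's qualitative finding.
    """
    hick = a + b * math.log2(n_items + 1)      # information amount term
    practice = n_trials ** (-r)                # power law of practice
    return hick * practice * (1.0 - similarity)
```

The sketch reproduces the qualitative behaviors reported: response time rises with the number of alternatives and falls with both practice and similarity.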

  1. Endocrinology Telehealth Consultation Improved Glycemic Control Similar to Face-to-Face Visits in Veterans.

    Science.gov (United States)

    Liu, Winnie; Saxon, David R; McNair, Bryan; Sanagorski, Rebecca; Rasouli, Neda

    2016-09-01

Rates of diabetes for veterans who receive health care through the Veterans Health Administration are higher than rates in the general population. Furthermore, many veterans live in rural locations, far from Veterans Affairs (VA) hospitals, limiting their ability to readily seek face-to-face endocrinology care for diabetes. Telehealth (TH) technologies present an opportunity to improve access to specialty diabetes care for such patients; however, there is a lack of evidence regarding the ability of TH to improve glycemic control in comparison to traditional face-to-face consultations. This was a retrospective cohort study of all new endocrinology diabetes consultations at the Denver VA Medical Center over a 1-year period. A total of 189 patients were included in the analysis. In all, 85 patients had received face-to-face (FTF) endocrinology consultation for diabetes and 104 patients had received TH consultation. Subjects were mostly males (94.7%) and the mean age was 62.8 ± 10.1 years old. HbA1c improved from 9.76% (9.40% to 10.11%) to 8.55% (8.20% to 8.91%) (P …). Endocrinology TH consultations improved short-term glycemic control as effectively as traditional FTF visits in a veteran population with diabetes. © 2016 Diabetes Technology Society.

  2. Modeling of scroll compressors - Improvements

    Energy Technology Data Exchange (ETDEWEB)

    Duprez, Marie-Eve; Dumont, Eric; Frere, Marc [Thermodynamics Department, Universite de Mons - Faculte Polytechnique, 31 bd Dolez, 7000 Mons (Belgium)

    2010-06-15

This paper presents an improvement of the scroll compressor model previously published by. This improved model allows the calculation of refrigerant mass flow rate, power consumption and the heat flow rate that would be released at the condenser of a heat pump equipped with the compressor, from knowledge of the operating conditions and parameters. Both the basic and improved models have been tested on scroll compressors using different refrigerants. This study has been limited to compressors with a maximum electrical power of 14 kW and to evaporation temperatures ranging from -40 to 15 °C and condensation temperatures from 10 to 75 °C. The average discrepancies on mass flow rate, power consumption and heat flow rate are respectively 0.50%, 0.93% and 3.49%. Using a global parameter determination (based on data for several refrigerants), this model can predict the behavior of a compressor with another fluid for which no manufacturer data are available. (author)

  3. Spatio-Temporal Evaluation and Comparison of the MM5 Model using a Similarity Algorithm

    Directory of Open Access Journals (Sweden)

    N. Siabi

    2016-02-01

    Full Text Available Introduction: Temporal and spatial changes of meteorological and environmental variables are very important. These changes can be predicted by numerical prediction models over time and at different locations, and can be provided as spatial zoning maps with interpolation methods such as geostatistics (16, 6). But such maps can only be compared visually, qualitatively and one at a time, for a limited number of maps (15). To resolve this problem the similarity algorithm is used. This algorithm is a method for the simultaneous comparison of a large number of data (18). Numerical prediction models such as MM5 have been used in different studies (10, 22, 23), but little research has been done to quantitatively compare the spatio-temporal similarity of model output with real data. The purpose of this paper is to integrate geostatistical techniques with the similarity algorithm to compare the spatial and temporal MM5 model predictions with real data. Materials and Methods: The study area is northeast Iran, spanning 55 to 61 degrees longitude and 30 to 38 degrees latitude. Actual monthly and annual temperature and precipitation data for the period 1990-2010 were obtained from the Meteorological Agency and the Department of Energy. MM5 model data, with a spatial resolution of 0.5 × 0.5 degrees, were downloaded from the NASA website (5). GS+ and ArcGIS software were used to produce a map for each variable. We used the multivariate methods of co-kriging and kriging with an external drift, applying topography and height as a secondary variable via a Digital Elevation Model (6, 12, 14). Then the standardization and similarity algorithms (9, 11) were applied, by programming in MATLAB, to each map grid point. The spatial and temporal similarities between the data collections and the model results were expressed as F values. These values are between 0 and 0.5, where a value below 0.2 indicates good similarity and a value above 0.5 shows very poor similarity. The results were plotted on maps by MATLAB

  4. Models for discrete-time self-similar vector processes with application to network traffic

    Science.gov (United States)

    Lee, Seungsin; Rao, Raghuveer M.; Narasimha, Rajesh

    2003-07-01

    The paper defines self-similarity for vector processes by employing the discrete-time continuous-dilation operation which has successfully been used previously by the authors to define 1-D discrete-time stochastic self-similar processes. To define self-similarity of vector processes, it is required to consider the cross-correlation functions between different 1-D processes as well as the autocorrelation function of each constituent 1-D process in it. System models to synthesize self-similar vector processes are constructed based on the definition. With these systems, it is possible to generate self-similar vector processes from white noise inputs. An important aspect of the proposed models is that they can be used to synthesize various types of self-similar vector processes by choosing proper parameters. Additionally, the paper presents evidence of vector self-similarity in two-channel wireless LAN data and applies the aforementioned systems to simulate the corresponding network traffic traces.

  5. Analysis of metal forming processes by using physical modeling and new plastic similarity condition

    International Nuclear Information System (INIS)

    Gronostajski, Z.; Hawryluk, M.

    2007-01-01

    In recent years many advances have been made in numerical methods for linear and non-linear problems. However, their success depends very much on the correctness of the problem formulation and the availability of the input data. The validity of theoretical results can be verified by experiments using real or soft materials. An essential reduction of the time and cost of the experiment can be obtained by using soft materials, which behave in a way analogous to that of real metal during deformation. The advantages of using soft materials are closely connected with a flow stress 500 to 1000 times lower than that of real materials. The accuracy of physical modeling depends on the similarity conditions between the physical model and the real process. The most important similarity conditions are material similarity in the range of plastic and elastic deformation, and geometrical, frictional and thermal similarities. A new original plastic similarity condition for the physical modeling of metal forming processes is proposed in the paper. It is based on a mathematical description of the similarity of the flow stress curves of soft materials and real ones

  6. Quality assessment of protein model-structures based on structural and functional similarities.

    Science.gov (United States)

    Konopka, Bogumil M; Nebel, Jean-Christophe; Kotulska, Malgorzata

    2012-09-21

    Experimental determination of protein 3D structures is expensive, time consuming and sometimes impossible. The gap between the number of protein structures deposited in the Worldwide Protein Data Bank and the number of sequenced proteins constantly broadens. Computational modeling is deemed to be one of the ways to deal with the problem. Although protein 3D structure prediction is a difficult task, many tools are available. These tools can model a structure from a sequence or from partial structural information, e.g. contact maps. Consequently, biologists have the ability to automatically generate a putative 3D structure model of any protein. However, the main issue becomes evaluation of the model quality, which is one of the most important challenges of structural biology. GOBA (Gene Ontology-Based Assessment) is a novel protein model quality assessment program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high-quality model is expected to be structurally similar to proteins functionally similar to the prediction target. Whereas DALI is used to measure structure similarity, protein functional similarity is quantified using the standardized and hierarchical description of proteins provided by the Gene Ontology, combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures. One is a single-model quality assessment method; the other is its modification, which provides a relative measure of model quality. Exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. The validation shows that the method is able to discriminate between good and bad model-structures. The best of the tested GOBA scores achieved mean Pearson correlations of 0.74 and 0.8 to the observed quality of models in our CASP8- and CASP9-based validation sets. 
GOBA also obtained the best result for two targets of CASP8, and

  7. LDA-Based Unified Topic Modeling for Similar TV User Grouping and TV Program Recommendation.

    Science.gov (United States)

    Pyo, Shinjee; Kim, Eunhui; Kim, Munchurl

    2015-08-01

    Social TV is a social media service via TV and social networks through which TV users exchange their experiences about the TV programs they are viewing. For social TV service, two technical aspects are envisioned: grouping of similar TV users to create social TV communities, and recommending TV programs based on group and personal interests for personalizing TV. In this paper, we propose a unified topic model for grouping similar TV users and recommending TV programs as a social TV service. The proposed unified topic model employs two latent Dirichlet allocation (LDA) models. One is a topic model of TV users, and the other is a topic model of the description words for viewed TV programs. The two LDA models are then integrated via a topic proportion parameter for TV programs, which enforces the grouping of similar TV users and of the associated description words for watched TV programs at the same time in a unified topic modeling framework. The unified model identifies the semantic relation between TV user groups and TV program description word groups so that more meaningful TV program recommendations can be made. The unified topic model also overcomes the item ramp-up problem, so that new TV programs can be reliably recommended to TV users. Furthermore, from the topic model of TV users, TV users with similar tastes can be grouped as topics, which can then be recommended as social TV communities. To verify the proposed method of unified topic-model-based TV user grouping and TV program recommendation for social TV services, our experiments used real TV viewing history data and electronic program guide data from a seven-month period collected by a TV poll agency. The experimental results show that the proposed unified topic model yields an average precision of 81.4% for 50 topics in TV program recommendation, which is on average 6.5% higher than that of the topic model of TV users only. 
For TV user prediction with new TV programs, the average

  8. From epidemics to information propagation: Striking differences in structurally similar adaptive network models

    Science.gov (United States)

    Trajanovski, Stojan; Guo, Dongchao; Van Mieghem, Piet

    2015-09-01

    The continuous-time adaptive susceptible-infected-susceptible (ASIS) epidemic model and the adaptive information diffusion (AID) model are two adaptive spreading processes on networks, in which a link in the network changes depending on the infectious state of its end nodes, but in opposite ways: (i) in the ASIS model a link is removed between two nodes if exactly one of the nodes is infected, to suppress the epidemic, while a link is created in the AID model to speed up the information diffusion; (ii) a link is created between two susceptible nodes in the ASIS model to strengthen the healthy part of the network, while a link is broken in the AID model due to the lack of interest in informationless nodes. The ASIS and AID models may be considered as first-order models for cascades in real-world networks. While the ASIS model has been exploited in the literature, we show that the AID model is realistic by obtaining a good fit with Facebook data. Contrary to the common belief and intuition for such similar models, we show that the ASIS and AID models exhibit different but not opposite properties. Most remarkably, a unique metastable state always exists in the ASIS model, while there is an hourglass-shaped region of instability in the AID model. Moreover, the epidemic threshold is a linear function of the effective link-breaking rate in the AID model, while it is almost constant but noisy in the ASIS model.

  9. Genome-Wide Expression Profiling of Five Mouse Models Identifies Similarities and Differences with Human Psoriasis

    Science.gov (United States)

    Swindell, William R.; Johnston, Andrew; Carbajal, Steve; Han, Gangwen; Wohn, Christian; Lu, Jun; Xing, Xianying; Nair, Rajan P.; Voorhees, John J.; Elder, James T.; Wang, Xiao-Jing; Sano, Shigetoshi; Prens, Errol P.; DiGiovanni, John; Pittelkow, Mark R.; Ward, Nicole L.; Gudjonsson, Johann E.

    2011-01-01

    Development of a suitable mouse model would facilitate the investigation of pathomechanisms underlying human psoriasis and would also assist in development of therapeutic treatments. However, while many psoriasis mouse models have been proposed, no single model recapitulates all features of the human disease, and standardized validation criteria for psoriasis mouse models have not been widely applied. In this study, whole-genome transcriptional profiling is used to compare gene expression patterns manifested by human psoriatic skin lesions with those that occur in five psoriasis mouse models (K5-Tie2, imiquimod, K14-AREG, K5-Stat3C and K5-TGFbeta1). While the cutaneous gene expression profiles associated with each mouse phenotype exhibited statistically significant similarity to the expression profile of psoriasis in humans, each model displayed distinctive sets of similarities and differences in comparison to human psoriasis. For all five models, correspondence to the human disease was strong with respect to genes involved in epidermal development and keratinization. Immune and inflammation-associated gene expression, in contrast, was more variable between models as compared to the human disease. These findings support the value of all five models as research tools, each with identifiable areas of convergence to and divergence from the human disease. Additionally, the approach used in this paper provides an objective and quantitative method for evaluation of proposed mouse models of psoriasis, which can be strategically applied in future studies to score strengths of mouse phenotypes relative to specific aspects of human psoriasis. PMID:21483750

  10. Distance and Density Similarity Based Enhanced k-NN Classifier for Improving Fault Diagnosis Performance of Bearings

    Directory of Open Access Journals (Sweden)

    Sharif Uddin

    2016-01-01

    Full Text Available An enhanced k-nearest neighbor (k-NN) classification algorithm is presented, which uses a density-based similarity measure in addition to a distance-based similarity measure to improve the diagnostic performance in bearing fault diagnosis. Due to its use of a distance-based similarity measure alone, the classification accuracy of traditional k-NN deteriorates in the case of overlapping samples and outliers, and is highly susceptible to the neighborhood size, k. This study addresses these limitations by proposing the use of both distance- and density-based measures of similarity between training and test samples. The proposed k-NN classifier is used to enhance the diagnostic performance of a bearing fault diagnosis scheme, which classifies different fault conditions based upon hybrid feature vectors extracted from acoustic emission (AE) signals. Experimental results demonstrate that the proposed scheme, which uses the enhanced k-NN classifier, yields better diagnostic performance and is more robust to variations in the neighborhood size, k.
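    The record above describes the enhanced classifier only verbally. A minimal sketch of a k-NN vote that combines distance-based and density-based similarity follows; the specific weighting (product of the two measures) and the density definition are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def enhanced_knn_predict(X_train, y_train, x_test, k=5):
    """Classify x_test with a k-NN vote weighted by both distance-based
    and density-based similarity (illustrative sketch)."""
    dists = np.linalg.norm(X_train - x_test, axis=1)
    idx = np.argsort(dists)[:k]
    # Distance similarity: closer neighbors get larger weight.
    dist_sim = 1.0 / (1.0 + dists[idx])
    # Density similarity: neighbors lying in locally dense regions of the
    # training set get larger weight (inverse of the mean distance to their
    # own k nearest training samples, excluding themselves).
    density = []
    for i in idx:
        d = np.linalg.norm(X_train - X_train[i], axis=1)
        density.append(1.0 / (1.0 + np.sort(d)[1:k + 1].mean()))
    weights = dist_sim * np.array(density)
    # Weighted majority vote over the k neighbors' labels.
    votes = {}
    for w, label in zip(weights, y_train[idx]):
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```

    Weighting votes by both measures down-weights neighbors that are close to the test sample but lie in sparse regions of the training set, which is how overlapping samples and outliers are handled.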

  11. Experimental Study of Dowel Bar Alternatives Based on Similarity Model Test

    Directory of Open Access Journals (Sweden)

    Chichun Hu

    2017-01-01

    Full Text Available In this study, a small-scale accelerated loading test based on similarity theory and the Accelerated Pavement Analyzer was developed to evaluate dowel bars with different materials and cross-sections. A jointed concrete specimen consisting of one dowel was designed as the scaled model for the test, and each specimen was subjected to 864 thousand loading cycles. Deflections between jointed slabs were measured with dial indicators, and strains of the dowel bars were monitored with strain gauges. The load transfer efficiency, differential deflection, and dowel-concrete bearing stress for each case were calculated from these measurements. The test results indicated that the effect of the dowel modulus on load transfer efficiency can be characterized based on the similarity model test developed in the study. Moreover, the round steel dowel was found to have performance similar to that of a larger FRP dowel, and the elliptical dowel can be preferentially considered in practice.
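    The deflection-based quantities named in the record can be computed directly from the measured slab deflections. The sketch below uses the common deflection-based definitions (an assumption here, since the record does not give the paper's exact formulas): load transfer efficiency as the ratio of unloaded-slab to loaded-slab deflection, and differential deflection as their difference.

```python
def load_transfer_efficiency(d_loaded_mm, d_unloaded_mm):
    """Deflection-based load transfer efficiency across a joint, in percent.
    Common definition: LTE = d_unloaded / d_loaded * 100."""
    return 100.0 * d_unloaded_mm / d_loaded_mm

def differential_deflection(d_loaded_mm, d_unloaded_mm):
    """Relative vertical movement between the two slabs at the joint."""
    return d_loaded_mm - d_unloaded_mm
```

    A perfectly efficient joint (equal deflections on both sides) gives 100% LTE and zero differential deflection.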

  12. Improving practical atmospheric dispersion models

    International Nuclear Information System (INIS)

    Hunt, J.C.R.; Hudson, B.; Thomson, D.J.

    1992-01-01

    The new generation of practical atmospheric dispersion models (for short ranges, ≤ 30 km) is based on dispersion science and boundary-layer meteorology which have widespread international acceptance. In addition, recent improvements in computing power and the widespread availability of small powerful computers make it possible to have new regulatory models which are more complex than the previous generation, which were based on charts and simple formulae. This paper describes the basis of these models and how they have developed. Such models are needed to satisfy the urgent public demand for sound, justifiable and consistent environmental decisions. For example, it is preferable that the same models are used to simulate dispersion in different industries; in many countries at present different models are used for emissions from nuclear and fossil fuel power stations. The models should not be so simple as to be suspect, but neither should they be too complex for widespread use; for example, at public inquiries in Germany, where simple models are mandatory, it is becoming usual to cite the results from highly complex computational models because the simple models are not credible. This paper is written in a schematic style with an emphasis on tables and diagrams. (au) (22 refs.)

  13. Bianchi VI{sub 0} and III models: self-similar approach

    Energy Technology Data Exchange (ETDEWEB)

    Belinchon, Jose Antonio, E-mail: abelcal@ciccp.e [Departamento de Fisica, ETS Arquitectura, UPM, Av. Juan de Herrera 4, Madrid 28040 (Spain)

    2009-09-07

    We study several cosmological models with Bianchi VI{sub 0} and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive with respect to the equation of state. We also study a perfect fluid model with time-varying constants, G and LAMBDA. As in other studied models, we find that the behaviours of G and LAMBDA are related. If G behaves as a growing function of time then LAMBDA is a positive decreasing function of time, but if G is decreasing then LAMBDA{sub 0} is negative. We end by studying a massive cosmic string model, putting special emphasis on calculating the numerical values of the equations of state. We show that there is no self-similar solution for a string model with time-varying constants.

  14. Environmental niche models for riverine desert fishes and their similarity according to phylogeny and functionality

    Science.gov (United States)

    Whitney, James E.; Whittier, Joanna B.; Paukert, Craig P.

    2017-01-01

    Environmental filtering and competitive exclusion are hypotheses frequently invoked in explaining species' environmental niches (i.e., geographic distributions). A key assumption in both hypotheses is that the functional niche (i.e., species traits) governs the environmental niche, but few studies have rigorously evaluated this assumption. Furthermore, phylogeny could be associated with these hypotheses if it is predictive of functional niche similarity via phylogenetic signal or convergent evolution, or of environmental niche similarity through phylogenetic attraction or repulsion. The objectives of this study were to investigate relationships between environmental niches, functional niches, and phylogenies of fishes of the Upper (UCRB) and Lower (LCRB) Colorado River Basins of southwestern North America. We predicted that functionally similar species would have similar environmental niches (i.e., environmental filtering) and that closely related species would be functionally similar (i.e., phylogenetic signal) and possess similar environmental niches (i.e., phylogenetic attraction). Environmental niches were quantified using environmental niche modeling, and functional similarity was determined using functional trait data. Nonnatives in the UCRB provided the only support for environmental filtering, which resulted from several warmwater nonnatives having dam number as a common predictor of their distributions, whereas several cool- and coldwater nonnatives shared mean annual air temperature as an important distributional predictor. Phylogenetic signal was supported for both natives and nonnatives in both basins. Lastly, phylogenetic attraction was only supported for native fishes in the LCRB and for nonnative fishes in the UCRB. Our results indicated that functional similarity was heavily influenced by evolutionary history, but that phylogenetic relationships and functional traits may not always predict the environmental distribution of species. However, the

  15. Research on Kalman Filtering Algorithm for Deformation Information Series of Similar Single-Difference Model

    Institute of Scientific and Technical Information of China (English)

    LÜ Wei-cai; XU Shao-quan

    2004-01-01

    Using the similar single-difference methodology (SSDM) to solve for the deformation values of the monitoring points, the deformation information series is sometimes unstable. In order to overcome this shortcoming, a Kalman filtering algorithm for this series is established, and its correctness and validity are verified with test data obtained on a movable platform in the plane. The results show that Kalman filtering can improve the correctness, reliability and stability of the deformation information series.
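    As an illustration of the kind of filter involved, a minimal scalar Kalman filter with a random-walk state model is sketched below; the noise variances are illustrative assumptions, and the paper's actual state model for the SSDM deformation series may differ.

```python
def kalman_smooth(series, process_var=1e-4, meas_var=1e-2):
    """Scalar Kalman filter with a random-walk state model: smooths a noisy
    1-D deformation series and returns the filtered estimates."""
    x = series[0]      # initial state estimate
    p = 1.0            # initial estimate variance
    out = []
    for z in series:
        # Predict: random walk leaves the state unchanged, variance grows.
        p += process_var
        # Update with measurement z.
        k = p / (p + meas_var)          # Kalman gain
        x += k * (z - x)
        p *= (1.0 - k)
        out.append(x)
    return out
```

    Tuning `process_var` against `meas_var` trades responsiveness for smoothness: a smaller process variance trusts the model more and damps measurement noise harder.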

  16. Predictive modeling of human perception subjectivity: feasibility study of mammographic lesion similarity

    Science.gov (United States)

    Xu, Songhua; Hudson, Kathleen; Bradley, Yong; Daley, Brian J.; Frederick-Dyer, Katherine; Tourassi, Georgia

    2012-02-01

    The majority of clinical content-based image retrieval (CBIR) studies disregard human perception subjectivity, aiming to duplicate the consensus expert assessment of the visual similarity on example cases. The purpose of our study is twofold: i) discern better the extent of human perception subjectivity when assessing the visual similarity of two images with similar semantic content, and (ii) explore the feasibility of personalized predictive modeling of visual similarity. We conducted a human observer study in which five observers of various expertise were shown ninety-nine triplets of mammographic masses with similar BI-RADS descriptors and were asked to select the two masses with the highest visual relevance. Pairwise agreement ranged between poor and fair among the five observers, as assessed by the kappa statistic. The observers' self-consistency rate was remarkably low, based on repeated questions where either the orientation or the presentation order of a mass was changed. Various machine learning algorithms were explored to determine whether they can predict each observer's personalized selection using textural features. Many algorithms performed with accuracy that exceeded each observer's self-consistency rate, as determined using a cross-validation scheme. This accuracy was statistically significantly higher than would be expected by chance alone (two-tailed p-value ranged between 0.001 and 0.01 for all five personalized models). The study confirmed that human perception subjectivity should be taken into account when developing CBIR-based medical applications.
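    Pairwise observer agreement of the kind reported above is commonly assessed with Cohen's kappa. A minimal sketch of the standard statistic follows (the study's exact kappa variant is not stated in the record):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels of the same items:
    observed agreement corrected for agreement expected by chance."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb.get(k, 0) for k in ca) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)
```

    Values near 1 indicate strong agreement, values near 0 indicate agreement no better than chance, which is the scale on which "poor" to "fair" agreement is usually judged.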

  17. Cosmological model with anisotropic dark energy and self-similarity of the second kind

    International Nuclear Information System (INIS)

    Brandt, Carlos F. Charret; Silva, Maria de Fatima A. da; Rocha, Jaime F. Villas da; Chan, Roberto

    2006-01-01

    We study the evolution of an anisotropic fluid with self-similarity of the second kind. We find a class of solutions to the Einstein field equations by assuming an equation of state where the radial pressure of the fluid is proportional to its energy density (p_r = ωρ) and that the fluid moves along time-like geodesics. The equation of state and the anisotropy with self-similarity of the second kind imply ω = -1. The energy conditions and the geometrical and physical properties of the solutions are studied. We find that for the parameter α = -1/2, the solution may represent a Big Rip cosmological model. (author)

  18. On the scale similarity in large eddy simulation. A proposal of a new model

    International Nuclear Information System (INIS)

    Pasero, E.; Cannata, G.; Gallerano, F.

    2004-01-01

    Among the most common LES models present in the literature are the eddy viscosity-type models. In these models the subgrid scale (SGS) stress tensor is related to the resolved strain rate tensor through a scalar eddy viscosity coefficient. These models are affected by three fundamental drawbacks: they are purely dissipative, i.e. they cannot account for backscatter; they assume that the principal axes of the resolved strain rate tensor and the SGS stress tensor are aligned; and they assume that a local balance exists between the SGS turbulent kinetic energy production and its dissipation. Scale similarity models (SSM) were created to overcome the drawbacks of eddy viscosity-type models. The SSM models, such as that of Bardina et al. and that of Liu et al., assume that scales adjacent in wave number space present similar hydrodynamic features. This similarity makes it possible to effectively relate the unresolved scales, represented by the modified Cross tensor and the modified Reynolds tensor, to the smallest resolved scales, represented by the modified Leonard tensor or by a term obtained through multiple filtering operations at different scales. The models of Bardina et al. and Liu et al. are affected, however, by a fundamental drawback: they are not dissipative enough, i.e. they are not able to ensure a sufficient energy drain from the resolved scales of motion to the unresolved ones. In this paper it is shown that such a drawback is due to the fact that these models do not take into account the smallest unresolved scales, where most of the dissipation of turbulent SGS energy takes place. A new scale similarity LES model that is able to ensure an adequate drain of energy from the resolved scales to the unresolved ones is presented. The SGS stress tensor is aligned with the modified Leonard tensor. The coefficient of proportionality is expressed in terms of the trace of the modified Leonard tensor and in terms of the SGS kinetic energy (computed by solving its balance equation). 
The
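    One reading consistent with the verbal description above (an assumption, since the record gives no formulas) is that the proposed closure takes the form

```latex
% Sketch: SGS stress aligned with the modified Leonard tensor L^{m}_{ij},
% with the proportionality coefficient fixed by the SGS kinetic energy.
\tau_{ij} = C \, L^{m}_{ij}, \qquad
C = \frac{2\,k_{\mathrm{sgs}}}{L^{m}_{kk}},
```

    so that the trace of the modeled stress recovers \tau_{kk} = 2 k_{\mathrm{sgs}}, with k_{\mathrm{sgs}} obtained from its balance equation as the record states.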

  19. Conservation of connectivity of model-space effective interactions under a class of similarity transformation

    International Nuclear Information System (INIS)

    Duan Changkui; Gong Yungui; Dong Huining; Reid, Michael F.

    2004-01-01

    Effective interaction operators usually act on a restricted model space and give the same energies (for the Hamiltonian) and matrix elements (for transition operators, etc.) as those of the original operators between the corresponding true eigenstates. Various types of effective operators are possible. Well-defined effective operators have been shown to be related to each other by similarity transformations. Some of the effective operators have been shown to have connected-diagram expansions. It is shown in this paper that under a class of very general similarity transformations, the connectivity is conserved. The similarity transformation between Hermitian and non-Hermitian Rayleigh-Schroedinger perturbative effective operators is one such transformation, and hence the connectivity of one can be deduced from that of the other

  20. Conservation of connectivity of model-space effective interactions under a class of similarity transformation.

    Science.gov (United States)

    Duan, Chang-Kui; Gong, Yungui; Dong, Hui-Ning; Reid, Michael F

    2004-09-15

    Effective interaction operators usually act on a restricted model space and give the same energies (for the Hamiltonian) and matrix elements (for transition operators, etc.) as those of the original operators between the corresponding true eigenstates. Various types of effective operators are possible. Well-defined effective operators have been shown to be related to each other by similarity transformations. Some of the effective operators have been shown to have connected-diagram expansions. It is shown in this paper that under a class of very general similarity transformations, the connectivity is conserved. The similarity transformation between Hermitian and non-Hermitian Rayleigh-Schrodinger perturbative effective operators is one such transformation, and hence the connectivity of one can be deduced from that of the other.

  1. Model-free aftershock forecasts constructed from similar sequences in the past

    Science.gov (United States)

    van der Elst, N.; Page, M. T.

    2017-12-01

    The basic premise behind aftershock forecasting is that sequences in the future will be similar to those in the past. Forecast models typically use empirically tuned parametric distributions to approximate past sequences, and project those distributions into the future to make a forecast. While parametric models do a good job of describing average outcomes, they are not explicitly designed to capture the full range of variability between sequences, and can suffer from over-tuning of the parameters. In particular, parametric forecasts may produce a high rate of "surprises" - sequences that land outside the forecast range. Here we present a non-parametric forecast method that cuts out the parametric "middleman" between training data and forecast. The method is based on finding past sequences that are similar to the target sequence, and evaluating their outcomes. We quantify similarity as the Poisson probability that the observed event count in a past sequence reflects the same underlying intensity as the observed event count in the target sequence. Event counts are defined in terms of differential magnitude relative to the mainshock. The forecast is then constructed from the distribution of past sequences' outcomes, weighted by their similarity. We compare the similarity forecast with the Reasenberg and Jones (RJ95) method for a set of 2807 global aftershock sequences of M≥6 mainshocks. We implement a sequence-specific RJ95 forecast using a global average prior and Bayesian updating, but do not propagate epistemic uncertainty. The RJ95 forecast is somewhat more precise than the similarity forecast: 90% of observed sequences fall within a factor of two of the median RJ95 forecast value, whereas the fraction is 85% for the similarity forecast. However, the surprise rate is much higher for the RJ95 forecast; 10% of observed sequences fall in the upper 2.5% of the (Poissonian) forecast range. The surprise rate is less than 3% for the similarity forecast. 
The similarity
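    The similarity weighting described in the record can be sketched as follows: each past sequence is weighted by the Poisson probability of its observed event count, taking the target sequence's count as the underlying intensity. This is one plausible reading of the record's verbal description, not the authors' exact (possibly symmetrized) formulation.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Poisson probability mass function P(K = k | intensity lam)."""
    return lam ** k * exp(-lam) / factorial(k)

def similarity_weight(n_past, n_target):
    """Weight a past sequence by the Poisson probability of its event count
    if the target sequence's count were the true intensity (illustrative)."""
    lam = max(n_target, 1e-9)   # guard against a zero intensity
    return poisson_pmf(n_past, lam)

def similarity_forecast(past_counts, past_outcomes, n_target):
    """Similarity-weighted mean of past sequence outcomes; the full method
    would retain the whole weighted distribution, not just its mean."""
    w = [similarity_weight(n, n_target) for n in past_counts]
    return sum(wi * oi for wi, oi in zip(w, past_outcomes)) / sum(w)
```

    Sequences whose early event counts match the target's dominate the forecast, while dissimilar sequences are down-weighted rather than discarded.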

  2. Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models

    Directory of Open Access Journals (Sweden)

    Jin Dai

    2014-01-01

    Full Text Available The similarity between objects is a core research area of data mining. In order to reduce the interference of the uncertainty of natural language, a similarity measurement between normal cloud models is adopted for text classification research. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed. It can efficiently accomplish conversion between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the VSM model, the qualitative text concept extracted from the same category is jumped up as a whole category concept. According to the cloud similarity between the test text and each category concept, the test text is assigned to the most similar category. Comparison among different text classifiers on different feature selection sets fully proves that not only does CCJU-TC have a strong ability to adapt to different text features, but its classification performance is also better than that of the traditional classifiers.
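    A normal cloud model is conventionally characterized by three digital characteristics: expectation Ex, entropy En and hyper-entropy He. As a hedged illustration, the sketch below scores two concepts by the cosine similarity of these characteristic vectors; this is one of several cloud-similarity measures in the literature and not necessarily the one adopted in the paper.

```python
from math import sqrt

def cloud_similarity(c1, c2):
    """Cosine similarity between the digital characteristics (Ex, En, He)
    of two normal cloud models -- an illustrative measure only."""
    dot = sum(a * b for a, b in zip(c1, c2))
    n1 = sqrt(sum(a * a for a in c1))
    n2 = sqrt(sum(b * b for b in c2))
    return dot / (n1 * n2)
```

    A test text's cloud would then be compared against each category concept's cloud, and the text assigned to the category with the highest similarity score.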

  3. Improvements of evaporation drag model

    International Nuclear Information System (INIS)

    Li Xiaoyan; Yang Yanhua; Xu Jijun

    2004-01-01

    A special observable experiment facility has been established, and a series of experiments have been carried out on this facility by pouring one or several high-temperature particles into a water pool. The experiments have verified the evaporation drag model, which holds that the non-symmetric profiles of the local evaporation rate and the local density of the vapor bring about a resultant force on the hot particle that resists its motion. However, in Yang's evaporation drag model, radiation heat transfer is taken as the only way to transfer heat from the hot particle to the vapor-liquid interface, and all of the radiation energy is deposited on the vapor-liquid interface, thus contributing to the vaporization rate and the mass balance of the vapor film. Heat conduction and heat convection are therefore taken into account in the improved model. At the same time, the improved model given in this paper presents calculations of the effect of the hot particle temperature on the radiation absorption behavior of water

  4. Improvements in ECN Wake Model

    Energy Technology Data Exchange (ETDEWEB)

    Versteeg, M.C. [University of Twente, Enschede (Netherlands); Ozdemir, H.; Brand, A.J. [ECN Wind Energy, Petten (Netherlands)

    2013-08-15

    Wind turbines extract energy from the flow field, so the flow in the wake of a wind turbine contains less energy and more turbulence than the undisturbed flow, leading to less energy extraction by the downstream turbines. In large wind farms, most turbines are located in the wake of one or more other turbines, causing the flow characteristics felt by these turbines to differ considerably from the free stream flow conditions. The most important wake effect is generally considered to be the lower wind speed behind the turbine(s), since this decreases the energy production and as such the economic performance of a wind farm. The overall loss of a wind farm depends strongly on the conditions and the layout of the farm, but it can be in the order of 5-10%. Apart from the loss in energy production, an additional wake effect is the increase in turbulence intensity, which leads to higher fatigue loads. In this sense it becomes important to understand the details of wake behavior to improve and/or optimize a wind farm layout. Within this study, improvements are presented for the existing ECN wake model, which forms the basis of ECN's FarmFlow wind farm wake simulation tool. The outline of this paper is as follows: first, the governing equations of the ECN wake farm model are presented. Then the near wake modeling is discussed; the results are compared with the original near wake modeling and with EWTW (ECN Wind Turbine Test Site Wieringermeer) data, and the results obtained for various near wake implementation cases are shown. The details of the atmospheric stability model are given, and the comparison with the solution obtained for the original surface layer model and with the available EWTW measurement data is presented. Finally, the conclusions are summarized.

  5. An optimization model for improving highway safety

    Directory of Open Access Journals (Sweden)

    Promothes Saha

    2016-12-01

    Full Text Available This paper developed a traffic safety management system (TSMS) for improving safety on county paved roads in Wyoming. TSMS is a strategic and systematic process to improve the safety of a roadway network. When funding is limited, it is important to identify the best combination of safety improvement projects to provide the most benefit to society in terms of crash reduction. The factors included in the proposed optimization model are the annual safety budget, roadway inventory, roadway functional classification, historical crashes, safety improvement countermeasures, the costs and crash reduction factors (CRFs) associated with safety improvement countermeasures, and average daily traffic (ADT). This paper demonstrated how the proposed model can identify the best combination of safety improvement projects to maximize the safety benefits in terms of reducing overall crash frequency. Although the proposed methodology was implemented on the county paved road network of Wyoming, it could be easily modified for potential implementation on the Wyoming state highway system. Other states can also benefit by implementing a similar program within their jurisdictions.
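
The "best combination of projects under a fixed budget" selection described above has the structure of a 0/1 knapsack problem: maximize total expected crash reduction subject to the annual safety budget. A hypothetical sketch (project names, costs, and benefits are invented, and the paper's exact formulation may differ):

```python
def select_projects(projects, budget):
    """Dynamic-programming 0/1 knapsack over candidate safety-improvement
    projects. Each project is (name, integer cost in budget units,
    expected crash reduction). Returns (best benefit, chosen names)."""
    best = {0: (0.0, [])}          # spent amount -> (benefit, project names)
    for name, cost, benefit in projects:
        # iterate spent levels in descending order so each project is used once
        for spent in sorted(best, reverse=True):
            if spent + cost <= budget:
                cand = (best[spent][0] + benefit, best[spent][1] + [name])
                cur = best.get(spent + cost)
                if cur is None or cand[0] > cur[0]:
                    best[spent + cost] = cand
    return max(best.values())

# Illustrative candidate projects: (name, cost, expected crash reduction)
projects = [("rumble strips", 2, 3.0), ("guardrail", 3, 4.0), ("signage", 1, 1.5)]
benefit, chosen = select_projects(projects, budget=4)
```

With a budget of 4 units the sketch picks the guardrail and signage projects, whose combined benefit (5.5) beats any other affordable combination.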

  6. Hierarchical Model for the Similarity Measurement of a Complex Holed-Region Entity Scene

    Directory of Open Access Journals (Sweden)

    Zhanlong Chen

    2017-11-01

    Full Text Available Complex multi-holed-region entity scenes (i.e., sets of random regions with holes) are common in spatial database systems, spatial query languages, and Geographic Information Systems (GIS). A multi-holed-region (a region with an arbitrary number of holes) is an abstraction of the real world that primarily represents geographic objects that have more than one interior boundary, such as areas that contain several lakes or lakes that contain islands. When the similarity of two complex holed-region entity scenes is measured, the number of regions in the scenes and the number of holes in the regions usually differ between the two scenes, which complicates the matching relationships of holed-regions and holes. The aim of this research is to develop several holed-region similarity metrics and propose a hierarchical model to comprehensively measure the similarity between two complex holed-region entity scenes. The procedure first divides a complex entity scene into three layers: a complex scene, a micro-spatial-scene, and a simple entity (hole). The relationships between the adjacent layers are considered to be sets of relationships, and each level of similarity measurement is nested with the adjacent one. Next, entity matching is performed from top to bottom, while the similarity results are calculated from local to global. In addition, we utilize position graphs to describe the distribution of the holed-regions and subsequently describe the directions between the holes using a feature matrix. A case study that uses the Great Lakes in North America in 1986 and 2015 as experimental data illustrates the entire similarity measurement process between two complex holed-region entity scenes. The experimental results show that the hierarchical model accounts for the relationships of the different layers in the entire complex holed-region entity scene. The model can effectively calculate the similarity of complex holed-region entity scenes, even if the numbers of regions and holes differ between the two scenes.

  7. A self-similar model for conduction in the plasma erosion opening switch

    International Nuclear Information System (INIS)

    Mosher, D.; Grossmann, J.M.; Ottinger, P.F.; Colombant, D.G.

    1987-01-01

    The conduction phase of the plasma erosion opening switch (PEOS) is characterized by combining a 1-D fluid model for plasma hydrodynamics, Maxwell's equations, and a 2-D electron-orbit analysis. A self-similar approximation for the plasma and field variables permits analytic expressions for their space and time variations to be derived. It is shown that a combination of axial MHD compression and magnetic insulation of high-energy electrons emitted from the switch cathode can control the character of switch conduction. The analysis highlights the need to include additional phenomena for accurate fluid modeling of PEOS conduction

  8. Traditional and robust vector selection methods for use with similarity based models

    International Nuclear Information System (INIS)

    Hines, J. W.; Garvey, D. R.

    2006-01-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
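
As a concrete illustration of the ideas above, a minimal sketch of min-max vector selection followed by an AAKR-style correction, assuming a plain Gaussian kernel on Euclidean distance (bandwidth and data are illustrative, not from the paper):

```python
import math

def min_max_select(training):
    """Min-max vector selection: keep every training vector that contains
    the minimum or maximum observed value of at least one variable (a
    sketch of one of the five traditional methods described above)."""
    n_vars = len(training[0])
    keep = set()
    for j in range(n_vars):
        col = [v[j] for v in training]
        keep.add(col.index(min(col)))
        keep.add(col.index(max(col)))
    return [training[i] for i in sorted(keep)]

def aakr_predict(query, prototypes, bandwidth=1.0):
    """Auto-associative kernel regression: the corrected estimate of the
    query is a Gaussian-kernel weighted average of the prototype vectors."""
    weights = []
    for p in prototypes:
        d2 = sum((a - b) ** 2 for a, b in zip(query, p))
        weights.append(math.exp(-d2 / (2.0 * bandwidth ** 2)))
    total = sum(weights)
    return [sum(w * p[j] for w, p in zip(weights, prototypes)) / total
            for j in range(len(query))]

prototypes = min_max_select([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (0.5, 0.5)])
estimate = aakr_predict((1.0, 1.0), prototypes)
```

Here the min-max step keeps only the two extreme vectors, and the AAKR estimate of the query (1, 1) is their symmetric kernel average.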

  9. State impulsive control strategies for a two-languages competitive model with bilingualism and interlinguistic similarity

    Science.gov (United States)

    Nie, Lin-Fei; Teng, Zhi-Dong; Nieto, Juan J.; Jung, Il Hyo

    2015-07-01

    To help preserve endangered languages, we propose in this paper a novel two-languages competitive model with bilingualism and interlinguistic similarity, in which state-dependent impulsive control strategies are introduced. The novel control model includes two control threshold values, which differ from those in previous state-dependent impulsive differential equations. Using qualitative analysis methods, we show that the control model exhibits two stable positive order-1 periodic solutions under some general conditions. Moreover, numerical simulations clearly illustrate the main theoretical results and the feasibility of the state-dependent impulsive control strategies. The simulations also show that the state-dependent impulsive control strategy can be applied to other general two-languages competitive models and obtain the desired result. The results indicate that the fractions of the two competing languages can be kept within a reasonable level under almost any circumstances. A theoretical basis for finding new control measures to protect endangered languages is thus offered.

  10. Towards predictive resistance models for agrochemicals by combining chemical and protein similarity via proteochemometric modelling.

    Science.gov (United States)

    van Westen, Gerard J P; Bender, Andreas; Overington, John P

    2014-10-01

    Resistance to pesticides is an increasing problem in agriculture. Despite practices such as phased use and cycling of 'orthogonally resistant' agents, resistance remains a major risk to national and global food security. To combat this problem, there is a need for both new approaches for pesticide design, as well as for novel chemical entities themselves. As summarized in this opinion article, a technique termed 'proteochemometric modelling' (PCM), from the field of chemoinformatics, could aid in the quantification and prediction of resistance that acts via point mutations in the target proteins of an agent. The technique combines information from both the chemical and biological domain to generate bioactivity models across large numbers of ligands as well as protein targets. PCM has previously been validated in prospective, experimental work in the medicinal chemistry area, and it draws on the growing amount of bioactivity information available in the public domain. Here, two potential applications of proteochemometric modelling to agrochemical data are described, based on previously published examples from the medicinal chemistry literature.

  11. Hierarchical modeling of systems with similar components: A framework for adaptive monitoring and control

    International Nuclear Information System (INIS)

    Memarzadeh, Milad; Pozzi, Matteo; Kolter, J. Zico

    2016-01-01

    System management includes the selection of maintenance actions depending on the available observations: when a system is made up by components known to be similar, data collected on one is also relevant for the management of others. This is typically the case of wind farms, which are made up by similar turbines. Optimal management of wind farms is an important task due to high cost of turbines' operation and maintenance: in this context, we recently proposed a method for planning and learning at system-level, called PLUS, built upon the Partially Observable Markov Decision Process (POMDP) framework, which treats transition and emission probabilities as random variables, and is therefore suitable for including model uncertainty. PLUS models the components as independent or identical. In this paper, we extend that formulation, allowing for a weaker similarity among components. The proposed approach, called Multiple Uncertain POMDP (MU-POMDP), models the components as POMDPs, and assumes the corresponding parameters as dependent random variables. Through this framework, we can calibrate specific degradation and emission models for each component while, at the same time, process observations at system-level. We compare the performance of the proposed MU-POMDP with PLUS, and discuss its potential and computational complexity. - Highlights: • A computational framework is proposed for adaptive monitoring and control. • It adopts a scheme based on Markov Chain Monte Carlo for inference and learning. • Hierarchical Bayesian modeling is used to allow a system-level flow of information. • Results show potential of significant savings in management of wind farms.

  12. Assessing the similarity of mental models of operating room team members and implications for patient safety: a prospective, replicated study.

    Science.gov (United States)

    Nakarada-Kordic, Ivana; Weller, Jennifer M; Webster, Craig S; Cumin, David; Frampton, Christopher; Boyd, Matt; Merry, Alan F

    2016-08-31

    Patient safety depends on effective teamwork. The similarity of team members' mental models, or their shared understanding, regarding clinical tasks is likely to influence the effectiveness of teamwork. Mental models have not been measured in the complex, high-acuity environment of the operating room (OR), where professionals of different backgrounds must work together to achieve the best surgical outcome for each patient. Therefore, we aimed to explore the similarity of mental models of task sequence and of responsibility for tasks within multidisciplinary OR teams. We developed a computer-based card sorting tool (Momento) to capture information on mental models in 20 six-person surgical teams, each comprising three subteams (anaesthesia, surgery, and nursing), during two simulated laparotomies. Team members sorted 20 cards depicting key tasks according to when in the procedure each task should be performed, and which subteam was primarily responsible for each task. Within each OR team and subteam, we conducted pairwise comparisons of scores to arrive at a mean similarity score for each task. The mean similarity score for task sequence was 87 % (range 57-97 %). The mean score for responsibility for task was 70 % (range 38-100 %), but for half of the tasks it was only 51 % (range 38-69 %). Participants believed their own subteam was primarily responsible for approximately half the tasks in each procedure. We found differences in the mental models of some OR team members about responsibility for, and order of, certain tasks in an emergency laparotomy. Momento is a tool that could help elucidate and better align the mental models of OR team members about surgical procedures and thereby improve teamwork and outcomes for patients.
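
The pairwise comparison behind the mean similarity scores can be sketched simply. Assuming agreement between two team members on a task is all-or-nothing (the actual Momento scoring scheme may differ):

```python
from itertools import combinations

def mean_pairwise_agreement(assignments):
    """Mean similarity score for one task: the fraction of team-member
    pairs whose card placements agree. `assignments` lists each member's
    answer (e.g. which subteam is responsible) for the same task."""
    pairs = list(combinations(assignments, 2))
    agree = sum(1 for a, b in pairs if a == b)
    return agree / len(pairs)

# Six members assign responsibility for one task (illustrative data):
score = mean_pairwise_agreement(["anaesthesia"] * 4 + ["surgery"] * 2)
```

With four members answering "anaesthesia" and two "surgery", 7 of the 15 member pairs agree, giving a task score of about 47 %.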

  13. Scaling and interaction of self-similar modes in models of high Reynolds number wall turbulence.

    Science.gov (United States)

    Sharma, A S; Moarref, R; McKeon, B J

    2017-03-13

    Previous work has established the usefulness of the resolvent operator that maps the terms nonlinear in the turbulent fluctuations to the fluctuations themselves. Further work has described the self-similarity of the resolvent arising from that of the mean velocity profile. The orthogonal modes provided by the resolvent analysis describe the wall-normal coherence of the motions and inherit that self-similarity. In this contribution, we present the implications of this similarity for the nonlinear interaction between modes with different scales and wall-normal locations. By considering the nonlinear interactions between modes, it is shown that much of the turbulence scaling behaviour in the logarithmic region can be determined from a single arbitrarily chosen reference plane. Thus, the geometric scaling of the modes is impressed upon the nonlinear interaction between modes. Implications of these observations on the self-sustaining mechanisms of wall turbulence, modelling and simulation are outlined. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  14. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation

    Science.gov (United States)

    Quan, Lulin; Yang, Zhixin

    2010-05-01

    To address the issues in the area of design customization, this paper expressed the specification and application of the constrained surface deformation, and reported the experimental performance comparison of three prevail effective similarity assessment algorithms on constrained surface deformation domain. Constrained surface deformation becomes a promising method that supports for various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of new design via measuring the difference level between the deformed new design and the initial sample model, and indicating whether the difference level is within the limitation. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain, including shape histogram based method, skeleton based method, and U system moment based method. We analyze their basic functions and implementation methodologies in detail, and do a series of experiments on various situations to test their accuracy and efficiency using precision-recall diagram. Shoe model is chosen as an industrial example for the experiments. It shows that shape histogram based method gained an optimal performance in comparison. Based on the result, we proposed a novel approach that integrating surface constrains and shape histogram description with adaptive weighting method, which emphasize the role of constrains during the assessment. The limited initial experimental result demonstrated that our algorithm outperforms other three algorithms. A clear direction for future development is also drawn at the end of the paper.

  15. Improved modelling of independent parton hadronization

    International Nuclear Information System (INIS)

    Biddulph, P.; Thompson, G.

    1989-01-01

    A modification is proposed to current versions of the Field-Feynman ansatz for the hadronization of a quark in Monte Carlo models of QCD interactions. This faster-running algorithm introduces no additional parameters and imposes a better degree of energy conservation. It naturally introduces a limitation of the transverse momentum distribution, similar to the experimentally observed ''seagull'' effect. There is now much improved conservation of quantum numbers between the original parton and the resultant hadrons, and the momentum of the emitted parton is better preserved in the summed momentum vectors of the final state particles. (orig.)

  16. Self-similar measures in multi-sector endogenous growth models

    International Nuclear Information System (INIS)

    La Torre, Davide; Marsiglio, Simone; Mendivil, Franklin; Privileggi, Fabio

    2015-01-01

    We analyze two types of stochastic discrete time multi-sector endogenous growth models, namely a basic Uzawa–Lucas (1965, 1988) model and an extended three-sector version as in La Torre and Marsiglio (2010). As in the case of sustained growth the optimal dynamics of the state variables are not stationary, we focus on the dynamics of the capital ratio variables, and we show that, through appropriate log-transformations, they can be converted into affine iterated function systems converging to an invariant distribution supported on some (possibly fractal) compact set. This proves that also the steady state of endogenous growth models—i.e., the stochastic balanced growth path equilibrium—might have a fractal nature. We also provide some sufficient conditions under which the associated self-similar measures turn out to be either singular or absolutely continuous (for the three-sector model we only consider the singularity).
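
The affine iterated function system (IFS) mechanism can be illustrated with a chaos-game iteration. The two maps below are illustrative stand-ins for the log-transformed capital-ratio dynamics; with contraction factor 1/3 they produce an invariant measure supported on the middle-third Cantor set, a simple example of the fractal steady state described above:

```python
import random

def iterate_ifs(maps, probs, n_iter=10000, burn_in=100, seed=0):
    """Chaos-game iteration of an affine IFS x -> a*x + b, where one of
    the maps (a, b) is drawn at random each step. After a burn-in, the
    samples approximate the invariant (self-similar) distribution."""
    rng = random.Random(seed)
    x, samples = rng.random(), []
    for i in range(n_iter):
        a, b = rng.choices(maps, weights=probs)[0]
        x = a * x + b
        if i >= burn_in:
            samples.append(x)
    return samples

# Maps x/3 and x/3 + 2/3: invariant measure on the middle-third Cantor set
samples = iterate_ifs([(1 / 3, 0.0), (1 / 3, 2 / 3)], [0.5, 0.5])
```

All samples stay in [0, 1], yet (up to a contraction factor of 3^-100 after burn-in) none land in the removed middle third, a signature of the singular self-similar measure.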

  17. Self-similar formation of the Kolmogorov spectrum in the Leith model of turbulence

    International Nuclear Information System (INIS)

    Nazarenko, S V; Grebenev, V N

    2017-01-01

    The last stage of evolution toward the stationary Kolmogorov spectrum of hydrodynamic turbulence is studied using the Leith model [1]. This evolution is shown to manifest itself as a reflection wave in the wavenumber space propagating from the largest toward the smallest wavenumbers, and is described by a self-similar solution of a new (third) kind. This stage follows the previously studied stage of an initial explosive propagation of the spectral front from the smallest to the largest wavenumbers reaching arbitrarily large wavenumbers in a finite time, and which was described by a self-similar solution of the second kind [2–4]. Nonstationary solutions corresponding to ‘warm cascades’ characterised by a thermalised spectrum at large wavenumbers are also obtained. (paper)

  18. A solvable self-similar model of the sausage instability in a resistive Z pinch

    International Nuclear Information System (INIS)

    Lampe, M.

    1991-01-01

    A solvable model is developed for the linearized sausage mode within the context of resistive magnetohydrodynamics. The model is based on the assumption that the fluid motion of the plasma is self-similar, as well as several assumptions pertinent to the limit of wavelength long compared to the pinch radius. The perturbations to the magnetic field are not assumed to be self-similar, but rather are calculated. Effects arising from time dependences of the z-independent perturbed state, e.g., current rising as t^α, Ohmic heating, and time variation of the pinch radius, are included in the analysis. The formalism appears to provide a good representation of ''global'' modes that involve coherent sausage distortion of the entire cross section of the pinch, but excludes modes that are localized radially, and higher radial eigenmodes. For this and other reasons, it is expected that the model underestimates the maximum instability growth rates, but is reasonable for global sausage modes. The net effect of resistivity and time variation of the unperturbed state is to decrease the growth rate if α ≲ 1, but never by more than a factor of about 2. The effect is to increase the growth rate if α ≳ 1.

  19. A Deep Similarity Metric Learning Model for Matching Text Chunks to Spatial Entities

    Science.gov (United States)

    Ma, K.; Wu, L.; Tao, L.; Li, W.; Xie, Z.

    2017-12-01

    The matching of spatial entities with related text is a long-standing research topic that has received considerable attention over the years. This task aims to enrich the content of spatial entities and attach spatial location information to text chunks. In the data fusion field, matching spatial entities with their corresponding describing text chunks is of broad significance. However, most traditional matching methods rely fully on manually designed, task-specific linguistic features. This work proposes a Deep Similarity Metric Learning Model (DSMLM) based on a Siamese Neural Network to learn a similarity metric directly from the textual attributes of the spatial entity and the text chunk. The low-dimensional feature representations of the spatial entity and the text chunk are learned separately. By employing the cosine distance to measure the matching degree between the vectors, the model makes matching pair vectors as close as possible and mismatching pairs as far apart as possible through supervised learning. In addition, extensive experiments and analysis on geological survey data sets show that our DSMLM model can effectively capture the matching characteristics between text chunks and spatial entities, and achieves state-of-the-art performance.
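
A minimal sketch of the cosine-based Siamese objective described above, with the embedding networks omitted (`u` and `v` stand for the learned low-dimensional representations of the text chunk and spatial entity; the margin value is an assumption, not from the paper):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(u, v, is_match, margin=0.5):
    """Contrastive-style objective in the spirit of a Siamese network:
    matching pairs are pushed toward similarity 1, while mismatching
    pairs are penalised only while their similarity exceeds the margin."""
    s = cosine(u, v)
    return (1.0 - s) if is_match else max(0.0, s - margin)
```

A matched pair of identical embeddings incurs zero loss; an unmatched pair is penalised only when it is still too similar.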

  20. Initial virtual flight test for a dynamically similar aircraft model with control augmentation system

    Directory of Open Access Journals (Sweden)

    Linliang Guo

    2017-04-01

    Full Text Available To satisfy the validation requirements of flight control laws for advanced aircraft, wind tunnel based virtual flight testing has been implemented in a low speed wind tunnel. A 3-degree-of-freedom gimbal, ventrally installed in the model, was used in conjunction with an actively controlled, dynamically similar aircraft model equipped with an inertial measurement unit, an attitude and heading reference system, an embedded computer, and servo-actuators. The model, which could be rotated freely around its center of gravity by the aerodynamic moments, together with the flow field, the operator, and the real time control system made up the closed-loop testing circuit. The model is statically unstable in the longitudinal direction, but it can fly stably in the wind tunnel with the control augmentation function of the flight control laws. The experimental results indicate that the model responds well to the operator's instructions, and its response in the tests shows reasonable agreement with the simulation results; the difference in the angle-of-attack response is less than 0.5°. The effect of the stability augmentation and attitude control laws was validated in the test, and the feasibility of the virtual flight test technique as a preliminary evaluation tool for advanced flight vehicle configuration research was also verified.

  1. PubMed-supported clinical term weighting approach for improving inter-patient similarity measure in diagnosis prediction.

    Science.gov (United States)

    Chan, Lawrence Wc; Liu, Ying; Chan, Tao; Law, Helen Kw; Wong, S C Cesar; Yeung, Andy Ph; Lo, K F; Yeung, S W; Kwok, K Y; Chan, William Yl; Lau, Thomas Yh; Shyu, Chi-Ren

    2015-06-02

    Similarity-based retrieval of Electronic Health Records (EHRs) from large clinical information systems provides physicians with evidence to support making diagnoses or referring examinations for suspected cases. Clinical terms in EHRs represent high-level conceptual information, and a similarity measure established on these terms reflects the chance of inter-patient disease co-occurrence. The assumption that clinical terms are equally relevant to a disease is unrealistic and reduces prediction accuracy. Here we propose a term weighting approach supported by the PubMed search engine to address this issue. We collected and studied 112 abdominal computed tomography imaging examination reports from four hospitals in Hong Kong. Clinical terms, which are the image findings related to hepatocellular carcinoma (HCC), were extracted from the reports. Through two systematic PubMed search methods, generic and specific term weightings were established by estimating the conditional probabilities of clinical terms given HCC. Each report was characterized by an ontological feature vector, and there were in total 6216 vector pairs. We optimized the modified direction cosine (mDC) with respect to a regularization constant embedded into the feature vector. Equal, generic and specific term weighting approaches were applied to measure the similarity of each pair, and their performances for predicting inter-patient co-occurrence of HCC diagnoses were compared using Receiver Operating Characteristics (ROC) analysis. The areas under the curves (AUROCs) of the similarity scores based on the equal, generic and specific term weighting approaches were 0.735, 0.728 and 0.743, respectively. Our findings suggest that the optimized similarity measure with specific term weighting of EHRs can significantly improve the accuracy of predicting inter-patient co-occurrence of diagnoses when compared with the equal and generic term weighting approaches.
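
The specific term weighting idea can be sketched as a weighted cosine between two patients' ontological feature vectors. The regularization constant appended below is a rough stand-in for the paper's modified direction cosine (mDC), and the weights would come from PubMed-derived conditional probabilities (the values here are invented):

```python
import math

def weighted_cosine(x, y, weights, reg=0.0):
    """Term-weighted cosine similarity between two patients' binary
    ontological feature vectors. `weights` holds per-term weights (e.g.
    conditional probabilities of each finding given the disease); `reg`
    is a regularization constant appended to both weighted vectors."""
    xw = [w * a for w, a in zip(weights, x)] + [reg]
    yw = [w * b for w, b in zip(weights, y)] + [reg]
    dot = sum(a * b for a, b in zip(xw, yw))
    nx = math.sqrt(sum(a * a for a in xw))
    ny = math.sqrt(sum(b * b for b in yw))
    return dot / (nx * ny)
```

With weights [0.9, 0.1], two reports sharing the highly weighted finding score far higher than two sharing only the low-weighted one, which is the intended effect of specific term weighting.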

  2. Assessing intrinsic and specific vulnerability models ability to indicate groundwater vulnerability to groups of similar pesticides: A comparative study

    Science.gov (United States)

    Douglas, Steven; Dixon, Barnali; Griffin, Dale W.

    2018-01-01

    With continued population growth and increasing use of fresh groundwater resources, protection of this valuable resource is critical. A cost effective means to assess risk of groundwater contamination potential will provide a useful tool to protect these resources. Integrating geospatial methods offers a means to quantify the risk of contaminant potential in cost effective and spatially explicit ways. This research was designed to compare the ability of intrinsic (DRASTIC) and specific (Attenuation Factor; AF) vulnerability models to indicate groundwater vulnerability areas by comparing model results to the presence of pesticides from groundwater sample datasets. A logistic regression was used to assess the relationship between the environmental variables and the presence or absence of pesticides within regions of varying vulnerability. According to the DRASTIC model, more than 20% of the study area is very highly vulnerable. Approximately 30% is very highly vulnerable according to the AF model. When groundwater concentrations of individual pesticides were compared to model predictions, the results were mixed. Model predictability improved when concentrations of the group of similar pesticides were compared to model results. Compared to the DRASTIC model, the AF model more accurately predicts the distribution of the number of contaminated wells within each vulnerability class.
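
The logistic regression step relating vulnerability scores to pesticide presence or absence can be sketched with a one-variable fit (the scores and detections below are invented illustrative data, not from the study):

```python
import math

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """One-variable logistic regression fitted by gradient descent,
    mirroring the use of logistic regression to relate a vulnerability
    score to pesticide presence (1) or absence (0) in sampled wells."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y) / n
            g1 += (p - y) * x / n
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# Hypothetical scaled vulnerability scores vs. pesticide detection in wells
scores = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
detected = [0, 0, 0, 0, 1, 1, 1, 1]
b0, b1 = fit_logistic(scores, detected)

def prob(x):
    """Fitted probability of pesticide detection at vulnerability score x."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
```

A positive fitted slope means higher-vulnerability wells are predicted more likely to be contaminated, which is how model predictability against the sample datasets can be assessed.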

  3. Vertex labeling and routing in self-similar outerplanar unclustered graphs modeling complex networks

    International Nuclear Information System (INIS)

    Comellas, Francesc; Miralles, Alicia

    2009-01-01

    This paper introduces a labeling and optimal routing algorithm for a family of modular, self-similar, small-world graphs with clustering zero. Many properties of this family are comparable to those of networks associated with technological and biological systems with low clustering, such as the power grid, some electronic circuits and protein networks. For these systems, the existence of models with an efficient routing protocol is of interest to design practical communication algorithms in relation to dynamical processes (including synchronization) and also to understand the underlying mechanisms that have shaped their particular structure.

  4. Self-similarities of periodic structures for a discrete model of a two-gene system

    International Nuclear Information System (INIS)

    Souza, S.L.T. de; Lima, A.A.; Caldas, I.L.; Medrano-T, R.O.; Guimarães-Filho, Z.O.

    2012-01-01

    We report self-similar properties of periodic structures remarkably organized in the two-parameter space for a two-gene system, described by two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system by using Lyapunov exponents and isoperiodic diagrams identifying periodic windows, denominated Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both periodic windows. We also identify Fibonacci-type series and Golden ratio for Arnold tongues, and period multiple-of-three windows for shrimps. -- Highlights: ► The existence of noticeable periodic windows has been reported recently for several nonlinear systems. ► The periodic window distributions appear highly organized in two-parameter space. ► We characterize self-similar properties of Arnold tongues and shrimps for a two-gene model. ► We determine the period of the Arnold tongues recognizing a Fibonacci-type sequence. ► We explore self-similar features of the shrimps identifying multiple period-three structures.

  5. Self-similarities of periodic structures for a discrete model of a two-gene system

    Energy Technology Data Exchange (ETDEWEB)

    Souza, S.L.T. de, E-mail: thomaz@ufsj.edu.br [Departamento de Física e Matemática, Universidade Federal de São João del-Rei, Ouro Branco, MG (Brazil); Lima, A.A. [Escola de Farmácia, Universidade Federal de Ouro Preto, Ouro Preto, MG (Brazil); Caldas, I.L. [Instituto de Física, Universidade de São Paulo, São Paulo, SP (Brazil); Medrano-T, R.O. [Departamento de Ciências Exatas e da Terra, Universidade Federal de São Paulo, Diadema, SP (Brazil); Guimarães-Filho, Z.O. [Aix-Marseille Univ., CNRS PIIM UMR6633, International Institute for Fusion Science, Marseille (France)

    2012-03-12

    We report self-similar properties of periodic structures remarkably organized in the two-parameter space for a two-gene system, described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system by using Lyapunov exponents and isoperiodic diagrams, identifying periodic windows known as Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both types of periodic window. We also identify Fibonacci-type series and the Golden ratio for the Arnold tongues, and period multiple-of-three windows for the shrimps. -- Highlights: ► The existence of noticeable periodic windows has been reported recently for several nonlinear systems. ► The periodic window distributions appear highly organized in two-parameter space. ► We characterize self-similar properties of Arnold tongues and shrimps for a two-gene model. ► We determine the period of the Arnold tongues recognizing a Fibonacci-type sequence. ► We explore self-similar features of the shrimps identifying multiple period-three structures.
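
    The record above characterizes the map via Lyapunov exponents. The two-gene map itself is not reproduced in this record, but the standard Benettin-style procedure it relies on can be sketched for any two-dimensional map; the Hénon map below is a stand-in for the two-gene map, and all parameter values are illustrative:

```python
import math

def henon(x, y, a=1.4, b=0.3):
    """One iteration of the Henon map (a stand-in for the two-gene map)."""
    return 1.0 - a * x * x + y, b * x

def jacobian(x, y, a=1.4, b=0.3):
    """Jacobian of the Henon map at (x, y)."""
    return [[-2.0 * a * x, 1.0], [b, 0.0]]

def lyapunov_exponents(n=20000):
    """Estimate both Lyapunov exponents by evolving two tangent vectors
    and re-orthonormalising them (Gram-Schmidt) at every step."""
    x, y = 0.1, 0.1
    v1, v2 = [1.0, 0.0], [0.0, 1.0]
    s1 = s2 = 0.0
    for _ in range(n):
        J = jacobian(x, y)
        x, y = henon(x, y)
        v1 = [J[0][0] * v1[0] + J[0][1] * v1[1], J[1][0] * v1[0] + J[1][1] * v1[1]]
        v2 = [J[0][0] * v2[0] + J[0][1] * v2[1], J[1][0] * v2[0] + J[1][1] * v2[1]]
        # Gram-Schmidt: normalise v1, remove its component from v2, normalise v2
        n1 = math.hypot(*v1)
        v1 = [v1[0] / n1, v1[1] / n1]
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        v2 = [v2[0] - dot * v1[0], v2[1] - dot * v1[1]]
        n2 = math.hypot(*v2)
        v2 = [v2[0] / n2, v2[1] / n2]
        s1 += math.log(n1)
        s2 += math.log(n2)
    return s1 / n, s2 / n

l1, l2 = lyapunov_exponents()
```

    A useful sanity check: the sum of the two exponents must equal the average log-determinant of the Jacobian, which for these Hénon parameters is ln 0.3.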

  6. A Multi-Model Stereo Similarity Function Based on Monogenic Signal Analysis in Poisson Scale Space

    Directory of Open Access Journals (Sweden)

    Jinjun Li

    2011-01-01

    A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD) is proposed to match interest points and estimate the disparity map for stereo images. Local multi-model monogenic image features include the local orientation and instantaneous phase of the gray monogenic signal, the local color phase of the color monogenic signal, and local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which is the extension of the analytic signal to gray-level images using the Dirac operator and Laplace equation, consists of the local amplitude, local orientation, and instantaneous phase of a 2D image signal. The color monogenic signal is the extension of the monogenic signal to color images based on Clifford algebras. The local color phase can be estimated by computing the geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experimental results on synthetic and natural stereo images show the performance of the proposed approach.

  7. Individual differences in emotion processing: how similar are diffusion model parameters across tasks?

    Science.gov (United States)

    Mueller, Christina J; White, Corey N; Kuchinke, Lars

    2017-11-27

    The goal of this study was to replicate findings of diffusion model parameters capturing emotion effects in a lexical decision task and to investigate whether these findings extend to other tasks of implicit emotion processing. Additionally, we were interested in the stability of diffusion model parameters across emotional stimuli and tasks for individual subjects. Responses to words in a lexical decision task were compared with responses to faces in a gender categorization task for stimuli of the emotion categories: happy, neutral and fear. Main effects of emotion as well as the stability of emerging response style patterns, as evident in diffusion model parameters across these tasks, were analyzed. Based on earlier findings, drift rates were assumed to be more similar in response to stimuli of the same emotion category than to stimuli of a different emotion category. Results showed that the emotion effects of the tasks differed, with a processing advantage for happy followed by neutral and fear-related words in the lexical decision task and a processing advantage for neutral followed by happy and fearful faces in the gender categorization task. Both emotion effects were captured in the estimated drift rate parameters and, in the case of the lexical decision task, also in the non-decision time parameters. A principal component analysis showed that, contrary to our hypothesis, drift rates were more similar within a specific task context than within a specific emotion category. Individual response patterns of subjects across tasks were evident in significant correlations regarding diffusion model parameters including response styles, non-decision times and information accumulation.
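
    For readers unfamiliar with the diffusion model referenced above, a minimal single-trial simulation shows how drift rate, boundary separation, starting point and non-decision time jointly produce a response and a reaction time. This is a generic sketch, not the fitting procedure used in the study, and all parameter values are arbitrary:

```python
import random

def simulate_ddm(v, a=1.0, z=0.5, ter=0.3, dt=0.001, sigma=1.0, rng=random):
    """Simulate one trial of a basic drift-diffusion model.
    v: drift rate, a: boundary separation, z: relative starting point,
    ter: non-decision time.  Returns (response, reaction_time)."""
    x = z * a               # evidence starts between the two boundaries
    t = 0.0
    sd = sigma * dt ** 0.5  # per-step noise standard deviation
    while 0.0 < x < a:
        x += v * dt + rng.gauss(0.0, sd)
        t += dt
    return (1 if x >= a else 0), ter + t

random.seed(2)
trials = [simulate_ddm(v=2.0) for _ in range(500)]
accuracy = sum(resp for resp, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
```

    With a positive drift rate, most trials terminate at the upper boundary, and every reaction time exceeds the non-decision time.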

  8. Feasibility of similarity coefficient map for improving morphological evaluation of T2* weighted MRI for renal cancer

    International Nuclear Information System (INIS)

    Wang Hao-Yu; Bao Shang-Lian; Jiani Hu; Meng Li; Haacke, E. M.; Xie Yao-Qin; Chen Jie; Amy Yu; Wei Xin-Hua; Dai Yong-Ming

    2013-01-01

    The purpose of this paper is to investigate the feasibility of using a similarity coefficient map (SCM) to improve the morphological evaluation of T2*-weighted (T2*W) magnetic resonance imaging (MRI) for renal cancer. Simulation studies and in vivo 12-echo T2*W experiments for renal cancers were performed for this purpose. The results of the first simulation study suggest that an SCM can reveal small structures which are hard to distinguish from the background tissue in T2*W images and the corresponding T2* map. The capability of improving the morphological evaluation is likely due to the improvement in the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR) achieved by the SCM technique. Compared with T2*W images, an SCM can improve the SNR by a factor ranging from 1.87 to 2.47. Compared with T2* maps, an SCM can improve the SNR by a factor ranging from 3.85 to 33.31. Compared with T2*W images, an SCM can improve the CNR by a factor ranging from 2.09 to 2.43. Compared with T2* maps, an SCM can improve the CNR by a factor ranging from 1.94 to 8.14. For a given noise level, the improvements in SNR and CNR depend mainly on the original SNRs and CNRs in the T2*W images. In vivo experiments confirmed the results of the first simulation study. The results of the second simulation study suggest that the more echoes are used to generate the SCM, the higher the SNRs and CNRs that can be achieved. In conclusion, an SCM can provide improved morphological evaluation of T2*W MR images for renal cancer by unveiling fine structures which are ambiguous or invisible in the corresponding T2*W MR images and T2* maps. Furthermore, in practical applications, for a fixed total sampling time, one should increase the number of echoes as much as possible to achieve SCMs with better SNRs and CNRs.
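
    A similarity coefficient map assigns each pixel the correlation between its multi-echo signal and a reference signal. The sketch below assumes this common formulation (Pearson correlation across echo times) with toy mono-exponential decay curves; the echo times and decay constants are invented for illustration, not taken from the paper:

```python
import math

def similarity_coefficient(pixel_series, reference_series):
    """Pearson correlation between a pixel's multi-echo signal and a
    reference region's signal, sampled at the same echo times."""
    n = len(pixel_series)
    mp = sum(pixel_series) / n
    mr = sum(reference_series) / n
    cov = sum((p - mp) * (r - mr) for p, r in zip(pixel_series, reference_series))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pixel_series))
    sr = math.sqrt(sum((r - mr) ** 2 for r in reference_series))
    return cov / (sp * sr)

# Toy decay curves sampled at 12 echo times (arbitrary units):
echoes = [i * 5.0 for i in range(1, 13)]
reference = [math.exp(-t / 40.0) for t in echoes]   # reference tissue decay
lesion = [math.exp(-t / 15.0) for t in echoes]      # faster T2* decay
scm_like = similarity_coefficient(lesion, reference)
scm_same = similarity_coefficient([2.0 * r for r in reference], reference)
```

    Because the correlation is invariant to amplitude scaling, tissue with the same decay constant as the reference scores 1.0 regardless of signal level, while tissue with a different decay constant scores lower.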

  9. Analysis and Modeling of Time-Correlated Characteristics of Rainfall-Runoff Similarity in the Upstream Red River Basin

    Directory of Open Access Journals (Sweden)

    Xiuli Sang

    2012-01-01

    We constructed a similarity model (based on the Euclidean distance between rainfall and runoff) to study time-correlated characteristics of rainfall-runoff similar patterns in the upstream Red River Basin and presented a detailed evaluation of the time correlation of rainfall-runoff similarity. The rainfall-runoff similarity was used to determine the optimum similarity. The results showed that the time-correlated model was capable of predicting the rainfall-runoff similarity in the upstream Red River Basin in a satisfactory way. Both noised and denoised time series, the latter obtained by thresholding the wavelet coefficients, were applied to verify the accuracy of the model, and the corresponding optimum similar sets obtained as the equation solution conditions showed an interesting and stable trend. On the whole, the annual mean similarity presented a gradually rising trend, which quantitatively estimates the comprehensive influence of climate change and of human activities on rainfall-runoff similarity.
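
    A Euclidean-distance similarity of the kind the model is built on can be sketched in a few lines. The series below are hypothetical, and the distance-to-similarity mapping 1/(1+d) is one common convention, not necessarily the paper's:

```python
import math

def euclidean_similarity(a, b):
    """Similarity between two time series of equal length: the Euclidean
    distance mapped to (0, 1] via 1 / (1 + d), so identical series score 1."""
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + d)

rainfall = [0.0, 12.5, 30.1, 8.4, 1.2]
runoff_near = [0.1, 11.9, 29.5, 9.0, 1.0]   # hypothetical similar pattern
runoff_far = [5.0, 0.2, 2.8, 40.0, 17.3]    # hypothetical dissimilar pattern
s_near = euclidean_similarity(rainfall, runoff_near)
s_far = euclidean_similarity(rainfall, runoff_far)
```

    In practice the series would be normalised before comparison so that magnitude differences between rainfall and runoff do not dominate the distance.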

  10. Uncovering highly obfuscated plagiarism cases using fuzzy semantic-based similarity model

    Directory of Open Access Journals (Sweden)

    Salha M. Alzahrani

    2015-07-01

    Highly obfuscated plagiarism cases contain unseen and obfuscated texts, which pose difficulties for existing plagiarism detection methods. A fuzzy semantic-based similarity model for uncovering obfuscated plagiarism is presented and compared with five state-of-the-art baselines. Semantic relatedness between words is studied based on part-of-speech (POS) tags and WordNet-based similarity measures. Fuzzy-based rules are introduced to assess the semantic distance between source and suspicious texts of short lengths, implementing the semantic relatedness between words as a membership function of a fuzzy set. In order to minimize the number of false positives and false negatives, a learning method that combines a permission threshold and a variation threshold is used to decide true plagiarism cases. The proposed model and the baselines are evaluated on 99,033 ground-truth annotated cases extracted from different datasets, including 11,621 (11.7%) handmade paraphrases, 54,815 (55.4%) artificial plagiarism cases, and 32,578 (32.9%) plagiarism-free cases. We conduct extensive experimental verifications, including a study of the effects of different segmentation schemes and parameter settings. Results are assessed using precision, recall, F-measure and granularity on stratified 10-fold cross-validation data. Statistical analysis using paired t-tests shows that the proposed approach is statistically significant in comparison with the baselines, which demonstrates the competence of the fuzzy semantic-based model in detecting plagiarism cases beyond literal plagiarism. Additionally, an analysis of variance (ANOVA) test shows the effectiveness of the different segmentation schemes used with the proposed approach.
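
    The core fuzzy idea, treating word relatedness as a membership function and thresholding the aggregate, can be sketched as follows. The relatedness table and the permission threshold value here are invented for illustration; the paper derives them from WordNet-based measures and from learning, respectively:

```python
# Toy word-relatedness scores standing in for WordNet-based measures
# (all values are illustrative, not computed from WordNet itself).
RELATEDNESS = {
    ("car", "automobile"): 0.95, ("car", "vehicle"): 0.80,
    ("car", "banana"): 0.05, ("fast", "quick"): 0.90,
}

def word_similarity(w1, w2):
    """Symmetric lookup of word relatedness; identical words score 1."""
    if w1 == w2:
        return 1.0
    return RELATEDNESS.get((w1, w2), RELATEDNESS.get((w2, w1), 0.0))

def fuzzy_sentence_similarity(source, suspicious):
    """Membership of the suspicious segment in the fuzzy set of the source
    segment: the average, over suspicious words, of the best relatedness
    to any source word."""
    scores = [max(word_similarity(s, w) for s in source) for w in suspicious]
    return sum(scores) / len(scores)

def is_plagiarised(source, suspicious, permission_threshold=0.65):
    """Decide plagiarism by thresholding the fuzzy membership value."""
    return fuzzy_sentence_similarity(source, suspicious) >= permission_threshold

src = ["the", "car", "is", "fast"]
susp = ["the", "automobile", "is", "quick"]
```

    On this toy pair, every suspicious word has a highly related source word, so the paraphrase "the automobile is quick" is flagged even though it shares only two literal words with the source.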

  11. An analytic solution of a model of language competition with bilingualism and interlinguistic similarity

    Science.gov (United States)

    Otero-Espinar, M. V.; Seoane, L. F.; Nieto, J. J.; Mira, J.

    2013-12-01

    An in-depth analytic study of a model of language dynamics is presented: a model which tackles the problem of the coexistence of two languages within a closed community of speakers, taking into account bilingualism and incorporating a parameter to measure the distance between languages. Previous numerical simulations of the model showed that, depending on the parameters, coexistence might lead to survival of both languages, with monolingual speakers of each alongside a bilingual community, or to extinction of the weaker tongue. In this paper, that study is completed with thorough analytical calculations that settle the results in a robust way, and earlier results are refined with some modifications. From the present analysis it is possible to characterize almost completely the number and nature of the equilibrium points of the model, which depend on its parameters, and to build a phase space based on them. We also obtain conclusions on the way the languages evolve with time. Our rigorous considerations further suggest ways to improve the model and facilitate the comparison of its consequences with those from other approaches or with real data.
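
    The bilingual model analyzed above extends the two-state Abrams-Strogatz language dynamics with a bilingual compartment and an interlinguistic-similarity parameter. As a minimal runnable reference point, the original two-state dynamics (which the paper generalizes) can be integrated with a simple Euler scheme; the parameter values below are illustrative:

```python
def abrams_strogatz(x0=0.4, s=0.6, a=1.31, dt=0.01, steps=200000):
    """Euler integration of dx/dt = (1-x)*s*x**a - x*(1-s)*(1-x)**a,
    the two-state Abrams-Strogatz dynamics that the bilingual model
    generalises.  x is the fraction of speakers of language A, s its
    status, and a the volatility exponent."""
    x = x0
    for _ in range(steps):
        dx = (1 - x) * s * x ** a - x * (1 - s) * (1 - x) ** a
        x += dt * dx
    return x

x_final = abrams_strogatz()
```

    With status s > 0.5 and an initial fraction above the unstable interior equilibrium, the higher-status language takes over; it is this extinction outcome that bilingualism and interlinguistic similarity can soften in the full model.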

  12. The synaptonemal complex of basal metazoan hydra: more similarities to vertebrate than invertebrate meiosis model organisms.

    Science.gov (United States)

    Fraune, Johanna; Wiesner, Miriam; Benavente, Ricardo

    2014-03-20

    The synaptonemal complex (SC) is an evolutionarily well-conserved structure that mediates chromosome synapsis during prophase of the first meiotic division. Although its structure is conserved, the characterized protein components in the current metazoan meiosis model systems (Drosophila melanogaster, Caenorhabditis elegans, and Mus musculus) show no sequence homology, challenging the question of a single evolutionary origin of the SC. However, our recent studies revealed the monophyletic origin of the mammalian SC protein components, many of which are ancient in Metazoa and already present in the cnidarian Hydra. Remarkably, a comparison between different model systems disclosed a great similarity between the SC components of Hydra and mammals, while the proteins of the ecdysozoan systems (D. melanogaster and C. elegans) differ significantly. In this review, we introduce the basal-branching metazoan species Hydra as a potential novel invertebrate model system for meiosis research, particularly for the investigation of SC evolution, function and assembly. Available methods for SC research in Hydra are also summarized. Copyright © 2014. Published by Elsevier Ltd.

  13. Stereotype content model across cultures: Towards universal similarities and some differences

    Science.gov (United States)

    Cuddy, Amy J. C.; Fiske, Susan T.; Kwan, Virginia S. Y.; Glick, Peter; Demoulin, Stéphanie; Leyens, Jacques-Philippe; Bond, Michael Harris; Croizet, Jean-Claude; Ellemers, Naomi; Sleebos, Ed; Htun, Tin Tin; Kim, Hyun-Jeong; Maio, Greg; Perry, Judi; Petkova, Kristina; Todorov, Valery; Rodríguez-Bailón, Rosa; Morales, Elena; Moya, Miguel; Palacios, Marisol; Smith, Vanessa; Perez, Rolando; Vala, Jorge; Ziegler, Rene

    2014-01-01

    The stereotype content model (SCM) proposes potentially universal principles of societal stereotypes and their relation to social structure. Here, the SCM reveals theoretically grounded, cross-cultural, cross-group similarities and one difference across 10 non-US nations. Seven European (individualist) and three East Asian (collectivist) nations (N = 1,028) support three hypothesized cross-cultural similarities: (a) perceived warmth and competence reliably differentiate societal group stereotypes; (b) many out-groups receive ambivalent stereotypes (high on one dimension, low on the other); and (c) high-status groups stereotypically are competent, whereas competitive groups stereotypically lack warmth. The data uncover one consequential cross-cultural difference: (d) the more collectivist cultures do not locate reference groups (in-groups and societal prototype groups) in the most positive cluster (high-competence/high-warmth), unlike individualist cultures. This demonstrates out-group derogation without obvious reference-group favouritism. The SCM can serve as a pancultural tool for predicting group stereotypes from structural relations with other groups in society, and for comparing across societies. PMID:19178758

  14. Similar Biophysical Abnormalities in Glomeruli and Podocytes from Two Distinct Models.

    Science.gov (United States)

    Embry, Addie E; Liu, Zhenan; Henderson, Joel M; Byfield, F Jefferson; Liu, Liping; Yoon, Joonho; Wu, Zhenzhen; Cruz, Katrina; Moradi, Sara; Gillombardo, C Barton; Hussain, Rihanna Z; Doelger, Richard; Stuve, Olaf; Chang, Audrey N; Janmey, Paul A; Bruggeman, Leslie A; Miller, R Tyler

    2018-03-23

    Background FSGS is a pattern of podocyte injury that leads to loss of glomerular function. Podocytes support other podocytes and glomerular capillary structure, oppose hemodynamic forces, form the slit diaphragm, and have mechanical properties that permit these functions. However, the biophysical characteristics of glomeruli and podocytes in disease remain unclear. Methods Using microindentation, atomic force microscopy, immunofluorescence microscopy, quantitative RT-PCR, and a three-dimensional collagen gel contraction assay, we studied the biophysical and structural properties of glomeruli and podocytes in chronic (Tg26 mice [HIV protein expression]) and acute (protamine administration [cytoskeletal rearrangement]) models of podocyte injury. Results Compared with wild-type glomeruli, Tg26 glomeruli became progressively more deformable with disease progression, despite increased collagen content. Tg26 podocytes had disordered cytoskeletons, markedly abnormal focal adhesions, and weaker adhesion; they failed to respond to mechanical signals and exerted minimal traction force in three-dimensional collagen gels. Protamine treatment had similar but milder effects on glomeruli and podocytes. Conclusions Reduced structural integrity of Tg26 podocytes causes increased deformability of glomerular capillaries and limits the ability of capillaries to counter hemodynamic force, possibly leading to further podocyte injury. Loss of normal podocyte mechanical integrity could injure neighboring podocytes due to the absence of normal biophysical signals required for podocyte maintenance. The severe defects in podocyte mechanical behavior in the Tg26 model may explain why Tg26 glomeruli soften progressively, despite increased collagen deposition, and may be the basis for the rapid course of glomerular diseases associated with severe podocyte injury. In milder injury (protamine), similar processes occur but over a longer time. 
Copyright © 2018 by the American Society of Nephrology.

  15. Improving performance of content-based image retrieval schemes in searching for similar breast mass regions: an assessment

    International Nuclear Information System (INIS)

    Wang Xiaohui; Park, Sang Cheol; Zheng Bin

    2009-01-01

    This study aims to assess three methods commonly used in content-based image retrieval (CBIR) schemes and to investigate approaches to improve scheme performance. A reference database involving 3000 regions of interest (ROIs) was established. Among them, 400 ROIs were randomly selected to form a testing dataset. Three methods, namely mutual information, Pearson's correlation and a multi-feature-based k-nearest neighbor (KNN) algorithm, were applied to search for the 15 most similar reference ROIs to each testing ROI. The clinical relevance and visual similarity of the search results were evaluated using the areas under receiver operating characteristic (ROC) curves (A_Z) and the average mean square difference (MSD) of the mass boundary spiculation level ratings between testing and selected ROIs, respectively. The results showed that the A_Z values were 0.893 ± 0.009, 0.606 ± 0.021 and 0.699 ± 0.026 for the use of KNN, mutual information and Pearson's correlation, respectively. The A_Z values increased to 0.724 ± 0.017 and 0.787 ± 0.016 for mutual information and Pearson's correlation when using ROIs with the size adaptively adjusted based on actual mass size. The corresponding MSD values were 2.107 ± 0.718, 2.301 ± 0.733 and 2.298 ± 0.743. The study demonstrates that, due to the diversity of medical images, CBIR schemes using multiple image features and mass-size-based ROIs can achieve significantly improved performance.
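
    The multi-feature KNN retrieval the study favours reduces, at its core, to ranking reference ROIs by distance in a feature space. A sketch with hypothetical three-dimensional feature vectors (the feature names and values are invented for illustration):

```python
import math

def knn_retrieve(query, references, k=15):
    """Return the k reference ROIs closest to the query in feature space
    (Euclidean distance over feature vectors)."""
    ranked = sorted(references, key=lambda r: math.dist(query, r["features"]))
    return ranked[:k]

# Hypothetical ROIs, each with 3 features (e.g. size, contrast, spiculation):
refs = [{"id": i, "features": (i * 0.1, 1.0 - i * 0.05, (i % 5) * 0.2)}
        for i in range(100)]
query = (0.45, 0.75, 0.4)
nearest = knn_retrieve(query, refs, k=15)
```

    In a real CBIR scheme the features would be normalised and possibly weighted, and the retrieved neighbours' known diagnoses would be combined into a likelihood score for the query ROI.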

  16. More Similar than Different? Exploring Cultural Models of Depression among Latino Immigrants in Florida

    Directory of Open Access Journals (Sweden)

    Dinorah (Dina) Martinez Tyson

    2011-01-01

    The Surgeon General's report, “Culture, Race, and Ethnicity: A Supplement to Mental Health,” points to the need for subgroup-specific mental health research that explores the cultural variation and heterogeneity of the Latino population. Guided by cognitive anthropological theories of culture, we utilized ethnographic interviewing techniques to explore cultural models of depression among foreign-born Mexican (n=30), Cuban (n=30), Colombian (n=30), and island-born Puerto Rican (n=30) participants, who represent the largest Latino groups in Florida. Results indicate that Colombian, Cuban, Mexican, and Puerto Rican immigrants showed strong intragroup consensus in their models of depression causality, symptoms, and treatment. We found more agreement than disagreement among all four groups regarding core descriptions of depression, which was largely unexpected but can potentially be explained by their common immigrant experiences. Findings expand our understanding of Latino subgroup similarities and differences in their conceptualization of depression and can be used to inform the adaptation of culturally relevant interventions in order to better serve Latino immigrant communities.

  17. Levy flights and self-similar exploratory behaviour of termite workers: beyond model fitting.

    Directory of Open Access Journals (Sweden)

    Octavio Miramontes

    Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power laws. Here we show, by analyzing over half a million movement displacements, that isolated termite workers actually exhibit a range of very interesting dynamical properties, including Lévy flights, in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale.
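
    Testing displacement data for power-law (Lévy-like) tails is commonly done with a maximum-likelihood tail-exponent estimate rather than curve fitting. The sketch below applies the Hill estimator to synthetic Pareto-distributed step lengths, not the termite data; the exponent value 1.5 is chosen only because it lies in the Lévy range:

```python
import math
import random

def pareto_sample(alpha, xmin=1.0, rng=random):
    """Draw one displacement from a power-law (Pareto) distribution with
    p(x) ~ x**-(alpha+1) for x >= xmin, via inverse-transform sampling."""
    return xmin * (1.0 - rng.random()) ** (-1.0 / alpha)

def hill_estimator(data, xmin=1.0):
    """Maximum-likelihood (Hill) estimate of the tail exponent alpha."""
    logs = [math.log(x / xmin) for x in data if x >= xmin]
    return len(logs) / sum(logs)

random.seed(7)
steps = [pareto_sample(alpha=1.5) for _ in range(50000)]
alpha_hat = hill_estimator(steps)
```

    The estimator recovers the generating exponent closely at this sample size; on empirical data the sensitive step is choosing xmin, below which the power law no longer holds.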

  18. Propriedades termofísicas de soluções-modelo similares a sucos: parte II Thermophysical properties of model solutions similar to juice: part II

    Directory of Open Access Journals (Sweden)

    Sílvia Cristina Sobottka Rolim de Moura

    2005-09-01

    Thermophysical properties, density and viscosity of model solutions similar to juices were determined experimentally. The results were compared with those predicted by mathematical models (STATISTICA 6.0) and with values from the literature, as functions of chemical composition. To define the model solutions, a star experimental design was used, fixing the acid content at 1.5% and varying water (82-98.5%), carbohydrate (0-15%) and fat (0-1.5%). Density was determined with a pycnometer, and viscosity with a Brookfield model LVF viscometer. Thermal conductivity was calculated from the thermal diffusivity and specific heat (presented in Part I of this work, MOURA [7]) and the density. The results for each property were analyzed using response surfaces. Significant results were found for the properties, showing that the fitted models represent the changes in the thermal and physical properties of the juices with changes in composition and temperature.

  19. Propriedades termofísicas de soluções modelo similares a sucos - Parte I Thermophysical properties of model solutions similar to juice - Part I

    Directory of Open Access Journals (Sweden)

    Silvia Cristina Sobottka Rolim de Moura

    2003-04-01

    Thermophysical properties, thermal diffusivity and specific heat of model solutions similar to juices were determined experimentally, and the values obtained were compared with those predicted by mathematical models (STATISTICA 6.0) and with values from the literature, as functions of chemical composition. A star experimental design was adopted to define the composition of the model solutions, fixing the acid content at 1.5% and varying water (82-98.5%), carbohydrate (0-15%) and fat (0-1.5%). Specific heat was determined by the method of Hwang & Hayakawa and thermal diffusivity by the method of Dickerson. The results for each property were analyzed using response surfaces. The results were significant, indicating that the models represent well the changes in the thermal properties of the juices with variations in composition and temperature.
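
    The conductivity calculation described in Part II of this work follows from the relation k = α·ρ·c_p applied to the Part I properties. A sketch with illustrative values of the right order of magnitude for dilute juices (the numbers are assumed here, not taken from the paper):

```python
def thermal_conductivity(diffusivity, density, specific_heat):
    """k = alpha * rho * cp: thermal conductivity (W/(m.K)) from thermal
    diffusivity (m^2/s), density (kg/m^3) and specific heat (J/(kg.K))."""
    return diffusivity * density * specific_heat

# Illustrative property values for a dilute, water-rich solution:
alpha = 1.4e-7   # thermal diffusivity, m^2/s
rho = 1040.0     # density, kg/m^3
cp = 3900.0      # specific heat, J/(kg.K)
k = thermal_conductivity(alpha, rho, cp)
```

    For these inputs k comes out a little below the conductivity of pure water (about 0.6 W/(m.K)), as expected for a solution dominated by water.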

  20. Using self-similarity compensation for improving inter-layer prediction in scalable 3D holoscopic video coding

    Science.gov (United States)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2013-09-01

    Holoscopic imaging, also known as integral imaging, has been recently attracting the attention of the research community, as a promising glassless 3D technology due to its ability to create a more realistic depth illusion than the current stereoscopic or multiview solutions. However, in order to gradually introduce this technology into the consumer market and to efficiently deliver 3D holoscopic content to end-users, backward compatibility with legacy displays is essential. Consequently, to enable 3D holoscopic content to be delivered and presented on legacy displays, a display scalable 3D holoscopic coding approach is required. Hence, this paper presents a display scalable architecture for 3D holoscopic video coding with a three-layer approach, where each layer represents a different level of display scalability: Layer 0 - a single 2D view; Layer 1 - 3D stereo or multiview; and Layer 2 - the full 3D holoscopic content. In this context, a prediction method is proposed, which combines inter-layer prediction, aiming to exploit the existing redundancy between the multiview and the 3D holoscopic layers, with self-similarity compensated prediction (previously proposed by the authors for non-scalable 3D holoscopic video coding), aiming to exploit the spatial redundancy inherent to the 3D holoscopic enhancement layer. Experimental results show that the proposed combined prediction can significantly improve the rate-distortion performance of scalable 3D holoscopic video coding with respect to the authors' previously proposed solutions, where only inter-layer or only self-similarity prediction is used.
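
    Self-similarity compensated prediction exploits the repeating micro-image structure of holoscopic pictures: a block is predicted from the best-matching block elsewhere in the same picture. The toy sum-of-absolute-differences search below illustrates the idea only; a real codec restricts the search to the causally available (already decoded) area, which this sketch omits:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def block(img, r, c, n):
    """Extract the n x n block whose top-left corner is (r, c)."""
    return [row[c:c + n] for row in img[r:r + n]]

def self_similarity_search(img, r0, c0, n=2):
    """Find the block elsewhere in the same image that best predicts the
    block at (r0, c0).  Returns (cost, row, col) of the best match."""
    target = block(img, r0, c0, n)
    best = None
    for r in range(len(img) - n + 1):
        for c in range(len(img[0]) - n + 1):
            if (r, c) == (r0, c0):
                continue
            cost = sad(target, block(img, r, c, n))
            if best is None or cost < best[0]:
                best = (cost, r, c)
    return best

# Tiny image with the same 2x2 pattern at (0,0) and (2,2):
img = [[9, 9, 0, 0],
       [9, 9, 0, 0],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
match = self_similarity_search(img, 0, 0)
```

    Here the block at (0,0) is perfectly predicted (zero residual) by the identical block at (2,2), which is exactly the redundancy the enhancement layer exploits.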

  1. An application of superpositions of two-state Markovian sources to the modelling of self-similar behaviour

    DEFF Research Database (Denmark)

    Andersen, Allan T.; Nielsen, Bo Friis

    1997-01-01

    We present a modelling framework and a fitting method for modelling second-order self-similar behaviour with the Markovian arrival process (MAP). The fitting method is based on fitting to the autocorrelation function of counts of a second-order self-similar process. It is shown that with this fitting...

  2. Vortex forcing model for turbulent flow over spanwise-heterogeneous topographies: scaling arguments and similarity solution

    Science.gov (United States)

    Anderson, William; Yang, Jianzhi

    2017-11-01

    Spanwise surface heterogeneity beneath high-Reynolds number, fully-rough wall turbulence is known to induce mean secondary flows in the form of counter-rotating streamwise vortices. The secondary flows are a manifestation of Prandtl's secondary flow of the second kind, driven and sustained by spatial heterogeneity of components of the turbulent (Reynolds averaged) stress tensor. The spacing between adjacent surface heterogeneities serves as a control on the spatial extent of the counter-rotating cells, while their intensity is controlled by the spanwise gradient in imposed drag (where larger gradients associated with more dramatic transitions in roughness induce stronger cells). In this work, we have performed an order of magnitude analysis of the mean (Reynolds averaged) streamwise vorticity transport equation, revealing the scaling dependence of circulation upon spanwise spacing. The scaling arguments are supported by simulation data. Then, we demonstrate that mean streamwise velocity can be predicted a priori via a similarity solution to the mean streamwise vorticity transport equation. A vortex forcing term was used to represent the effects of spanwise topographic heterogeneity within the flow. Efficacy of the vortex forcing term was established with large-eddy simulation cases, wherein vortex forcing model parameters were altered to capture different values of spanwise spacing.

  3. Pulmonary parenchyma segmentation in thin CT image sequences with spectral clustering and geodesic active contour model based on similarity

    Science.gov (United States)

    He, Nana; Zhang, Xiaolong; Zhao, Juanjuan; Zhao, Huilan; Qiang, Yan

    2017-07-01

    While the popular thin-layer scanning technology of spiral CT has helped to improve diagnoses of lung diseases, the large volumes of scanning images produced by the technology also dramatically increase the load on physicians in lesion detection. Computer-aided diagnosis techniques like lesion segmentation in thin CT sequences have been developed to address this issue, but it remains a challenge to achieve high segmentation efficiency and accuracy without much involvement of human manual intervention. In this paper, we present our research on automated segmentation of lung parenchyma with an improved geodesic active contour model, the geodesic active contour model based on similarity (GACBS). Combining the spectral clustering algorithm based on Nystrom (SCN) with GACBS, this algorithm first extracts key image slices, then uses these slices to generate initial contours of the pulmonary parenchyma of un-segmented slices with an interpolation algorithm, and finally segments the lung parenchyma of the un-segmented slices. Experimental results show that the segmentation results generated by our method are close to what manual segmentation can produce, with an average volume overlap ratio of 91.48%.
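
    The reported accuracy metric, volume overlap ratio, can be computed as intersection over union of the automatic and manual masks. This Jaccard-style definition is one common convention; the paper may use a variant, and the masks below are synthetic:

```python
def volume_overlap_ratio(mask_a, mask_b):
    """Volume overlap (intersection over union) between two binary
    segmentation masks given as sets of voxel coordinates."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union

# Two synthetic 10x10x10 masks shifted by one voxel along x:
auto = {(x, y, z) for x in range(10) for y in range(10) for z in range(10)}
manual = {(x, y, z) for x in range(1, 11) for y in range(10) for z in range(10)}
ratio = volume_overlap_ratio(auto, manual)
```

    A one-voxel shift of a 10-voxel-wide cube already drops the overlap to 900/1100, which is why high reported ratios such as 91.48% indicate close agreement with manual contours.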

  4. Improved Trailing Edge Noise Model

    DEFF Research Database (Denmark)

    Bertagnolio, Franck

    2012-01-01

    The modeling of the surface pressure spectrum under a turbulent boundary layer is investigated in the presence of an adverse pressure gradient along the flow direction. It is shown that discrepancies between measurements and results from a well-known model increase as the pressure gradient increases...

  5. Improved model for statistical alignment

    Energy Technology Data Exchange (ETDEWEB)

    Miklos, I.; Toroczkai, Z. (Zoltan)

    2001-01-01

    The statistical approach to molecular sequence evolution involves the stochastic modeling of the substitution, insertion and deletion processes. Substitution has been modeled in a reliable way for more than three decades by using finite Markov processes. Insertion and deletion, however, seem to be more difficult to model, and the recent approaches cannot acceptably deal with multiple insertions and deletions. A new method based on a generating function approach is introduced to describe the multiple insertion process. The presented algorithm computes the approximate joint probability of two sequences in O(l^3) running time, where l is the geometric mean of the sequence lengths.

  6. Olympic weightlifting and plyometric training with children provides similar or greater performance improvements than traditional resistance training.

    Science.gov (United States)

    Chaouachi, Anis; Hammami, Raouf; Kaabi, Sofiene; Chamari, Karim; Drinkwater, Eric J; Behm, David G

    2014-06-01

    A number of organizations recommend that advanced resistance training (RT) techniques can be implemented with children. The objective of this study was to evaluate the effectiveness of Olympic-style weightlifting (OWL), plyometrics, and traditional RT programs with children. Sixty-three children (10-12 years) were randomly allocated to a 12-week control, OWL, plyometric, or traditional RT program. Pre- and post-training tests included body mass index (BMI), sum of skinfolds, countermovement jump (CMJ), horizontal jump, balance, 5- and 20-m sprint times, and isokinetic force and power at 60 and 300°·s⁻¹. Magnitude-based inferences were used to analyze the likelihood of an effect having a standardized (Cohen's) effect size exceeding 0.20. All interventions were generally superior to the control group. Olympic weightlifting was >80% likely to provide substantially better improvements than plyometric training for CMJ, horizontal jump, and 5- and 20-m sprint times, and >75% likely to substantially exceed traditional RT for balance and isokinetic power at 300°·s⁻¹. Plyometric training was >78% likely to elicit substantially better training adaptations than traditional RT for balance, isokinetic force at 60 and 300°·s⁻¹, isokinetic power at 300°·s⁻¹, and 5- and 20-m sprints. Traditional RT exceeded plyometric training only for BMI and isokinetic power at 60°·s⁻¹. Hence, OWL and plyometrics can provide similar or greater performance adaptations for children. It is recommended that any of the 3 training modalities be implemented under professional supervision with proper training progressions to enhance training adaptations in children.

  7. Model-based software process improvement

    Science.gov (United States)

    Zettervall, Brenda T.

    1994-01-01

    The activities of a field test site for the Software Engineering Institute's software process definition project are discussed. Products tested included the improvement model itself, descriptive modeling techniques, the CMM level 2 framework document, and the use of process definition guidelines and templates. The software process improvement model represents a five stage cyclic approach for organizational process improvement. The cycles consist of the initiating, diagnosing, establishing, acting, and leveraging phases.

  8. A Model for Comparative Analysis of the Similarity between Android and iOS Operating Systems

    Directory of Open Access Journals (Sweden)

    Lixandroiu R.

    2014-12-01

    Due to the recent expansion of mobile devices, in this article we analyze two of the most widely used mobile operating systems (OSs). The analysis is based on calculating Jaccard's similarity coefficient. To support the analysis, we developed a hierarchy of factors for evaluating OSs. The analysis has shown that the two OSs are similar in terms of functionality, but there are a number of factors that, when weighted, make a difference.
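    A Jaccard-coefficient comparison of the kind this record describes can be sketched in a few lines; the feature sets below are hypothetical placeholders for illustration, not the weighted factor hierarchy the authors developed:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity coefficient: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 1.0  # two empty sets are conventionally identical
    return len(a & b) / len(a | b)

# Hypothetical OS feature sets, for illustration only.
android = {"multitasking", "widgets", "nfc", "app_store", "voice_assistant"}
ios = {"multitasking", "app_store", "voice_assistant", "facetime"}

print(round(jaccard(android, ios), 3))  # 3 shared / 6 total = 0.5
```

    A weighted variant, as the abstract hints, would scale each feature's contribution to the intersection and union by its importance before taking the ratio.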

  9. Ethnic differences in the effects of media on body image: the effects of priming with ethnically different or similar models.

    Science.gov (United States)

    Bruns, Gina L; Carter, Michele M

    2015-04-01

    Media exposure has been positively correlated with body dissatisfaction. While body image concerns are common, being African American has been found to be a protective factor in the development of body dissatisfaction. Participants viewed ten advertisements showing one of: 1) ethnically-similar thin models; 2) ethnically-different thin models; 3) ethnically-similar plus-sized models; or 4) ethnically-diverse plus-sized models. Following exposure, body image was measured. African American women had less body dissatisfaction than Caucasian women. Ethnically-similar thin-model conditions did not elicit greater body dissatisfaction scores than ethnically-different thin or plus-sized models, nor did the ethnicity of the model affect ratings of body dissatisfaction for women of either race. There were no differences among the African American women exposed to plus-sized versus thin models. Among Caucasian women, exposure to plus-sized models resulted in greater body dissatisfaction than exposure to thin models. Results support the existing literature that African American women experience less body dissatisfaction than Caucasian women, even following exposure to an ethnically-similar thin model. Additionally, women exposed to plus-sized model conditions experienced greater body dissatisfaction than those shown thin models.

  10. THERMOPHYSICAL PROPERTIES OF MODEL SOLUTIONS SIMILAR TO CREAM

    Directory of Open Access Journals (Sweden)

    Silvia Cristina Sobottka Rolim de MOURA

    2001-08-01

    The demand for UHT cream has increased significantly. Several companies have diversified and increased their production, since increasingly demanding consumers want creams with a wide range of fat contents. The objective of the present work was to determine the density, apparent viscosity and thermal diffusivity of model solutions similar to cream, over the temperature range from 30 to 70°C, studying the influence of fat content and temperature on the physical properties of the products. The statistical design was a 3x5 factorial plan, with fat content fixed at 15%, 25% and 35% and temperature at 30°C, 40°C, 50°C, 60°C and 70°C (STATISTICA 6.0). Carbohydrate and protein contents were held constant, both at 3%. Density was determined by the fluid displacement method in a pycnometer; thermal diffusivity was based on the Dickerson method; and apparent viscosity was determined in a Rheotest 2.1 rheometer. The results for each property were analyzed by the response surface method. For these properties, the data obtained showed significant results, indicating that the model reliably represented the variation of these properties with fat content (%) and temperature (°C).

  11. The positive group affect spiral : a dynamic model of the emergence of positive affective similarity in work groups

    NARCIS (Netherlands)

    Walter, F.; Bruch, H.

    This conceptual paper seeks to clarify the process of the emergence of positive collective affect. Specifically, it develops a dynamic model of the emergence of positive affective similarity in work groups. It is suggested that positive group affective similarity and within-group relationship

  12. The role of visual similarity and memory in body model distortions.

    Science.gov (United States)

    Saulton, Aurelie; Longo, Matthew R; Wong, Hong Yu; Bülthoff, Heinrich H; de la Rosa, Stephan

    2016-02-01

    Several studies have shown that the perception of one's own hand size is distorted in proprioceptive localization tasks. It has been suggested that those distortions mirror somatosensory anisotropies. Recent research suggests that non-corporeal items also show some spatial distortions. In order to investigate the psychological processes underlying the localization task, we investigated the influences of visual similarity and memory on the distortions observed for corporeal and non-corporeal items. In experiment 1, participants indicated the location of landmarks on their own hand, a rubber hand (rated as most similar to the real hand), and a rake (rated as least similar to the real hand). Results show no significant differences between rake and rubber hand distortions, but both items were significantly less distorted than the hand. Experiments 2 and 3 explored the role of memory in spatial distance judgments of the hand, the rake and the rubber hand. Spatial representations of the items measured in experiments 2 and 3 were also distorted but tended to be smaller than in the localization tasks. While memory and visual similarity seem to help explain the qualitative similarities in distortions between the hand and non-corporeal items, those factors cannot explain the larger magnitude observed in hand distortions.

  13. Improve SSME power balance model

    Science.gov (United States)

    Karr, Gerald R.

    1992-01-01

    Effort was dedicated to development and testing of a formal strategy for reconciling uncertain test data with physically limited computational prediction. Specific weaknesses in the logical structure of the current Power Balance Model (PBM) version are described with emphasis given to the main routing subroutines BAL and DATRED. Selected results from a variational analysis of PBM predictions are compared to Technology Test Bed (TTB) variational study results to assess PBM predictive capability. The motivation for systematic integration of uncertain test data with computational predictions based on limited physical models is provided. The theoretical foundation for the reconciliation strategy developed in this effort is presented, and results of a reconciliation analysis of the Space Shuttle Main Engine (SSME) high pressure fuel side turbopump subsystem are examined.

  14. Improving Agent Based Modeling of Critical Incidents

    Directory of Open Access Journals (Sweden)

    Robert Till

    2010-04-01

    Agent Based Modeling (ABM) is a powerful method that has been used to simulate potential critical incidents in infrastructure and built environments. This paper discusses the modeling of some critical incidents currently simulated using ABM and how these simulations may be expanded and improved by using better physiological modeling, psychological modeling, modeling the actions of interveners, and by introducing Geographic Information Systems (GIS) and open source models.

  15. Molecular Quantum Similarity Measures from Fermi hole Densities: Modeling Hammett Sigma Constants

    Czech Academy of Sciences Publication Activity Database

    Girónes, X.; Ponec, Robert

    2006-01-01

    Roč. 46, č. 3 (2006), s. 1388-1393 ISSN 1549-9596 Grant - others:SMCT(ES) SAF2000/0223/C03/01 Institutional research plan: CEZ:AV0Z40720504 Keywords: molecular quantum similarity measures * Fermi hole densities * substituent effect Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.423, year: 2006

  16. Improving Earth/Prediction Models to Improve Network Processing

    Science.gov (United States)

    Wagner, G. S.

    2017-12-01

    The United States Atomic Energy Detection System (USAEDS) primary seismic network consists of a relatively small number of arrays and three-component stations. The relatively small number of stations in the USAEDS primary network makes it both necessary and feasible to optimize both station and network processing. Station processing improvements include detector tuning efforts that use Receiver Operator Characteristic (ROC) curves to help judiciously set acceptable Type 1 (false) vs. Type 2 (miss) error rates. Other station processing improvements include the use of empirical/historical observations and continuous background noise measurements to compute time-varying, maximum likelihood probability of detection thresholds. The USAEDS network processing software makes extensive use of the azimuth and slowness information provided by frequency-wavenumber analysis at array sites, and polarization analysis at three-component sites. Most of the improvements in USAEDS network processing are due to improvements in the models used to predict azimuth, slowness, and probability of detection. Kriged travel-time, azimuth, and slowness corrections (and associated uncertainties) are computed using a ground truth database. Improvements in station processing and the use of improved models for azimuth, slowness, and probability of detection have led to significant improvements in USAEDS network processing.

  17. Improved TOPSIS decision model for NPP emergencies

    International Nuclear Information System (INIS)

    Zhang Jin; Liu Feng; Huang Lian

    2011-01-01

    In this paper, an improved decision model is developed for use as a tool to respond to emergencies at nuclear power plants. Given the complexity of multi-attribute emergency decision-making on nuclear accidents, the improved TOPSIS method is used to build a decision-making model that integrates the subjective weight and objective weight of each evaluation index. A comparison between the results of this new model and two traditional methods, the fuzzy hierarchy analysis method and the weighted analysis method, demonstrates that the improved TOPSIS model has a better evaluation effect. (authors)
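    The core TOPSIS ranking step underlying such a model can be sketched as follows. This is a minimal standard-TOPSIS sketch, not the authors' improved variant: the decision matrix, criteria, and weights are illustrative placeholders, and the paper's subjective/objective weight integration is reduced here to a single fixed weight vector.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.

    matrix:  (alternatives x criteria) decision matrix
    weights: criterion weights, summing to 1
    benefit: True for benefit criteria, False for cost criteria
    """
    m = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    return d_worst / (d_best + d_worst)  # closeness coefficient in [0, 1]

# Three hypothetical response options scored on two benefit criteria
# (effectiveness, feasibility) and one cost criterion (public dose).
scores = topsis([[7, 6, 3],
                 [9, 4, 5],
                 [5, 8, 2]],
                weights=[0.4, 0.3, 0.3],
                benefit=[True, True, False])
print(scores.argmax())  # index of the preferred option
```

    The alternative with the largest closeness coefficient is preferred; the improved model in the record would replace the fixed `weights` with index weights combining subjective and objective information.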

  18. Consequences of team charter quality: Teamwork mental model similarity and team viability in engineering design student teams

    Science.gov (United States)

    Conway Hughston, Veronica

    Since 1996 ABET has mandated that undergraduate engineering degree granting institutions focus on learning outcomes such as professional skills (i.e. solving unstructured problems and working in teams). As a result, engineering curricula were restructured to include team based learning---including team charters. Team charters were diffused into engineering education as one of many instructional activities to meet the ABET accreditation mandates. However, the implementation and execution of team charters into engineering team based classes has been inconsistent and accepted without empirical evidence of the consequences. The purpose of the current study was to investigate team effectiveness, operationalized as team viability, as an outcome of team charter implementation in an undergraduate engineering team based design course. Two research questions were the focus of the study: a) What is the relationship between team charter quality and viability in engineering student teams, and b) What is the relationship among team charter quality, teamwork mental model similarity, and viability in engineering student teams? Thirty-eight intact teams, 23 treatment and 15 comparison, participated in the investigation. Treatment teams attended a team charter lecture, and completed a team charter homework assignment. Each team charter was assessed and assigned a quality score. Comparison teams did not join the lecture, and were not asked to create a team charter. All teams completed each data collection phase: a) similarity rating pretest; b) similarity posttest; and c) team viability survey. Findings indicate that team viability was higher in teams that attended the lecture and completed the charter assignment. Teams with higher quality team charter scores reported higher levels of team viability than teams with lower quality charter scores. 
Lastly, no evidence was found to support teamwork mental model similarity as a partial mediator of the effect of team charter quality on team viability.

  19. Boson mapping of the shell model algebra obtained from a seniority-dictated similarity transformation

    International Nuclear Information System (INIS)

    Geyer, H.B.

    1986-01-01

    The qualitative ideas put forward by Geyer and Lee are given quantitative content by constructing a similarity transformation which reexpresses the Dyson boson images of the single-j shell fermion operators in terms of seniority bosons. It is shown that the results of Otsuka, Arima, and Iachello, or generalizations thereof which include g bosons or even bosons with J>4, can be obtained in an economic and transparent way without resorting to any comparison of matrix elements.

  20. Modelling the perceptual similarity of facial expressions from image statistics and neural responses.

    Science.gov (United States)

    Sormaz, Mladen; Watson, David M; Smith, William A P; Young, Andrew W; Andrews, Timothy J

    2016-04-01

    The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions.

  1. From epidemics to information propagation : Striking differences in structurally similar adaptive network models

    NARCIS (Netherlands)

    Trajanovski, S.; Guo, D.; Van Mieghem, P.F.A.

    2015-01-01

    The continuous-time adaptive susceptible-infected-susceptible (ASIS) epidemic model and the adaptive information diffusion (AID) model are two adaptive spreading processes on networks, in which a link in the network changes depending on the infectious state of its end nodes, but in opposite ways:

  2. Deep Convolutional Neural Networks Outperform Feature-Based But Not Categorical Models in Explaining Object Similarity Judgments

    Science.gov (United States)

    Jozwik, Kamila M.; Kriegeskorte, Nikolaus; Storrs, Katherine R.; Mur, Marieke

    2017-01-01

    Recent advances in Deep convolutional Neural Networks (DNNs) have enabled unprecedentedly accurate computational models of brain representations, and present an exciting opportunity to model diverse cognitive functions. State-of-the-art DNNs achieve human-level performance on object categorisation, but it is unclear how well they capture human behavior on complex cognitive tasks. Recent reports suggest that DNNs can explain significant variance in one such task, judging object similarity. Here, we extend these findings by replicating them for a rich set of object images, comparing performance across layers within two DNNs of different depths, and examining how the DNNs’ performance compares to that of non-computational “conceptual” models. Human observers performed similarity judgments for a set of 92 images of real-world objects. Representations of the same images were obtained in each of the layers of two DNNs of different depths (8-layer AlexNet and 16-layer VGG-16). To create conceptual models, other human observers generated visual-feature labels (e.g., “eye”) and category labels (e.g., “animal”) for the same image set. Feature labels were divided into parts, colors, textures and contours, while category labels were divided into subordinate, basic, and superordinate categories. We fitted models derived from the features, categories, and from each layer of each DNN to the similarity judgments, using representational similarity analysis to evaluate model performance. In both DNNs, similarity within the last layer explains most of the explainable variance in human similarity judgments. The last layer outperforms almost all feature-based models. Late and mid-level layers outperform some but not all feature-based models. Importantly, categorical models predict similarity judgments significantly better than any DNN layer. Our results provide further evidence for commonalities between DNNs and brain representations. Models derived from visual features
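    The representational similarity analysis used to evaluate the models in this record can be sketched as follows. The random arrays stand in for real data (a DNN layer's activations for the 92 images, and human dissimilarity judgments); only the analysis pipeline is illustrated:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Stand-ins for real data: one feature vector per image from a model
# (e.g. a DNN layer), and human pairwise dissimilarity judgments.
model_features = rng.normal(size=(92, 256))
human_rdm = pdist(rng.normal(size=(92, 10)), metric="euclidean")

# 1. Build the model's representational dissimilarity matrix (RDM):
#    one pairwise distance per image pair (condensed vector form).
model_rdm = pdist(model_features, metric="correlation")

# 2. Model performance = rank correlation between model and human RDMs.
rho, _ = spearmanr(model_rdm, human_rdm)
print(round(rho, 3))
```

    Comparing `rho` across DNN layers and across feature- or category-based model RDMs is what allows the ranking of models reported in the abstract.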

  3. Study for the design method of multi-agent diagnostic system to improve diagnostic performance for similar abnormality

    International Nuclear Information System (INIS)

    Minowa, Hirotsugu; Gofuku, Akio

    2014-01-01

    Accidents at industrial plants cause large human, economic, and social-credibility losses. Recently, diagnostic methods using machine learning techniques such as support vector machines have been expected to detect the occurrence of abnormality in a plant early and correctly. It has been reported that such diagnostic machines achieve high accuracy in diagnosing the operating state of an industrial plant when a single abnormality occurs. However, each diagnostic machine in a multi-agent diagnostic system may misdiagnose similar abnormalities as the same abnormality as the number of abnormalities to diagnose increases. As a result, a single diagnostic machine may show higher diagnostic performance than a multi-agent diagnostic system, because decision-making that takes misdiagnosis into account is difficult. Therefore, we study a design method for multi-agent diagnostic systems that diagnose similar abnormalities correctly. The method aims to automatically generate a diagnostic system in which the generation process and the location of diagnostic machines are optimized to correctly diagnose similar abnormalities, which are identified from the similarity of process signals by statistical methods. This paper explains our design method and reports the results of applying it to process data from the fast-breeder reactor Monju.

  4. Analyzing and leveraging self-similarity for variable resolution atmospheric models

    Science.gov (United States)

    O'Brien, Travis; Collins, William

    2015-04-01

    Variable resolution modeling techniques are rapidly becoming a popular strategy for achieving high resolution in a global atmospheric model without the computational cost of global high resolution. However, recent studies have demonstrated a variety of resolution-dependent, and seemingly artificial, features. We argue that the scaling properties of the atmosphere are key to understanding how the statistics of an atmospheric model should change with resolution. We provide two such examples. In the first example we show that the scaling properties of the cloud number distribution define how the ratio of resolved to unresolved clouds should increase with resolution. We show that the loss of resolved clouds in the high resolution region of variable resolution simulations with the Community Atmosphere Model version 4 (CAM4) is an artifact of the model's treatment of condensed water (this artifact is significantly reduced in CAM5). In the second example we show that the scaling properties of the horizontal velocity field, combined with the incompressibility assumption, necessarily result in an intensification of vertical mass flux as resolution increases. We show that such an increase is present in a wide variety of models, including CAM and the regional climate models of the ENSEMBLES intercomparison. We present theoretical arguments linking this increase to the intensification of precipitation with increasing resolution.

  5. Similarities between the Hubbard and Periodic Anderson Models at Finite Temperatures

    International Nuclear Information System (INIS)

    Held, K.; Huscroft, C.; Scalettar, R. T.; McMahan, A. K.

    2000-01-01

    The single band Hubbard and the two band periodic Anderson Hamiltonians have traditionally been applied to rather different physical problems: the Mott transition and itinerant magnetism, and Kondo singlet formation and scattering off localized magnetic states, respectively. In this paper, we compare the magnetic and charge correlations, and spectral functions, of the two systems. We show quantitatively that they exhibit remarkably similar behavior, including a nearly identical topology of the finite temperature phase diagrams at half filling. We address potential implications of this for theories of the rare earth ''volume collapse'' transition.

  6. Modeling the angular motion dynamics of spacecraft with a magnetic attitude control system based on experimental studies and dynamic similarity

    Science.gov (United States)

    Kulkov, V. M.; Medvedskii, A. L.; Terentyev, V. V.; Firsyuk, S. O.; Shemyakov, A. O.

    2017-12-01

    The problem of spacecraft attitude control using electromagnetic systems interacting with the Earth's magnetic field is considered. A set of dimensionless parameters has been formed to investigate the spacecraft orientation regimes based on dynamically similar models. The results of experimental studies of small spacecraft with a magnetic attitude control system can be extrapolated to in-orbit spacecraft motion control regimes by using the methods of dimensional analysis and similarity theory.

  7. A general model for metabolic scaling in self-similar asymmetric networks.

    Directory of Open Access Journals (Sweden)

    Alexander Byers Brummer

    2017-03-01

    How a particular attribute of an organism changes or scales with its body size is known as an allometry. Biological allometries, such as metabolic scaling, have been hypothesized to result from selection to maximize how vascular networks fill space yet minimize internal transport distances and resistances. The West, Brown, Enquist (WBE) model argues that these two principles (space-filling and energy minimization) are (i) general principles underlying the evolution of the diversity of biological networks across plants and animals and (ii) can be used to predict how the resulting geometry of biological networks governs their allometric scaling. Perhaps the most central biological allometry is how metabolic rate scales with body size. A core assumption of the WBE model is that networks are symmetric with respect to their geometric properties. That is, any two given branches within the same generation in the network are assumed to have identical lengths and radii. However, biological networks are rarely if ever symmetric. An open question is: does incorporating asymmetric branching change or influence the predictions of the WBE model? We derive a general network model that relaxes the symmetry assumption and define two classes of asymmetrically bifurcating networks. We show that asymmetric branching can be incorporated into the WBE model. This asymmetric version of the WBE model results in several theoretical predictions for the structure, physiology, and metabolism of organisms, specifically for the cardiovascular system. We show how network asymmetry can now be incorporated in the many allometric scaling relationships via total network volume. Most importantly, we show that the 3/4 metabolic scaling exponent from Kleiber's Law can still be attained within many asymmetric networks.
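    For context, the symmetric WBE result that this record generalizes can be stated compactly. With branching ratio n, radius scale factor β = n^(-1/2) (area preservation) and length scale factor γ = n^(-1/3) (space filling), the standard derivation gives Kleiber's exponent:

```latex
B \propto M^{a}, \qquad
a \;=\; -\frac{\ln n}{\ln\!\left(\gamma\beta^{2}\right)}
  \;=\; -\frac{\ln n}{\ln\!\left(n^{-1/3}\, n^{-1}\right)}
  \;=\; \frac{3}{4},
```

    where B is metabolic rate and M is body mass. The asymmetric model described in the abstract shows that this 3/4 exponent survives when the per-generation symmetry of β and γ is relaxed.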

  8. Models for Evaluating and Improving Architecture Competence

    National Research Council Canada - National Science Library

    Bass, Len; Clements, Paul; Kazman, Rick; Klein, Mark

    2008-01-01

    ... producing high-quality architectures. This report lays out the basic concepts of software architecture competence and describes four models for explaining, measuring, and improving the architecture competence of an individual...

  9. What Models of Verbal Working Memory Can Learn from Phonological Theory: Decomposing the Phonological Similarity Effect

    Science.gov (United States)

    Schweppe, Judith; Grice, Martine; Rummer, Ralf

    2011-01-01

    Despite developments in phonology over the last few decades, models of verbal working memory make reference to phoneme-sized phonological units, rather than to the features of which they are composed. This study investigates the influence on short-term retention of such features by comparing the serial recall of lists of syllables with varying…

  10. Merging tree algorithm of growing voids in self-similar and CDM models

    NARCIS (Netherlands)

    Russell, Esra

    2013-01-01

    Observational studies show that voids are prominent features of the large-scale structure of the present-day Universe. Even though their emerging from the primordial density perturbations and evolutionary patterns differ from dark matter haloes, N-body simulations and theoretical models have shown

  11. Similar uptake profiles of microcystin-LR and -RR in an in vitro human intestinal model

    International Nuclear Information System (INIS)

    Zeller, P.; Clement, M.; Fessard, V.

    2011-01-01

    Highlights: → First description of in vitro cellular uptake of MCs into intestinal cells. → OATP 3A1 and OATP 4A1 are expressed in Caco-2 cell membranes. → MC-LR and MC-RR show similar uptake in Caco-2 cells. → MCs are probably excreted from Caco-2 cells by an active mechanism. -- Abstract: Microcystins (MCs) are cyclic hepatotoxins produced by various species of cyanobacteria. Their structure includes two variable amino acids (AA), leading to more than 80 MC variants. In this study, we focused on the most common variant, microcystin-LR (MC-LR), and microcystin-RR (MC-RR), a variant differing by only one AA. Despite their structural similarity, MC-LR elicits higher liver toxicity than MC-RR, partly due to a discrepancy in their uptake by hepatic organic anion transporters (OATP 1B1 and 1B3). However, even though ingestion is the major pathway of human exposure to MCs, the intestinal absorption of MCs has been poorly addressed. Consequently, we investigated the cellular uptake of the two MC variants in the human intestinal cell line Caco-2 by immunolocalization using an anti-MC antibody. Caco-2 cells were treated for 30 min to 24 h with several concentrations (1-50 μM) of both variants. We first confirmed the localization of OATP 3A1 and 4A1 at the cell membrane of Caco-2 cells. Our study also revealed a rapid uptake of both variants in less than 1 h. The uptake profiles of the two variants did not differ in our immunostaining study with respect to either concentration or time of exposure. Furthermore, we have demonstrated for the first time the nuclear localization of MC-RR and confirmed that of MC-LR. Finally, our results suggest a facilitated uptake and an active excretion of MC-LR and MC-RR in Caco-2 cells. Further investigation of the role of OATP 3A1 and 4A1 in MC uptake should be useful to clarify the mechanism of intestinal absorption of MCs and contribute to the risk assessment of cyanotoxin exposure.

  12. Model improvements to simulate charging in SEM

    Science.gov (United States)

    Arat, K. T.; Klimpel, T.; Hagen, C. W.

    2018-03-01

    Charging of insulators is a complex phenomenon to simulate since the accuracy of the simulations is very sensitive to the interaction of electrons with matter and electric fields. In this study, we report model improvements for a previously developed Monte-Carlo simulator to more accurately simulate samples that charge. The improvements include both modelling of low energy electron scattering and charging of insulators. The new first-principle scattering models provide a more realistic charge distribution cloud in the material, and a better match between non-charging simulations and experimental results. Improvements on charging models mainly focus on redistribution of the charge carriers in the material with an induced conductivity (EBIC) and a breakdown model, leading to a smoother distribution of the charges. Combined with a more accurate tracing of low energy electrons in the electric field, we managed to reproduce the dynamically changing charging contrast due to an induced positive surface potential.

  13. Improved models of dense anharmonic lattices

    Energy Technology Data Exchange (ETDEWEB)

    Rosenau, P., E-mail: rosenau@post.tau.ac.il; Zilburg, A.

    2017-01-15

    We present two improved quasi-continuous models of dense, strictly anharmonic chains. The direct expansion, which includes the leading effect due to lattice dispersion, results in a Boussinesq-type PDE with a compacton as its basic solitary mode. Without increasing its complexity, we improve the model by including additional terms in the expanded interparticle potential, with the resulting compacton having a milder singularity at its edges. Particular care is applied to the Hertz potential due to its non-analyticity. Since, however, the PDEs of both the basic and the improved model are ill posed, they are unsuitable for a study of chain dynamics. Using the bond length as a state variable, we manipulate its dispersion and derive a well posed fourth order PDE. - Highlights: • An improved PDE model of a Newtonian lattice renders compacton solutions. • Compactons are classical solutions of the improved model and hence amenable to standard analysis. • An alternative well posed model enables the study of head-on interactions of lattice solitary waves. • Well posed modeling of the Hertz potential.

  14. Efficient Adoption and Assessment of Multiple Process Improvement Reference Models

    Directory of Open Access Journals (Sweden)

    Simona Jeners

    2013-06-01

    Full Text Available A variety of reference models such as CMMI, COBIT or ITIL support IT organizations in improving their processes. These process improvement reference models (IRMs) cover different domains such as IT development, IT Services or IT Governance but also share some similarities. As there are organizations that address multiple domains and need to coordinate their processes in their improvement, we present MoSaIC, an approach to support organizations in efficiently adopting and conforming to multiple IRMs. Our solution realizes a semantic integration of IRMs based on common meta-models. The resulting IRM integration model enables organizations to efficiently implement and assess multiple IRMs and to benefit from synergy effects.

  15. A Continuous Improvement Capital Funding Model.

    Science.gov (United States)

    Adams, Matt

    2001-01-01

    Describes a capital funding model that helps assess facility renewal needs in a way that minimizes resources while maximizing results. The article explains the sub-components of a continuous improvement capital funding model, including budgeting processes for finish renewal, building performance renewal, and critical outcome. (GR)

  16. Understanding catchment behaviour through model concept improvement

    NARCIS (Netherlands)

    Fenicia, F.

    2008-01-01

    This thesis describes an approach to model development based on the concept of iterative model improvement, which is a process where, by trial and error, different hypotheses of catchment behaviour are progressively tested, and the understanding of the system proceeds through a combined process of

  17. Improved ionic model of liquid uranium dioxide

    NARCIS (Netherlands)

    Gryaznov, [No Value; Iosilevski, [No Value; Yakub, E; Fortov, [No Value; Hyland, GJ; Ronchi, C

    The paper presents a model for liquid uranium dioxide, obtained by improving a simplified ionic model, previously adopted to describe the equation of state of this substance [1]. A "chemical picture" is used for liquid UO2 of stoichiometric and non-stoichiometric composition. Several ionic species

  18. More similar than you think: Frog metamorphosis as a model of human perinatal endocrinology.

    Science.gov (United States)

    Buchholz, Daniel R

    2015-12-15

    Hormonal control of development during the human perinatal period is critically important and complex, with multiple hormones regulating fetal growth, brain development, and organ maturation in preparation for birth. Genetic and environmental perturbations of such hormonal control may cause irreversible morphological and physiological impairments and may also predispose individuals to diseases of adulthood, including diabetes and cardiovascular disease. Endocrine and molecular mechanisms that regulate perinatal development and that underlie the connections between early life events and adult diseases are not well elucidated. Such mechanisms are difficult to study in uterus-enclosed mammalian embryos because of confounding maternal effects. To elucidate mechanisms of developmental endocrinology in the perinatal period, Xenopus laevis, the African clawed frog, is a valuable vertebrate model. Frogs and humans have identical hormones which peak at birth and metamorphosis, have conserved hormone receptors and mechanisms of gene regulation, and have comparable roles for hormones in many target organs. Study of molecular and endocrine mechanisms of hormone-dependent development in frogs is advantageous because an extended free-living larval period followed by metamorphosis (1) is independent of maternal endocrine influence, (2) exhibits dramatic yet conserved developmental effects induced by thyroid and glucocorticoid hormones, and (3) begins at a developmental stage with naturally undetectable hormone levels, thereby facilitating endocrine manipulation and interpretation of results. This review highlights the utility of frog metamorphosis to elucidate molecular and endocrine actions, hormone interactions, and endocrine disruption, especially with respect to thyroid hormone. Knowledge from the frog model is expected to provide fundamental insights to aid medical understanding of endocrine disease, stress, and endocrine disruption affecting the perinatal period in humans.

  19. A Unified Framework for Systematic Model Improvement

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2003-01-01

    A unified framework for improving the quality of continuous time models of dynamic systems based on experimental data is presented. The framework is based on an interplay between stochastic differential equation (SDE) modelling, statistical tests and multivariate nonparametric regression. This co......-batch bioreactor, where it is illustrated how an incorrectly modelled biomass growth rate can be pinpointed and an estimate provided of the functional relation needed to properly describe it....

  20. Can better modelling improve tokamak control?

    International Nuclear Information System (INIS)

    Lister, J.B.; Vyas, P.; Ward, D.J.; Albanese, R.; Ambrosino, G.; Ariola, M.; Villone, F.; Coutlis, A.; Limebeer, D.J.N.; Wainwright, J.P.

    1997-01-01

    The control of present day tokamaks usually relies upon primitive modelling and TCV is used to illustrate this. A counter example is provided by the successful implementation of high order SISO controllers on COMPASS-D. Suitable models of tokamaks are required to exploit the potential of modern control techniques. A physics based MIMO model of TCV is presented and validated with experimental closed loop responses. A system identified open loop model is also presented. An enhanced controller based on these models is designed and the performance improvements discussed. (author) 5 figs., 9 refs

  1. Automated pattern analysis in gesture research : similarity measuring in 3D motion capture models of communicative action

    NARCIS (Netherlands)

    Schueller, D.; Beecks, C.; Hassani, M.; Hinnell, J.; Brenger, B.; Seidl, T.; Mittelberg, I.

    2017-01-01

    The question of how to model similarity between gestures plays an important role in current studies in the domain of human communication. Most research into recurrent patterns in co-verbal gestures – manual communicative movements emerging spontaneously during conversation – is driven by qualitative

  2. SIMILARITIES BETWEEN THE KNOWLEDGE CREATION AND CONVERSION MODEL AND THE COMPETING VALUES FRAMEWORK: AN INTEGRATIVE APPROACH

    Directory of Open Access Journals (Sweden)

    PAULO COSTA

    2016-12-01

    Full Text Available ABSTRACT Contemporaneously, and with the successive paradigmatic revolutions inherent to management since the XVII century, we are witnessing a new era marked by a structural rupture in the way organizations are perceived. Market globalization, cemented by quick technological evolutions, associated with economic, cultural, political and social transformations, characterizes a reality where uncertainty is the only certainty for organizations and managers. Knowledge management has been interpreted by managers and academics as a viable alternative in a logic of creation and conversion of sustainable competitive advantages. However, there are several barriers to the implementation and development of knowledge management programs in organizations, with organizational culture being one of the most preponderant. In this sense, in this article we will analyze and compare the Knowledge Creation and Conversion Model proposed by Nonaka and Takeuchi (1995) and Quinn and Rohrbaugh's Competing Values Framework (1983), since both have convergent conceptual lines that can assist managers in different sectors to guide their organization in a perspective of productivity, quality and market competitiveness.

  3. Improving the physiological realism of experimental models.

    Science.gov (United States)

    Vinnakota, Kalyan C; Cha, Chae Y; Rorsman, Patrik; Balaban, Robert S; La Gerche, Andre; Wade-Martins, Richard; Beard, Daniel A; Jeneson, Jeroen A L

    2016-04-06

    The Virtual Physiological Human (VPH) project aims to develop integrative, explanatory and predictive computational models (C-Models) as numerical investigational tools to study disease, identify and design effective therapies and provide an in silico platform for drug screening. Ultimately, these models rely on the analysis and integration of experimental data. As such, the success of VPH depends on the availability of physiologically realistic experimental models (E-Models) of human organ function that can be parametrized to test the numerical models. Here, the current state of suitable E-models, ranging from in vitro non-human cell organelles to in vivo human organ systems, is discussed. Specifically, challenges and recent progress in improving the physiological realism of E-models that may benefit the VPH project are highlighted and discussed using examples from the field of research on cardiovascular disease, musculoskeletal disorders, diabetes and Parkinson's disease.

  4. Two Echelon Supply Chain Integrated Inventory Model for Similar Products: A Case Study

    Science.gov (United States)

    Parjane, Manoj Baburao; Dabade, Balaji Marutirao; Gulve, Milind Bhaskar

    2017-06-01

    The purpose of this paper is to develop a mathematical model towards minimization of total cost across echelons in a multi-product supply chain environment. The scenario under consideration is a two-echelon supply chain system with one manufacturer, one retailer and M products. The retailer faces independent Poisson demand for each product. The retailer and the manufacturer are closely coupled in the sense that the information about any depletion in the inventory of a product at the retailer's end is immediately available to the manufacturer. Further, stock-out is backordered at the retailer's end. Thus the costs incurred at the retailer's end are the holding costs and the backorder costs. The manufacturer has only one processor, which is time-shared among the M products. Production changeover from one product to another entails a fixed setup cost and a fixed setup time. Each unit of a product has a production time. Considering the cost components, and assuming transportation time and cost to be negligible, the objective of the study is to minimize the expected total cost considering both the manufacturer and the retailer. In the process, two aspects are to be defined. Firstly, every time a product is taken up for production, how much of it (production batch size, q) should be produced? A large value of q favors the manufacturer, while a small value of q suits the retailer. Secondly, for a given batch size q, at what level S of the retailer's inventory (the production queuing point) should a product be taken up for production by the manufacturer? A higher value of S incurs more holding cost, whereas a lower value of S increases the chance of backorder. A tradeoff between the holding and backorder costs must be taken into consideration while choosing an optimal value of S. It may be noted that due to multiple products and a single processor, a product taken up for production may not get the processor immediately, and may have to wait in a queue. The `S

  5. A touch-probe path generation method through similarity analysis between the feature vectors in new and old models

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Hye Sung; Lee, Jin Won; Yang, Jeong Sam [Dept. of Industrial Engineering, Ajou University, Suwon (Korea, Republic of)

    2016-10-15

    The On-machine measurement (OMM), which measures a work piece during or after the machining process in the machining center, has the advantage of measuring the work piece directly within the work space without moving it. However, the path generation procedure used to determine the measuring sequence and variables for the complex features of a target work piece has the limitation of requiring time-consuming tasks to generate the measuring points and mostly relies on the proficiency of the on-site engineer. In this study, we propose a touch-probe path generation method using similarity analysis between the feature vectors of three-dimensional (3-D) shapes for the OMM. For the similarity analysis between a new 3-D model and existing 3-D models, we extracted the feature vectors from models that can describe the characteristics of a geometric shape model; then, we applied those feature vectors to a geometric histogram that displays a probability distribution obtained by the similarity analysis algorithm. In addition, we developed a computer-aided inspection planning system that corrects non-applied measuring points that are caused by minute geometry differences between the two models and generates the final touch-probe path.
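    The similarity analysis between feature vectors of a new and an old model can be sketched with a plain cosine measure over histogram-style feature vectors; the vectors below are invented for illustration, and the paper's actual geometric-histogram algorithm may differ:

```python
# Hedged sketch: comparing shape-feature histograms of an old and a new
# 3-D model with cosine similarity. The histogram values are assumptions.

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return dot / norm

old_model = [0.1, 0.4, 0.3, 0.2]    # normalized shape-feature histogram (made up)
new_model = [0.1, 0.35, 0.35, 0.2]  # slightly different geometry
print(round(cosine_similarity(old_model, new_model), 3))  # → 0.992
```

A high score would indicate that the measuring points of the archived model are a good starting template, with only the deviating features needing corrected touch-probe points.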

  6. Accuracy test for link prediction in terms of similarity index: The case of WS and BA models

    Science.gov (United States)

    Ahn, Min-Woo; Jung, Woo-Sung

    2015-07-01

    Link prediction is a technique that uses the topological information in a given network to infer the missing links in it. Since past research on link prediction has primarily focused on enhancing performance for given empirical systems, negligible attention has been devoted to link prediction with regard to network models. In this paper, we thus apply link prediction to two network models: The Watts-Strogatz (WS) model and Barabási-Albert (BA) model. We attempt to gain a better understanding of the relation between accuracy and each network parameter (mean degree, the number of nodes and the rewiring probability in the WS model) through network models. Six similarity indices are used, with precision and area under the ROC curve (AUC) value as the accuracy metrics. We observe a positive correlation between mean degree and accuracy, and size independence of the AUC value.
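    The common-neighbours index is among the simplest of the similarity indices used in such studies. A minimal sketch of how it scores a held-out link and how the AUC metric rewards that score (the toy graph and the sampling scheme are illustrative assumptions, not the study's setup):

```python
# Hedged sketch: common-neighbours similarity scoring plus the standard
# AUC evaluation (a missing link should outscore a random non-link).
import random

def common_neighbors_score(adj, u, v):
    """Similarity score = number of shared neighbours of u and v."""
    return len(adj[u] & adj[v])

def auc(adj, missing, non_links, trials=1000, seed=0):
    """AUC: probability that a missing link outscores a sampled non-link."""
    rng = random.Random(seed)
    hits = 0.0
    for _ in range(trials):
        sm = common_neighbors_score(adj, *rng.choice(missing))
        sn = common_neighbors_score(adj, *rng.choice(non_links))
        hits += 1.0 if sm > sn else 0.5 if sm == sn else 0.0
    return hits / trials

# Toy graph: path 0-1-2-3; treat edge (0, 2) as the held-out "missing" link.
edges = [(0, 1), (1, 2), (2, 3)]
adj = {i: set() for i in range(4)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

missing = [(0, 2)]    # true link: shares neighbour 1, score 1
non_links = [(0, 3)]  # never-present link: no shared neighbour, score 0
print(auc(adj, missing, non_links))  # → 1.0
```

In a real experiment the graph would come from a WS or BA generator and the missing/non-link pairs would be sampled from a random edge split.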

  7. A process improvement model for software verification and validation

    Science.gov (United States)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and the space station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  8. An Improved Valuation Model for Technology Companies

    Directory of Open Access Journals (Sweden)

    Ako Doffou

    2015-06-01

    Full Text Available This paper estimates some of the parameters of the Schwartz and Moon (2001) model using cross-sectional data. Stochastic costs, future financing, capital expenditures and depreciation are taken into account. Some special conditions are also set: the speed of adjustment parameters are equal; the implied half-life of the sales growth process is linked to analyst forecasts; and the risk-adjustment parameter is inferred from the company's observed stock price beta. The model is illustrated in the valuation of Google, Amazon, eBay, Facebook and Yahoo. The improved model is far superior to the Schwartz and Moon (2001) model.

  9. Improvement of MARS code reflood model

    International Nuclear Information System (INIS)

    Hwang, Moonkyu; Chung, Bub-Dong

    2011-01-01

    A specifically designed heat transfer model for the reflood process, which normally occurs at low flow and low pressure, was originally incorporated in the MARS code. The model is essentially identical to that of the RELAP5/MOD3.3 code. The model, however, is known to have under-estimated the peak cladding temperature (PCT) with an earlier turn-over. In this study, the original MARS code reflood model is improved. Based on extensive sensitivity studies for both the hydraulic and wall heat transfer models, it is found that the dispersed flow film boiling (DFFB) wall heat transfer is the most influential process determining the PCT, whereas the interfacial drag model most affects the quenching time through the liquid carryover phenomenon. The model proposed by Bajorek and Young is incorporated for the DFFB wall heat transfer. Both space grid and droplet enhancement models are incorporated. Inverted annular film boiling (IAFB) is modeled by using the original PSI model of the code. The flow transition between the DFFB and the IAFB is modeled using the TRACE code interpolation. A gas velocity threshold is also added to limit the top-down quenching effect. Assessment calculations are performed with the original and modified MARS codes for the Flecht-Seaset test and the RBHT test. Improvements are observed in terms of the PCT and quenching time predictions in the Flecht-Seaset assessment. In the case of the RBHT assessment, the improvement over the original MARS code is found to be marginal. A space grid effect, however, is clearly seen with the modified version of the MARS code. (author)

  10. [Establishment of the mathematic model of total quantum statistical moment standard similarity for application to medical theoretical research].

    Science.gov (United States)

    He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian

    2013-09-01

    The paper aims to elucidate and establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate the application of the model to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, then validated and illustrated by the pharmacokinetics of three ingredients in Buyanghuanwu decoction and three data analytical methods for them, and by analysis of the chromatographic fingerprints of various extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract. The established model consists of the following main parameters: (1) total quantum statistical moment similarity ST, the overlapped area of the two normal distribution probability density curves obtained by conversion of the two TQSM parameters; (2) total variability DT, a confidence limit of the standard normal accumulation probability equal to the absolute difference between the two normal accumulation probabilities within the integration of their curve intersection; (3) total variable probability 1-ST, the standard normal distribution probability within the interval of DT; (4) total variable probability (1-beta)alpha; and (5) stable confident probability beta(1-alpha): the correct probability for making positive and negative conclusions under confidence coefficient alpha.
With the model, we analyzed the TQSMS similarities of the pharmacokinetics of three ingredients in Buyanghuanwu decoction and of three data analytical methods for them, which were in the range 0.3852-0.9875, illuminating their different pharmacokinetic behaviors; and the TQSMS similarities (ST) of the chromatographic fingerprints of various extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract were in the range 0.6842-0.9992, which showed different constituents
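    The core quantity ST, the overlapped area of two normal probability density curves, can be sketched numerically; the means, standard deviations and integration grid below are illustrative assumptions, not values from the study:

```python
# Hedged sketch: ST as the overlap area of two normal pdf curves,
# approximated by integrating min(pdf1, pdf2) on a fine grid.
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def overlap_area(mu1, s1, mu2, s2, n=20000):
    """Numerically integrate min(pdf1, pdf2) over a wide interval."""
    lo = min(mu1 - 8 * s1, mu2 - 8 * s2)
    hi = max(mu1 + 8 * s1, mu2 + 8 * s2)
    dx = (hi - lo) / n
    return sum(min(normal_pdf(lo + i * dx, mu1, s1),
                   normal_pdf(lo + i * dx, mu2, s2)) for i in range(n)) * dx

print(round(overlap_area(0.0, 1.0, 0.0, 1.0), 3))  # identical curves → 1.0
print(round(overlap_area(0.0, 1.0, 2.0, 1.0), 3))  # separated curves → 0.317
```

A value near 1 indicates nearly identical statistical-moment profiles (e.g. two fingerprints of the same extract), while smaller values quantify divergence.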

  11. Different relationships between temporal phylogenetic turnover and phylogenetic similarity in two forests were detected by a new null model.

    Science.gov (United States)

    Huang, Jian-Xiong; Zhang, Jian; Shen, Yong; Lian, Ju-yu; Cao, Hong-lin; Ye, Wan-hui; Wu, Lin-fang; Bin, Yue

    2014-01-01

    Ecologists have been monitoring community dynamics with the purpose of understanding the rates and causes of community change. However, there is a lack of monitoring of community dynamics from the perspective of phylogeny. We attempted to understand temporal phylogenetic turnover in a 50 ha tropical forest (Barro Colorado Island, BCI) and a 20 ha subtropical forest (Dinghushan in southern China, DHS). To obtain the temporal phylogenetic turnover expected under random conditions, two null models were used. The first shuffled species names, an approach widely used in community phylogenetic analyses. The second simulated demographic processes with careful consideration of the variation in dispersal ability among species and the variations in mortality both among species and among size classes. With the two models, we tested the relationships between temporal phylogenetic turnover and phylogenetic similarity at different spatial scales in the two forests. Results were more consistent with previous findings under the second null model, suggesting that it is more appropriate for our purposes. With the second null model, a significantly positive relationship was detected between phylogenetic turnover and phylogenetic similarity in BCI at a 10 m×10 m scale, potentially indicating phylogenetic density dependence. This relationship in DHS was significantly negative at three of five spatial scales, which could indicate abiotic filtering processes in community assembly. Using variation partitioning, we found that phylogenetic similarity contributed to variation in temporal phylogenetic turnover in the DHS plot but not in the BCI plot. The mechanisms of community assembly in BCI and DHS thus differ from a phylogenetic perspective; only the second null model detected this difference, indicating the importance of choosing a proper null model.

  12. Improved model for solar heating of buildings

    OpenAIRE

    Lie, Bernt

    2015-01-01

    A considerable future increase in the global energy use is expected, and the effects of energy conversion on the climate are already observed. Future energy conversion should thus be based on resources that have negligible climate effects; solar energy is perhaps the most important of such resources. The presented work builds on a previous complete model for solar heating of a house; here the aim is to introduce ventilation heat recovery and to improve the hot water storage model. Ventilation he...

  13. The effects of gravity on human walking: a new test of the dynamic similarity hypothesis using a predictive model.

    Science.gov (United States)

    Raichlen, David A

    2008-09-01

    The dynamic similarity hypothesis (DSH) suggests that differences in animal locomotor biomechanics are due mostly to differences in size. According to the DSH, when the ratios of inertial to gravitational forces are equal between two animals that differ in size [e.g. at equal Froude numbers, where Froude = velocity^2/(gravity x hip height)], their movements can be made similar by multiplying all time durations by one constant, all forces by a second constant and all linear distances by a third constant. The DSH has been generally supported by numerous comparative studies showing that as inertial forces differ (i.e. differences in the centripetal force acting on the animal due to variation in hip heights), animals walk with dynamic similarity. However, humans walking in simulated reduced gravity do not walk with dynamically similar kinematics. The simulated gravity experiments did not completely account for the effects of gravity on all body segments, and the importance of gravity in the DSH requires further examination. This study uses a kinematic model to predict the effects of gravity on human locomotion, taking into account both the effects of gravitational forces on the upper body and on the limbs. Results show that dynamic similarity is maintained in altered gravitational environments. Thus, the DSH does account for differences in the inertial forces governing locomotion (e.g. differences in hip height) as well as differences in the gravitational forces governing locomotion.
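    The Froude relation quoted above can be turned into a small worked example; the hip height and walking speed are assumed illustrative values, not data from the study:

```python
# Hedged sketch of the Froude number and dynamic similarity across gravities.
# All numeric inputs (hip height, speeds) are illustrative assumptions.

def froude(velocity, gravity, hip_height):
    """Dimensionless ratio of inertial (centripetal) to gravitational force."""
    return velocity ** 2 / (gravity * hip_height)

def similar_speed(gravity, hip_height, target_froude):
    """Speed giving the same Froude number under a different gravity."""
    return (target_froude * gravity * hip_height) ** 0.5

g_earth, g_moon, hip = 9.81, 1.62, 0.9   # m/s^2, m/s^2, m (assumed hip height)
fr = froude(1.25, g_earth, hip)           # a typical walking speed on Earth
v_moon = similar_speed(g_moon, hip, fr)   # dynamically similar lunar speed
print(round(fr, 3), round(v_moon, 2))     # → 0.177 0.51
```

Matching Froude numbers is what "dynamically similar" means operationally: the slower lunar speed keeps the inertial-to-gravitational force ratio equal to the Earth case.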

  14. Improved SPICE electrical model of silicon photomultipliers

    Energy Technology Data Exchange (ETDEWEB)

    Marano, D., E-mail: davide.marano@oact.inaf.it [INAF, Osservatorio Astrofisico di Catania, Via S. Sofia 78, I-95123 Catania (Italy); Bonanno, G.; Belluso, M.; Billotta, S.; Grillo, A.; Garozzo, S.; Romeo, G. [INAF, Osservatorio Astrofisico di Catania, Via S. Sofia 78, I-95123 Catania (Italy); Catalano, O.; La Rosa, G.; Sottile, G.; Impiombato, D.; Giarrusso, S. [INAF, Istituto di Astrofisica Spaziale e Fisica Cosmica di Palermo, Via U. La Malfa 153, I-90146 Palermo (Italy)

    2013-10-21

    The present work introduces an improved SPICE equivalent electrical model of silicon photomultiplier (SiPM) detectors, in order to simulate and predict their transient response to avalanche triggering events. In particular, the developed circuit model provides a careful investigation of the magnitude and timing of the read-out signals and can therefore be exploited to perform reliable circuit-level simulations. The adopted modeling approach is strictly related to the physics of each basic microcell constituting the SiPM device, and allows the avalanche timing as well as the photodiode current and voltage to be accurately simulated. Predictive capabilities of the proposed model are demonstrated by means of experimental measurements on a real SiPM detector. Simulated and measured pulses are found to be in good agreement with the expected results. -- Highlights: • An improved SPICE electrical model of silicon photomultipliers is proposed. • The developed model provides a truthful representation of the physics of the device. • An accurate charge collection as a function of the overvoltage is achieved. • The adopted electrical model allows reliable circuit-level simulations to be performed. • Predictive capabilities of the adopted model are experimentally demonstrated.

  15. Improving Representational Competence with Concrete Models

    Science.gov (United States)

    Stieff, Mike; Scopelitis, Stephanie; Lira, Matthew E.; DeSutter, Dane

    2016-01-01

    Representational competence is a primary contributor to student learning in science, technology, engineering, and math (STEM) disciplines and an optimal target for instruction at all educational levels. We describe the design and implementation of a learning activity that uses concrete models to improve students' representational competence and…

  16. Improved transition models for cepstral trajectories

    CSIR Research Space (South Africa)

    Badenhorst, J

    2012-11-01

    Full Text Available We improve on a piece-wise linear model of the trajectories of Mel Frequency Cepstral Coefficients, which are commonly used as features in Automatic Speech Recognition. For this purpose, we have created a very clean single-speaker corpus, which...

  17. School Improvement Model to Foster Student Learning

    Science.gov (United States)

    Rulloda, Rudolfo Barcena

    2011-01-01

    Many classroom teachers are still using the traditional teaching methods. The traditional teaching methods are one-way learning process, where teachers would introduce subject contents such as language arts, English, mathematics, science, and reading separately. However, the school improvement model takes into account that all students have…

  18. Similarity-based multi-model ensemble approach for 1-15-day advance prediction of monsoon rainfall over India

    Science.gov (United States)

    Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati

    2018-04-01

    The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on a similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. This approach is based on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in conventional MME approaches. In this approach, the training dataset is selected by matching the present-day condition to the archived dataset; the days with the most similar conditions are identified and used to train the model. The coefficients thus generated are used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. the European Centre for Medium-Range Weather Forecasts (ECMWF), the United Kingdom Meteorological Office (UKMO), the National Centers for Environmental Prediction (NCEP) and the China Meteorological Administration (CMA), were used for developing the SMME forecasts. Forecasts of 1-5, 6-10 and 11-15 days were generated using the newly developed approach for each pentad of June-September during the years 2008-2013, and the skill of the model was analysed using verification scores, viz. the equitable threat score (ETS), the mean absolute error (MAE), Pearson's correlation coefficient and the Nash-Sutcliffe model efficiency index. Statistical analysis of the SMME forecasts shows superior forecast skill compared to the conventional MME and the individual models for all the pentads, viz. 1-5, 6-10 and 11-15 days.
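    The similarity-based selection idea can be sketched as follows. The archive, the Euclidean distance measure and the inverse-error weighting are simplified assumptions for illustration, not the SMME's actual regression scheme:

```python
# Hedged sketch: pick archived days whose member forecasts most resemble
# today's, then weight each model by its inverse error on those days.
# All numbers are invented for illustration.

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def smme_weights(archive, today, k=2):
    """archive: list of (member_forecasts, observed) pairs; today: member forecasts."""
    nearest = sorted(archive, key=lambda rec: euclid(rec[0], today))[:k]
    weights = []
    for m in range(len(today)):
        mae = sum(abs(f[m] - obs) for f, obs in nearest) / k  # per-model error
        weights.append(1.0 / (mae + 1e-6))                    # inverse-error weight
    total = sum(weights)
    return [w / total for w in weights]

# Two models, four archived days: model 0 tracks the observation closely.
archive = [([10.0, 14.0], 10.2), ([11.0, 15.0], 11.1),
           ([20.0, 25.0], 19.8), ([21.0, 26.0], 21.5)]
today = [10.5, 14.5]
w = smme_weights(archive, today)
forecast = sum(wi * fi for wi, fi in zip(w, today))
print([round(x, 2) for x in w], round(forecast, 2))  # weight on model 0 dominates
```

The point of the similarity step is that the weights reflect how each member performed under conditions like today's, rather than over an arbitrary recent training window.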

  19. Improved double Q2 rescaling model

    International Nuclear Information System (INIS)

    Gao Yonghua

    2001-01-01

    The authors present an improved double Q^2 rescaling model. Based on the condition of nuclear momentum conservation, the authors have found a Q^2 rescaling parameters' formula for the model, establishing the connection between the Q^2 rescaling parameter ζ_i (i = v, s, g) and the mean binding energy in the nucleus. By using this model, the authors could explain the experimental data of the EMC effect in the whole x region, the nuclear Drell-Yan process and the J/Ψ photoproduction process.

  20. Applications of Analytical Self-Similar Solutions of Reynolds-Averaged Models for Instability-Induced Turbulent Mixing

    Science.gov (United States)

    Hartland, Tucker; Schilling, Oleg

    2017-11-01

    Analytical self-similar solutions to several families of single- and two-scale, eddy viscosity and Reynolds stress turbulence models are presented for Rayleigh-Taylor, Richtmyer-Meshkov, and Kelvin-Helmholtz instability-induced turbulent mixing. The use of algebraic relationships between model coefficients and physical observables (e.g., experimental growth rates) following from the self-similar solutions to calibrate a member of a given family of turbulence models is shown. It is demonstrated numerically that the algebraic relations accurately predict the value and variation of physical outputs of a Reynolds-averaged simulation in flow regimes that are consistent with the simplifying assumptions used to derive the solutions. The use of experimental and numerical simulation data on Reynolds stress anisotropy ratios to calibrate a Reynolds stress model is briefly illustrated. The implications of the analytical solutions for future Reynolds-averaged modeling of hydrodynamic instability-induced mixing are briefly discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  1. A study of the predictive model on the user reaction time using the information amount and similarity

    International Nuclear Information System (INIS)

    Lee, Sungjin; Heo, Gyunyoung; Chang, S.H.

    2004-01-01

Human operations through a user interface are divided into two types. One is the single operation, performed on a static interface. The other is the sequential operation, which achieves a goal by handling several displays through the operator's navigation in a CRT-based console. A sequential operation is similar in meaning to a continuous task. Most operations in recently developed computer applications correspond to sequential operations, and a single operation can be considered as part of a sequential operation. In the area of HCI (human-computer interaction) evaluation, the Hick-Hyman law counts as the most powerful theory. The most important factor in the Hick-Hyman law's equation for choice reaction time is the quantified amount of information conveyed by a statement, stimulus, or event. Generally, we can expect that if there are similarities between a series of interfaces, the human operator is able to use his attention resources effectively; that is, the performance of the human operator is increased by the similarity. The similarity may affect the allocation of attention resources based on separate STSS (short-term sensory store) and long-term memory. Theories related to this concept include the task-switching paradigm and the law of practice. However, it is not easy to explain human operator performance with only the similarity or the information amount, and there are few theories that explain performance with a combination of the two. The objective of this paper is to propose and validate a quantitative, predictive model of user reaction time in CRT-based displays. Another objective is to validate theories related to human cognition and perception, with the Hick-Hyman law and the law of practice as representative theories. (author)

  2. Improved Inference of Heteroscedastic Fixed Effects Models

    Directory of Open Access Journals (Sweden)

    Afshan Saeed

    2016-12-01

Heteroscedasticity is a serious problem that distorts estimation and testing of the panel data model (PDM). Arellano (1987) proposed the White (1980) estimator for PDMs with heteroscedastic errors, but it provides erroneous inference for data sets that include high leverage points. In this paper, our attempt is to improve the heteroscedasticity-consistent covariance matrix estimator (HCCME) for panel data sets with high leverage points. To draw robust inference for the PDM, we focus on improving the kernel bootstrap estimators proposed by Racine and MacKinnon (2007). A Monte Carlo scheme is used to assess the results.
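As a rough illustration of the kind of estimator being improved here, a minimal cross-sectional sketch of White's HC0 sandwich estimator and the leverage-adjusted HC3 variant (the function name and the simple OLS setting are illustrative, not the paper's panel-data procedure):

```python
import numpy as np

def ols_hc_covariance(X, y, kind="HC3"):
    """OLS coefficients with a heteroscedasticity-consistent covariance.

    HC0 is White's (1980) estimator; HC3 inflates squared residuals by
    (1 - h_i)^-2 to guard against high-leverage observations.
    """
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)  # leverage values h_i
    if kind == "HC0":
        u2 = resid ** 2
    elif kind == "HC3":
        u2 = (resid / (1.0 - h)) ** 2
    else:
        raise ValueError(kind)
    meat = (X * u2[:, None]).T @ X               # X' diag(u2) X
    return beta, XtX_inv @ meat @ XtX_inv        # sandwich covariance
```

Because (1 - h_i)^-2 ≥ 1, the HC3 variance estimates are never smaller than HC0's, which is exactly the protection against high leverage points discussed in the abstract.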

  3. An improved interfacial bonding model for material interface modeling

    Science.gov (United States)

    Lin, Liqiang; Wang, Xiaodu; Zeng, Xiaowei

    2016-01-01

    An improved interfacial bonding model was proposed from potential function point of view to investigate interfacial interactions in polycrystalline materials. It characterizes both attractive and repulsive interfacial interactions and can be applied to model different material interfaces. The path dependence of work-of-separation study indicates that the transformation of separation work is smooth in normal and tangential direction and the proposed model guarantees the consistency of the cohesive constitutive model. The improved interfacial bonding model was verified through a simple compression test in a standard hexagonal structure. The error between analytical solutions and numerical results from the proposed model is reasonable in linear elastic region. Ultimately, we investigated the mechanical behavior of extrafibrillar matrix in bone and the simulation results agreed well with experimental observations of bone fracture. PMID:28584343

  4. General relativistic self-similar waves that induce an anomalous acceleration into the standard model of cosmology

    CERN Document Server

    Smoller, Joel

    2012-01-01

    We prove that the Einstein equations in Standard Schwarzschild Coordinates close to form a system of three ordinary differential equations for a family of spherically symmetric, self-similar expansion waves, and the critical ($k=0$) Friedmann universe associated with the pure radiation phase of the Standard Model of Cosmology (FRW), is embedded as a single point in this family. Removing a scaling law and imposing regularity at the center, we prove that the family reduces to an implicitly defined one parameter family of distinct spacetimes determined by the value of a new {\\it acceleration parameter} $a$, such that $a=1$ corresponds to FRW. We prove that all self-similar spacetimes in the family are distinct from the non-critical $k\

  5. Visual Similarity of Words Alone Can Modulate Hemispheric Lateralization in Visual Word Recognition: Evidence From Modeling Chinese Character Recognition.

    Science.gov (United States)

    Hsiao, Janet H; Cheung, Kit

    2016-03-01

    In Chinese orthography, the most common character structure consists of a semantic radical on the left and a phonetic radical on the right (SP characters); the minority, opposite arrangement also exists (PS characters). Recent studies showed that SP character processing is more left hemisphere (LH) lateralized than PS character processing. Nevertheless, it remains unclear whether this is due to phonetic radical position or character type frequency. Through computational modeling with artificial lexicons, in which we implement a theory of hemispheric asymmetry in perception but do not assume phonological processing being LH lateralized, we show that the difference in character type frequency alone is sufficient to exhibit the effect that the dominant type has a stronger LH lateralization than the minority type. This effect is due to higher visual similarity among characters in the dominant type than the minority type, demonstrating the modulation of visual similarity of words on hemispheric lateralization. Copyright © 2015 Cognitive Science Society, Inc.

  6. Improved Collaborative Filtering Algorithm using Topic Model

    Directory of Open Access Journals (Sweden)

    Liu Na

    2016-01-01

Collaborative filtering algorithms make use of interaction ratings between users and items to generate recommendations. Similarity among users or items is mostly calculated based on ratings, without considering explicit properties of the users or items involved. In this paper, we propose a collaborative filtering algorithm using a topic model. We describe the user-item matrix as a document-word matrix: users are represented as random mixtures over items, and each item is characterized by a distribution over users. The experiments showed that the proposed algorithm achieved better performance compared with other state-of-the-art algorithms on MovieLens data sets.
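To illustrate the idea of representing users as mixtures over latent topics, a minimal sketch using multiplicative-update NMF as a simple stand-in for the LDA-style topic model of the paper (all names, sizes, and parameters are illustrative):

```python
import numpy as np

def factorize_ratings(R, n_topics=2, n_iter=200, seed=0):
    """Topic-style factorization of a user-item rating matrix.

    R ~ W @ H, where each row of W is a user's mixture over latent
    topics and each row of H is a topic's loading over items.
    Multiplicative-update NMF is used here as a stand-in for LDA.
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    W = rng.random((n_users, n_topics)) + 0.1
    H = rng.random((n_topics, n_items)) + 0.1
    eps = 1e-9
    for _ in range(n_iter):
        H *= (W.T @ R) / (W.T @ W @ H + eps)
        W *= (R @ H.T) / (W @ H @ H.T + eps)
    return W, H

def recommend(R, W, H, user, k=1):
    """Top-k unrated items for `user`, scored by the reconstruction."""
    scores = (W @ H)[user].copy()
    scores[R[user] > 0] = -np.inf        # mask already-rated items
    return np.argsort(scores)[::-1][:k]
```

On a block-structured rating matrix, the reconstruction W @ H assigns high scores to unrated items that users with similar topic mixtures rated highly, which is the recommendation mechanism described in the abstract.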

  7. Driving clinical study efficiency by using a productivity breakdown model: comparative evaluation of a global clinical study and a similar Japanese study.

    Science.gov (United States)

    Takahashi, K; Sengoku, S; Kimura, H

    2011-02-01

A fundamental management imperative of pharmaceutical companies is to contain the surging costs of developing and launching drugs globally. Clinical studies are a research and development (R&D) cost driver. The objective of this study was to develop a productivity breakdown model, or a key performance indicator (KPI) tree, for an entire clinical study and to use it to compare a global clinical study with a similar Japanese study. We thereby hope to identify means of improving study productivity. We developed the new clinical study productivity breakdown model, covering operational aspects and cost factors. Elements for improving clinical study productivity were assessed from a management viewpoint by comparing empirical tracking data from a global clinical study with a Japanese study with similar protocols. The following unique and material differences, beyond simple international differences in cost of living, that could affect the efficiency of future clinical trials were identified: (i) more frequent site visits in the Japanese study, (ii) head counts at the Japanese study sites more than double those of the global study and (iii) a shorter enrollment time window of about a third that of the global study at the Japanese study sites. We identified major differences in the performance of the two studies. These findings demonstrate the potential of the KPI tree for improving clinical study productivity. Trade-offs, such as those between reduction in head count at study sites and expansion of the enrollment time window, must be considered carefully. © 2010 Blackwell Publishing Ltd.

  8. An Improved Network Security Situation Awareness Model

    Directory of Open Access Journals (Sweden)

    Li Fangwei

    2015-08-01

In order to reflect the performance of network security assessment fully and accurately, a new network security situation awareness model based on information fusion is proposed. The network security situation is the result of fusing three aspects of evaluation. In terms of attack, to improve the accuracy of evaluation, a situation assessment method for DDoS attacks based on data-packet information is proposed. In terms of vulnerability, an improved Common Vulnerability Scoring System (CVSS) is proposed, making the assessment more comprehensive. In terms of node weights, a method of calculating combined weights and optimizing the result with a Sequential Quadratic Programming (SQP) algorithm, which reduces the uncertainty of fusion, is proposed. To verify the validity and necessity of the method, a testing platform was built and used to evaluate the DARPA 2000 data sets. Experiments show that the method can improve the accuracy of evaluation results.

  9. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono

    2011-04-07

    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  10. An improved gravity model for Mars: Goddard Mars Model 1

    Science.gov (United States)

    Smith, D. E.; Lerch, F. J.; Nerem, R. S.; Zuber, M. T.; Patel, G. B.; Fricke, S. K.; Lemoine, F. G.

    1993-01-01

Doppler tracking data of three orbiting spacecraft have been reanalyzed to develop a new gravitational field model for the planet Mars, Goddard Mars Model 1 (GMM-1). This model employs nearly all available data, consisting of approximately 1100 days of S band tracking data collected by NASA's Deep Space Network from the Mariner 9 and Viking 1 and Viking 2 spacecraft, in seven different orbits, between 1971 and 1979. GMM-1 is complete to spherical harmonic degree and order 50, which corresponds to a half-wavelength spatial resolution of 200-300 km where the data permit. GMM-1 represents satellite orbits with considerably better accuracy than previous Mars gravity models and shows greater resolution of identifiable geological structures. The notable improvement in GMM-1 over previous models is a consequence of several factors: improved computational capabilities, the use of optimum weighting and least squares collocation solution techniques which stabilized the behavior of the solution at high degree and order, and the use of longer satellite arcs than employed in previous solutions that were made possible by improved force and measurement models. The inclusion of X band tracking data from the 379-km altitude, near-polar orbiting Mars Observer spacecraft should provide a significant improvement over GMM-1, particularly at high latitudes where current data poorly resolve the gravitational signature of the planet.
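The quoted spatial resolution follows from a common rule of thumb relating the maximum spherical harmonic degree l to half-wavelength resolution, λ/2 ≈ πR/l. A quick check (the function name and the specific radius value are our own choices):

```python
import math

MARS_MEAN_RADIUS_KM = 3389.5  # mean radius of Mars

def half_wavelength_km(degree, radius_km=MARS_MEAN_RADIUS_KM):
    """Half-wavelength resolution of a spherical harmonic field model,
    using the rule of thumb lambda/2 = pi * R / l for maximum degree l."""
    return math.pi * radius_km / degree
```

For degree and order 50 this gives roughly 213 km, consistent with the 200-300 km resolution quoted for GMM-1.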

  11. Mathematical evaluation of similarity factor using various weighing approaches on aceclofenac marketed formulations by model-independent method.

    Science.gov (United States)

    Soni, T G; Desai, J U; Nagda, C D; Gandhi, T R; Chotai, N P

    2008-01-01

The US Food and Drug Administration's (FDA's) guidance for industry on dissolution testing of immediate-release solid oral dosage forms describes that drug dissolution may be the rate-limiting step for drug absorption in the case of low solubility/high permeability drugs (BCS class II drugs). US FDA guidance describes the model-independent mathematical approach proposed by Moore and Flanner for calculating a similarity factor (f2) of dissolution across a suitable time interval. In the present study, the similarity factor was calculated on dissolution data of two marketed aceclofenac tablets (a BCS class II drug) using various weighing approaches proposed by Gohel et al. The proposed approaches were compared with a conventional approach (W = 1). On the basis of consideration of variability, preference is given in the order approach 3 > approach 2 > approach 1, as approach 3 considers batch-to-batch as well as within-sample variability and shows the best similarity profile. Approach 2 considers batch-to-batch variability with higher specificity than approach 1.
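The Moore and Flanner similarity factor referenced here has a standard closed form; a minimal sketch with an optional weighting vector in the spirit of the Gohel et al. approaches (the exact weighting schemes of approaches 1-3 are not reproduced):

```python
import math

def f2_similarity(ref, test, weights=None):
    """FDA similarity factor f2 (Moore & Flanner) for two dissolution
    profiles sampled at the same time points.

    f2 = 50 * log10(100 / sqrt(1 + sum(w*(R-T)^2) / sum(w)))

    The conventional approach uses w = 1 at every time point; f2 >= 50
    is generally taken to indicate similar dissolution profiles.
    """
    if weights is None:
        weights = [1.0] * len(ref)
    msd = sum(w * (r - t) ** 2
              for w, r, t in zip(weights, ref, test)) / sum(weights)
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))
```

Identical profiles give f2 = 100, and a uniform 10% difference at every time point gives f2 just under 50, the usual similarity threshold.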

  12. Improving Bioenergy Crops through Dynamic Metabolic Modeling

    Directory of Open Access Journals (Sweden)

    Mojdeh Faraji

    2017-10-01

Enormous advances in genetics and metabolic engineering have made it possible, in principle, to create new plants and crops with improved yield through targeted molecular alterations. However, while the potential is beyond doubt, the actual implementation of envisioned new strains is often difficult, due to the diverse and complex nature of plants. Indeed, the intrinsic complexity of plants makes intuitive predictions difficult and often unreliable. The hope for overcoming this challenge is that methods of data mining and computational systems biology may become powerful enough that they could serve as beneficial tools for guiding future experimentation. In the first part of this article, we review the complexities of plants, as well as some of the mathematical and computational methods that have been used in the recent past to deepen our understanding of crops and their potential yield improvements. In the second part, we present a specific case study that indicates how robust models may be employed for crop improvements. This case study focuses on the biosynthesis of lignin in switchgrass (Panicum virgatum). Switchgrass is considered one of the most promising candidates for the second generation of bioenergy production, which does not use edible plant parts. Lignin is important in this context, because it impedes the use of cellulose in such inedible plant materials. The dynamic model offers a platform for investigating the pathway behavior in transgenic lines. In particular, it allows predictions of lignin content and composition in numerous genetic perturbation scenarios.

  13. Improving PSA quality of KSNP PSA model

    International Nuclear Information System (INIS)

    Yang, Joon Eon; Ha, Jae Joo

    2004-01-01

    In the RIR (Risk-informed Regulation), PSA (Probabilistic Safety Assessment) plays a major role because it provides overall risk insights for the regulatory body and utility. Therefore, the scope, the level of details and the technical adequacy of PSA, i.e. the quality of PSA is to be ensured for the successful RIR. To improve the quality of Korean PSA, we evaluate the quality of the KSNP (Korean Standard Nuclear Power Plant) internal full-power PSA model based on the 'ASME PRA Standard' and the 'NEI PRA Peer Review Process Guidance.' As a working group, PSA experts of the regulatory body and industry also participated in the evaluation process. It is finally judged that the overall quality of the KSNP PSA is between the ASME Standard Capability Category I and II. We also derive some items to be improved for upgrading the quality of the PSA up to the ASME Standard Capability Category II. In this paper, we show the result of quality evaluation, and the activities to improve the quality of the KSNP PSA model

  14. Robust Multiscale Modelling Of Two-Phase Steels On Heterogeneous Hardware Infrastructures By Using Statistically Similar Representative Volume Element

    Directory of Open Access Journals (Sweden)

    Rauch Ł.

    2015-09-01

The coupled finite element multiscale simulations (FE2) require costly numerical procedures in both macro and micro scales. Attempts to improve numerical efficiency are focused mainly on two areas of development, i.e. parallelization/distribution of numerical procedures and simplification of virtual material representation. One representative of both mentioned areas is the idea of the Statistically Similar Representative Volume Element (SSRVE). It aims at the reduction of the number of finite elements in the micro scale as well as at parallelization of the calculations in the micro scale, which can be performed without barriers. The simplification of the computational domain is realized by transformation of sophisticated images of material microstructure into artificially created simple objects characterized by features similar to their original equivalents. In existing solutions for two-phase steels, the SSRVE is created on the basis of the analysis of shape coefficients of the hard phase in the real microstructure and a search for a representative simple structure with similar shape coefficients. Optimization techniques were used to solve this task. In the present paper local strains and stresses are added to the cost function in optimization. Various forms of the objective function composed of different elements were investigated and used in the optimization procedure for the creation of the final SSRVE. The results are compared with respect to the efficiency of the procedure and the uniqueness of the solution. The best objective function, composed of shape coefficients as well as of strains and stresses, was proposed. Examples of SSRVEs determined for the investigated two-phase steel using that objective function are demonstrated in the paper. Each step of SSRVE creation is investigated from a computational efficiency point of view. The proposition of implementation of the whole computational procedure on modern High Performance Computing (HPC

  15. Improving surgeon utilization in an orthopedic department using simulation modeling

    Directory of Open Access Journals (Sweden)

    Simwita YW

    2016-10-01

Yusta W Simwita, Berit I Helgheim, Department of Logistics, Molde University College, Molde, Norway. Purpose: Worldwide, more than two billion people lack appropriate access to surgical services due to a mismatch between existing human resources and patient demands. Improving utilization of the existing workforce capacity can reduce the gap between surgical demand and available workforce capacity. In this paper, the authors use discrete event simulation to explore the care process at an orthopedic department. Our main focus is improving utilization of surgeons while minimizing patient wait time. Methods: The authors collaborated with orthopedic department personnel to map the current operations of the orthopedic care process in order to identify factors that influence poor surgeon utilization and high patient waiting time. The authors used an observational approach to collect data. The developed model was validated by comparing the simulation output with actual patient data collected from the studied orthopedic care process. The authors developed a proposal scenario to show how to improve surgeon utilization. Results: The simulation results showed that if ancillary services could be performed before the start of clinic examination services, the orthopedic care process could be highly improved, that is, improved surgeon utilization and reduced patient waiting time. Simulation results demonstrate that with improved surgeon utilization, up to a 55% increase in future demand can be accommodated without patients exceeding the current waiting time at this clinic, thus improving patient access to health care services. Conclusion: This study shows how simulation modeling can be used to improve health care processes. This study was limited to a single care process; however, the findings can be applied to improve other orthopedic care processes with similar operational characteristics. Keywords: waiting time, patient, health care process
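A minimal discrete-event sketch of the modeling approach: a single surgeon serving randomly arriving patients in order, reporting utilization and mean wait (all parameter values and names are illustrative, not taken from the studied clinic):

```python
import random

def simulate_clinic(n_patients=500, mean_interarrival=10.0,
                    mean_exam=8.0, seed=1):
    """Minimal discrete-event sketch of a single-surgeon clinic.

    Patients arrive with exponential interarrival times; the surgeon
    examines them first-come, first-served. Returns (surgeon
    utilization, mean patient wait) in the same time units.
    """
    random.seed(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_patients):
        t += random.expovariate(1.0 / mean_interarrival)
        arrivals.append(t)
    surgeon_free = 0.0   # time at which the surgeon next becomes idle
    busy = 0.0           # total time spent examining patients
    waits = []
    for arrival in arrivals:
        start = max(arrival, surgeon_free)
        service = random.expovariate(1.0 / mean_exam)
        waits.append(start - arrival)
        surgeon_free = start + service
        busy += service
    return busy / surgeon_free, sum(waits) / len(waits)
```

Rerunning such a model with ancillary services moved before the examination step (e.g., shorter effective exam times) is how scenarios like the one in the abstract are compared.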

  16. An Improved MUSIC Model for Gibbsite Surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Scott C.; Bickmore, Barry R.; Tadanier, Christopher J.; Rosso, Kevin M.

    2004-06-01

Here we use gibbsite as a model system with which to test a recently published bond-valence method for predicting intrinsic pKa values for surface functional groups on oxides. At issue is whether the method is adequate when valence parameters for the functional groups are derived from ab initio structure optimization of surfaces terminated by vacuum. If not, ab initio molecular dynamics (AIMD) simulations of solvated surfaces (which are much more computationally expensive) will have to be used. To do this, we had to evaluate extant gibbsite potentiometric titration data for which some estimate of edge and basal surface area was available. Applying BET and recently developed atomic force microscopy methods, we found that most of these data sets were flawed, in that their surface area estimates were probably wrong. Similarly, there may have been problems with many of the titration procedures. However, one data set was adequate on both counts, and we applied our method of intrinsic surface pKa prediction to fitting a MUSIC model to these data with considerable success: several features of the titration data were predicted well. However, the model fit was certainly not perfect, and we experienced some difficulties optimizing highly charged, vacuum-terminated surfaces. Therefore, we conclude that we probably need to do AIMD simulations of solvated surfaces to adequately predict intrinsic pKa values for surface functional groups.

  17. Does model performance improve with complexity? A case study with three hydrological models

    Science.gov (United States)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time simple conceptual models have also advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in the case of runoff but not for soil moisture. Furthermore, the most sophisticated PREVAH model shows an added value compared to the HBV model only in the case of soil moisture. Focusing on extreme events, we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance at lower altitudes as opposed to (pre-)alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).
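One standard metric for the agreement between simulated and observed runoff or soil moisture in hydrology is the Nash-Sutcliffe efficiency; the abstract does not name its specific metrics, so this is a generic sketch:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe model efficiency.

    Returns 1.0 for a perfect fit; 0.0 means the model is no better
    than predicting the observed mean; negative values are worse.
    """
    mean_obs = sum(obs) / len(obs)
    numerator = sum((o - s) ** 2 for o, s in zip(obs, sim))
    denominator = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - numerator / denominator
```

Computing such a score separately for runoff and for soil moisture, and separately for dry and wet periods, is how the variable- and condition-dependent performance differences described above are quantified.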

  18. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbo-machinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from various blade rows and non-deterministic (which includes random components) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the design and development cycle of turbomachinery.

  19. Improving carbon model phenology using data assimilation

    Science.gov (United States)

    Exrayat, Jean-François; Smallman, T. Luke; Bloom, A. Anthony; Williams, Mathew

    2015-04-01

    drivers. DALEC2-GSI showed a more realistic response to climate variability and fire disturbance than DALEC2. DALEC2-GSI more accurately reproduced the assimilated global LAI time series, particularly in areas with high levels of disturbance. This result is supported by more ecologically consistent trait combinations generated by the DALEC2-GSI calibration. In addition, using DALEC2-GSI we are able to map global information on ecosystem traits such as drought tolerance and adaptation to repeated fire disturbance. This demonstrates that utilizing data assimilation provides a useful means of improving the representation of processes within models.

  20. Simplification and Shift in Cognition of Political Difference: Applying the Geometric Modeling to the Analysis of Semantic Similarity Judgment

    Science.gov (United States)

    Kato, Junko; Okada, Kensuke

    2011-01-01

Perceiving differences by means of spatial analogies is intrinsic to human cognition. Multi-dimensional scaling (MDS) analysis based on Minkowski geometry has been used primarily on data from sensory similarity judgments, leaving judgments on abstract differences unanalyzed. Indeed, analysts have failed to find appropriate experimental or real-life data in this regard. Our MDS analysis used survey data on political scientists' judgments of the similarities and differences between political positions expressed in terms of distance. Both distance smoothing and majorization techniques were applied to a three-way dataset of similarity judgments provided by at least seven experts on at least five parties' positions on at least seven policies (i.e., originally yielding 245 dimensions) to substantially reduce the risk of local minima. The analysis found two dimensions, which were sufficient for mapping differences, and the city-block metric fit better than the Euclidean metric in all datasets obtained from 13 countries. Most city-block dimensions were highly correlated with the simplified criterion (i.e., the left-right ideology) for differences that is actually used in real politics. The isometry of the city-block and dominance metrics in two-dimensional space carries further implications. More specifically, individuals may pay attention to two dimensions (if represented in the city-block metric) or focus on a single dimension (if represented in the dominance metric) when judging differences between the same objects. Switching between metrics may be expected to occur during cognitive processing as frequently as the apparent discontinuities and shifts in human attention that may underlie changing judgments in real situations. Consequently, the results lend strong support to the validity of geometric models for representing an important social cognition, namely that of political differences, which is deeply rooted in human nature. PMID:21673959
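The isometry of the city-block and dominance (Chebyshev) metrics in two dimensions can be checked directly: a 45-degree rotation (with rescaling) maps one metric onto the other. A minimal sketch (function names are ours):

```python
def minkowski(u, v, p):
    """Minkowski distance: p=1 is city-block, p=2 Euclidean,
    p=float('inf') the dominance (Chebyshev) metric."""
    diffs = [abs(a - b) for a, b in zip(u, v)]
    if p == float("inf"):
        return max(diffs)
    return sum(d ** p for d in diffs) ** (1.0 / p)

def rotate45(point):
    """45-degree rotation of a 2-D point, with a factor-1/2 rescaling."""
    x, y = point
    return ((x + y) / 2.0, (x - y) / 2.0)
```

For any two 2-D points u and v, the dominance distance between rotate45(u) and rotate45(v) equals half the city-block distance between u and v, which is the isometry the authors invoke: the same configuration of political positions can be read as two attended dimensions or as a single dominant one.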

  1. Similar Improvements in Patient-Reported Outcomes Among Rheumatoid Arthritis Patients Treated with Two Different Doses of Methotrexate in Combination with Adalimumab: Results From the MUSICA Trial.

    Science.gov (United States)

    Kaeley, Gurjit S; MacCarter, Daryl K; Goyal, Janak R; Liu, Shufang; Chen, Kun; Griffith, Jennifer; Kupper, Hartmut; Garg, Vishvas; Kalabic, Jasmina

    2018-06-01

    In patients with rheumatoid arthritis (RA), combination treatment with methotrexate (MTX) and adalimumab is more effective than MTX monotherapy. From the patients' perspective, the impact of reduced MTX doses upon initiating adalimumab is not known. The objective was to evaluate the effects of low and high MTX doses in combination with adalimumab initiation on patient-reported outcomes (PROs), in MTX-inadequate responders (MTX-IR) with moderate-to-severe RA. MUSICA was a randomized, double-blind, controlled trial evaluating the efficacy of 7.5 or 20 mg/week MTX, in combination with adalimumab for 24 weeks in MTX-IR RA patients receiving prior MTX ≥ 15 mg/week for ≥ 12 weeks. PROs were recorded at each visit, including physical function, health-related quality-of-life, work productivity, quality-of-sleep, satisfaction with treatment medication, sexual impairment due to RA, patient global assessment of disease activity (PGA), and patient pain. Last observation carried forward was used to account for missing values. At baseline, patients in both MTX dosage groups had similar demographics, disease characteristics, and PRO scores. Overall, initiation of adalimumab led to significant improvements from baseline in the PROs assessed for both MTX dosage groups. Improvements in presenteeism from baseline were strongly correlated with corresponding improvements in SF-36 (vitality), pain, and physical function. Physical and mental well-being had a good correlation with improvement in sleep. Overall, improvements in disease activity from baseline were correlated with improvements in several PROs. The addition of adalimumab to MTX in MTX-IR patients with moderate-to-severe RA led to improvements in physical function, quality-of-life, work productivity, quality of sleep, satisfaction with treatment medication, and sexual impairment due to RA, regardless of the concomitant MTX dosage. AbbVie. Clinicaltrials.gov identifier, NCT01185288.

  2. Improving Marine Ecosystem Models with Biochemical Tracers

    Science.gov (United States)

    Pethybridge, Heidi R.; Choy, C. Anela; Polovina, Jeffrey J.; Fulton, Elizabeth A.

    2018-01-01

    Empirical data on food web dynamics and predator-prey interactions underpin ecosystem models, which are increasingly used to support strategic management of marine resources. These data have traditionally derived from stomach content analysis, but new and complementary forms of ecological data are increasingly available from biochemical tracer techniques. Extensive opportunities exist to improve the empirical robustness of ecosystem models through the incorporation of biochemical tracer data and derived indices, an area that is rapidly expanding because of advances in analytical developments and sophisticated statistical techniques. Here, we explore the trophic information required by ecosystem model frameworks (species, individual, and size based) and match them to the most commonly used biochemical tracers (bulk tissue and compound-specific stable isotopes, fatty acids, and trace elements). Key quantitative parameters derived from biochemical tracers include estimates of diet composition, niche width, and trophic position. Biochemical tracers also provide powerful insight into the spatial and temporal variability of food web structure and the characterization of dominant basal and microbial food web groups. A major challenge in incorporating biochemical tracer data into ecosystem models is scale and data type mismatches, which can be overcome with greater knowledge exchange and numerical approaches that transform, integrate, and visualize data.

  3. Modeling soil water content for vegetation modeling improvement

    Science.gov (United States)

    Cianfrani, Carmen; Buri, Aline; Zingg, Barbara; Vittoz, Pascal; Verrecchia, Eric; Guisan, Antoine

    2016-04-01

    Soil water content (SWC) is known to be important for plants as it affects the physiological processes regulating plant growth. Therefore, SWC controls plant distribution over the Earth's surface, ranging from deserts and grasslands to rain forests. Unfortunately, few data on SWC are available, as its measurement is very time-consuming, costly, and requires specific laboratory tools. The scarcity of SWC measurements in geographic space makes it difficult to model and spatially project SWC over larger areas. In particular, it prevents its inclusion in plant species distribution models (SDMs) as a predictor. The aims of this study were, first, to test a new methodology allowing the problem of scarce SWC measurements to be overcome and, second, to model and spatially project SWC in order to improve plant SDMs through the inclusion of SWC as a predictor. The study was developed in four steps. First, SWC was modeled by measuring it at 10 different pressures (expressed in pF and ranging from pF=0 to pF=4.2). The different pF values represent different degrees of soil water availability for plants. An ensemble of bivariate models was built to overcome the problem of having only a few SWC measurements (n = 24) but several predictors to include in the model. Soil texture (clay, silt, sand), organic matter (OM), topographic variables (elevation, aspect, convexity), climatic variables (precipitation) and hydrological variables (river distance, NDWI) were used as predictors. Weighted ensemble models were built using only bivariate models with adjusted R2 > 0.5 for each SWC at different pF. The second step consisted of running plant SDMs including modeled SWC jointly with the conventional topo-climatic variables used for plant SDMs. Third, SDMs were run using only the conventional topo-climatic variables. Finally, comparing the models obtained in the second and third steps allowed assessing the additional predictive power of SWC in plant SDMs. SWC ensemble models remained very good, with
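    The weighted-ensemble idea described above — fit one bivariate (single-predictor) model per covariate, keep those with adjusted R2 > 0.5, and average their predictions weighted by adjusted R2 — can be sketched as follows. The data, predictor names, and fitted values are synthetic illustrations, not the study's:

```python
# Sketch of a weighted ensemble of bivariate linear models, filtered by
# adjusted R^2 > 0.5 (an assumption-laden illustration with synthetic data).

def fit_simple_ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - 2)  # p = 1 predictor
    return a, b, adj_r2

y = [10.2, 11.9, 14.1, 15.8, 18.1, 20.2]          # SWC at one pF (synthetic)
predictors = {
    "clay":      [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],  # strongly related
    "elevation": [3.1, 2.9, 3.2, 3.0, 2.8, 3.1],  # unrelated noise
}

ensemble = []
for name, x in predictors.items():
    a, b, w = fit_simple_ols(x, y)
    if w > 0.5:                  # keep only good bivariate models
        ensemble.append((name, a, b, w))

def predict(values):
    # Weighted average of the retained bivariate models' predictions.
    num = sum(w * (a + b * values[name]) for name, a, b, w in ensemble)
    return num / sum(w for _, _, _, w in ensemble)

print([m[0] for m in ensemble], round(predict({"clay": 3.5, "elevation": 3.0}), 2))
```

    The filter discards the noise predictor, and the surviving models vote in proportion to their fit quality, which is one way to cope with many candidate predictors but very few observations.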

  4. On Improving 4-km Mesoscale Model Simulations

    Science.gov (United States)

    Deng, Aijun; Stauffer, David R.

    2006-03-01

    A previous study showed that use of analysis-nudging four-dimensional data assimilation (FDDA) and improved physics in the fifth-generation Pennsylvania State University National Center for Atmospheric Research Mesoscale Model (MM5) produced the best overall performance on a 12-km-domain simulation, based on the 18-19 September 1983 Cross-Appalachian Tracer Experiment (CAPTEX) case. However, reducing the simulated grid length to 4 km had detrimental effects. The primary cause was likely the explicit representation of convection accompanying a cold-frontal system. Because no convective parameterization scheme (CPS) was used, the convective updrafts were forced on coarser-than-realistic scales, and the rainfall and the atmospheric response to the convection were too strong. The evaporative cooling and downdrafts were too vigorous, causing widespread disruption of the low-level winds and spurious advection of the simulated tracer. In this study, a series of experiments was designed to address this general problem involving 4-km model precipitation and gridpoint storms and associated model sensitivities to the use of FDDA, planetary boundary layer (PBL) turbulence physics, grid-explicit microphysics, a CPS, and enhanced horizontal diffusion. Some of the conclusions include the following: 1) Enhanced parameterized vertical mixing in the turbulent kinetic energy (TKE) turbulence scheme has shown marked improvements in the simulated fields. 2) Use of a CPS on the 4-km grid improved the precipitation and low-level wind results. 3) Use of the Hong and Pan Medium-Range Forecast PBL scheme showed larger model errors within the PBL and a clear tendency to predict much deeper PBL heights than the TKE scheme. 4) Combining observation-nudging FDDA with a CPS produced the best overall simulations. 5) Finer horizontal resolution does not always produce better simulations, especially in convectively unstable environments, and a new CPS suitable for 4-km resolution is needed. 6

  5. Improved choked flow model for MARS code

    International Nuclear Information System (INIS)

    Chung, Moon Sun; Lee, Won Jae; Ha, Kwi Seok; Hwang, Moon Kyu

    2002-01-01

    Choked flow calculation is improved by using a new sound speed criterion for bubbly flow that is derived by the characteristic analysis of a hyperbolic two-fluid model. This model was based on the notion of surface tension for the interfacial pressure jump terms in the momentum equations. Real eigenvalues obtained as the closed-form solution of the characteristic polynomial represent the sound speed in the bubbly flow regime, which agrees well with the existing experimental data. The present sound speed shows a more reasonable result in the extreme case than Nguyen's does. The present choked flow criterion derived from this sound speed is employed in the MARS code and assessed by using the Marviken choked flow tests. The assessment results, without any adjustment by discharge coefficients, demonstrate more accurate predictions of choked flow rate in the bubbly flow regime than those of the earlier choked flow calculations. By calculating the typical PWR SBLOCA problem, we confirm that the present model can reproduce reasonable transients of the integral reactor system.

  6. Similarities and Improvements of GPM Dual-Frequency Precipitation Radar (DPR) upon TRMM Precipitation Radar (PR) in Global Precipitation Rate Estimation, Type Classification and Vertical Profiling

    Directory of Open Access Journals (Sweden)

    Jinyu Gao

    2017-11-01

    Spaceborne precipitation radars are powerful tools used to acquire adequate and high-quality precipitation estimates with high spatial resolution for a variety of applications in hydrological research. The Global Precipitation Measurement (GPM) mission, which deployed the first spaceborne Ka- and Ku-band dual-frequency radar (DPR), was launched in February 2014 as the upgraded successor of the Tropical Rainfall Measuring Mission (TRMM). This study matches the swath data of TRMM PR and GPM DPR Level 2 products during their overlapping periods at the global scale to investigate their similarities and DPR's improvements over TRMM PR concerning precipitation amount estimation and type classification. Results show that PR and DPR agree very well with each other in the global distribution of precipitation, while DPR improves the detectability of precipitation events significantly, particularly for light precipitation. The occurrences of total precipitation and of light precipitation (rain rates < 1 mm/h) detected by GPM DPR are ~1.7 and ~2.53 times higher than those of PR. With regard to type classification, the dual-frequency (Ka/Ku) and single-frequency (Ku) methods performed similarly. The results are consistent in both the inner (the central 25 beams) and outer (beams 1-12 and 38-49) swaths of DPR. GPM DPR improves precipitation type classification remarkably, reducing the misclassification of clouds and noise signals as precipitation type "other" from 10.14% for TRMM PR to 0.5%. Generally, GPM DPR exhibits the same type division for around 82.89% (71.02%) of stratiform (convective) precipitation events recognized by TRMM PR. With regard to the freezing level height and bright band (BB) height, both radars correspond with each other very well, contributing to the consistency in stratiform precipitation classification. Both heights show clear latitudinal dependence. Results in this study shall contribute to future development of spaceborne

  7. CORCON-MOD1 modelling improvements

    International Nuclear Information System (INIS)

    Corradini, M.L.; Gonzales, F.G.; Vandervort, C.L.

    1986-01-01

    Given the unlikely occurrence of a severe accident in a light water reactor (LWR), the core may melt and slump into the reactor cavity below the reactor vessel. The interaction of the molten core with exposed concrete (a molten-core-concrete-interaction, MCCI) causes copious gas production which influences further heat transfer and concrete attack and may threaten containment integrity. In this paper the authors focus on the low-temperature phase of the MCCI where the molten pool is partially solidified, but is still capable of attacking concrete. The authors have developed some improved phenomenological models for pool freezing and molten core-coolant heat transfer and have incorporated them into the CORCON-MOD1 computer program. In the paper the authors compare the UW-CORCON/MOD1 calculations to CORCON/MOD2 and WECHSL results as well as the BETA experiments which are being conducted in Germany

  8. Mechanical instability and titanium particles induce similar transcriptomic changes in a rat model for periprosthetic osteolysis and aseptic loosening

    Directory of Open Access Journals (Sweden)

    Mehdi Amirhosseini

    2017-12-01

    Wear debris particles released from prosthetic bearing surfaces and mechanical instability of implants are two main causes of periprosthetic osteolysis. While particle-induced loosening has been studied extensively, the mechanisms through which mechanical factors lead to implant loosening have been less investigated. This study compares the transcriptional profiles associated with osteolysis in a rat model for aseptic loosening, induced by either mechanical instability or titanium particles. Rats were exposed to mechanical instability or titanium particles. After 15 min, 3, 48 or 120 h from the start of stimulation, gene expression changes in periprosthetic bone tissue were determined by microarray analysis. Microarray data were analyzed by the PANTHER Gene List Analysis tool and Ingenuity Pathway Analysis (IPA). Both types of osteolytic stimulation led to gene regulation in comparison to unstimulated controls after 3, 48 or 120 h. However, when mechanical instability was compared to titanium particles, no gene showed a statistically significant difference (fold change ≥ ±1.5 and adjusted p-value ≤ 0.05) at any time point. There was a remarkable similarity in the numbers and functional classification of regulated genes. Pathway analysis showed several inflammatory pathways activated by both stimuli, including Acute Phase Response signaling, IL-6 signaling and Oncostatin M signaling. Quantitative PCR confirmed the changes in expression of key genes involved in osteolysis observed by global transcriptomics. Inflammatory mediators including interleukin (IL)-6, IL-1β, chemokine (C-C motif) ligand 2 (CCL2), prostaglandin-endoperoxide synthase 2 (Ptgs2) and leukemia inhibitory factor (LIF) showed strong upregulation, as assessed by both microarray and qPCR. By investigating genome-wide expression changes we show that, despite the different nature of mechanical implant instability and titanium particles, osteolysis seems to be induced through similar biological

  9. Using sparse LU factorisation to precondition GMRES for a family of similarly structured matrices arising from process modelling

    Energy Technology Data Exchange (ETDEWEB)

    Brooking, C. [Univ. of Bath (United Kingdom)]

    1996-12-31

    Process engineering software is used to simulate the operation of large chemical plants. Such simulations are used for a variety of tasks, including operator training. For the software to be of practical use for this, dynamic simulations need to run in real-time. The models that the simulation is based upon are written in terms of Differential Algebraic Equations (DAEs). In the numerical time-integration of systems of DAEs using an implicit method such as backward Euler, the solution of nonlinear systems is required at each integration point. When solved using Newton's method, this leads to the repeated solution of nonsymmetric sparse linear systems. These systems range in size from 500 to 20,000 variables. A typical integration may require around 3000 timesteps, and if 4 Newton iterates were needed on each time step, then approximately 12,000 linear systems must be solved. The matrices produced by the simulations have a similar sparsity pattern throughout the integration. They are also severely ill-conditioned, and have widely-scattered spectra.
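    The strategy this abstract motivates — factor one matrix of the family once, then reuse that factorization as a preconditioner for later systems with the same sparsity pattern but perturbed entries — can be sketched in pure Python. The original work used sparse LU with GMRES; for brevity this sketch substitutes a preconditioned Richardson iteration for GMRES and uses a tiny dense matrix, so the matrices and tolerances are illustrative assumptions:

```python
# Reuse one LU factorization across a family of similarly structured systems.

def lu_factor(A):
    # Doolittle LU without pivoting; fine for diagonally dominant matrices.
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):              # forward substitution (L is unit lower)
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):    # back substitution
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A0 = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
L, U = lu_factor(A0)                # factor once

# A later matrix in the family: same structure, slightly perturbed entries.
A1 = [[4.2, -1.1, 0.0], [-0.9, 4.1, -1.0], [0.0, -1.0, 3.9]]
b = [1.0, 2.0, 3.0]

x = [0.0, 0.0, 0.0]
for _ in range(50):                 # preconditioned Richardson iteration
    r = [bi - ri for bi, ri in zip(b, matvec(A1, x))]
    dx = lu_solve(L, U, r)          # apply the frozen factorization
    x = [xi + di for xi, di in zip(x, dx)]

residual = max(abs(bi - ai) for bi, ai in zip(b, matvec(A1, x)))
print("residual:", residual)
```

    In production one would pair the frozen sparse LU with GMRES (and pivoting), refreshing the factorization only when convergence degrades; the point of the sketch is that one factorization can serve many nearby systems.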

  10. How can model comparison help improve species distribution models?

    Directory of Open Access Journals (Sweden)

    Emmanuel Stephan Gritti

    Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing, and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although the parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could, however, be improved by integrating a more realistic representation of species resistance to water stress, for instance, advocating for continued efforts to understand and explicitly formulate the impact of climatic conditions and variations on these processes.

  11. The perfectionism model of binge eating: testing unique contributions, mediating mechanisms, and cross-cultural similarities using a daily diary methodology.

    Science.gov (United States)

    Sherry, Simon B; Sabourin, Brigitte C; Hall, Peter A; Hewitt, Paul L; Flett, Gordon L; Gralnick, Tara M

    2014-12-01

    The perfectionism model of binge eating (PMOBE) is an integrative model explaining the link between perfectionism and binge eating. This model proposes socially prescribed perfectionism confers risk for binge eating by generating exposure to 4 putative binge triggers: interpersonal discrepancies, low interpersonal esteem, depressive affect, and dietary restraint. The present study addresses important gaps in knowledge by testing if these 4 binge triggers uniquely predict changes in binge eating on a daily basis and if daily variations in each binge trigger mediate the link between socially prescribed perfectionism and daily binge eating. Analyses also tested if proposed mediational models generalized across Asian and European Canadians. The PMOBE was tested in 566 undergraduate women using a 7-day daily diary methodology. Depressive affect predicted binge eating, whereas anxious affect did not. Each binge trigger uniquely contributed to binge eating on a daily basis. All binge triggers except for dietary restraint mediated the relationship between socially prescribed perfectionism and change in daily binge eating. Results suggested cross-cultural similarities, with the PMOBE applying to both Asian and European Canadian women. The present study advances understanding of the personality traits and the contextual conditions accompanying binge eating and provides an important step toward improving treatments for people suffering from eating binges and associated negative consequences.

  12. Voxel inversion of airborne electromagnetic data for improved model integration

    Science.gov (United States)

    Fiandaca, Gianluca; Auken, Esben; Kirkegaard, Casper; Vest Christiansen, Anders

    2014-05-01

    spatially constrained 1D models with 29 layers. For comparison, the SCI inversion models have been gridded on the same grid as the voxel inversion. The new voxel inversion and the classic SCI give similar data fits and inversion models. The voxel inversion decouples the geophysical model from the position of the acquired data, and at the same time fits the data as well as the classic SCI inversion. Compared to the classic approach, the voxel inversion is better suited for directly informing (hydro)geological models and for sequential/joint/coupled (hydro)geological inversion. We believe that this new approach will facilitate the integration of geophysics, geology and hydrology for improved groundwater and environmental management.

  13. Is having similar eye movement patterns during face learning and recognition beneficial for recognition performance? Evidence from hidden Markov modeling.

    Science.gov (United States)

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2017-12-01

    The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
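    The model-comparison idea underlying such HMM analyses — score a scanpath under a "holistic" and an "analytic" model and assign the label of the higher-likelihood model — can be sketched with a discrete-emission forward algorithm. All ROIs, states, and probabilities below are illustrative assumptions, not the parameters estimated in the study:

```python
import math

# Classify a region-of-interest (ROI) sequence by comparing its likelihood
# under two hand-specified discrete HMMs via the forward algorithm.

ROI = {"center": 0, "left_eye": 1, "right_eye": 2}

def log_likelihood(obs, pi, A, B):
    # Forward algorithm in probability space (fine for short sequences).
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [B[s][o] * sum(alpha[t] * A[t][s] for t in range(len(pi)))
                 for s in range(len(pi))]
    return math.log(sum(alpha))

# Holistic model: one state that mostly emits the face center.
holistic = dict(pi=[1.0], A=[[1.0]], B=[[0.8, 0.1, 0.1]])
# Analytic model: a center state plus an eyes state.
analytic = dict(pi=[0.5, 0.5],
                A=[[0.6, 0.4], [0.4, 0.6]],
                B=[[0.8, 0.1, 0.1],      # center state
                   [0.1, 0.45, 0.45]])   # eyes state

scanpath = [ROI[r] for r in
            ["center", "left_eye", "right_eye", "left_eye", "center"]]
scores = {name: log_likelihood(scanpath, **m)
          for name, m in [("holistic", holistic), ("analytic", analytic)]}
label = max(scores, key=scores.get)
print(label)
```

    The study's approach additionally learns the HMM parameters from data and clusters participants' models into the holistic/analytic groups; this sketch only shows the likelihood-comparison step.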

  14. Gravity model improvement investigation. [improved gravity model for determination of ocean geoid

    Science.gov (United States)

    Siry, J. W.; Kahn, W. D.; Bryan, J. W.; Vonbun, F. F.

    1973-01-01

    This investigation was undertaken to improve the gravity model and hence the ocean geoid. A specific objective is the determination of the gravity field and geoid with a space resolution of approximately 5 deg and a height resolution of the order of five meters. The concept of the investigation is to utilize both GEOS-C altimeter and satellite-to-satellite tracking data to achieve the gravity model improvement. It is also planned to determine the geoid in selected regions with a space resolution of about a degree and a height resolution of the order of a meter or two. The short term objectives include the study of the gravity field in the GEOS-C calibration area outlined by Goddard, Bermuda, Antigua, and Cape Kennedy, and also in the eastern Pacific area which is viewed by ATS-F.

  15. Twelve Weeks of Sprint Interval Training Improves Indices of Cardiometabolic Health Similar to Traditional Endurance Training despite a Five-Fold Lower Exercise Volume and Time Commitment.

    Directory of Open Access Journals (Sweden)

    Jenna B Gillen

    We investigated whether sprint interval training (SIT) was a time-efficient exercise strategy to improve insulin sensitivity and other indices of cardiometabolic health to the same extent as traditional moderate-intensity continuous training (MICT). SIT involved 1 minute of intense exercise within a 10-minute time commitment, whereas MICT involved 50 minutes of continuous exercise per session. Sedentary men (27±8 y; BMI = 26±6 kg/m2) performed three weekly sessions of SIT (n = 9) or MICT (n = 10) for 12 weeks or served as non-training controls (n = 6). SIT involved 3x20-second 'all-out' cycle sprints (~500 W) interspersed with 2 minutes of cycling at 50 W, whereas MICT involved 45 minutes of continuous cycling at ~70% maximal heart rate (~110 W). Both protocols involved a 2-minute warm-up and 3-minute cool-down at 50 W. Peak oxygen uptake increased after training by 19% in both groups (SIT: 32±7 to 38±8; MICT: 34±6 to 40±8 ml/kg/min; p<0.001 for both). Insulin sensitivity index (CSI), determined by intravenous glucose tolerance tests performed before and 72 hours after training, increased similarly after SIT (4.9±2.5 to 7.5±4.7, p = 0.002) and MICT (5.0±3.3 to 6.7±5.0 x 10-4 min-1 [μU/mL]-1, p = 0.013). Skeletal muscle mitochondrial content also increased similarly after SIT and MICT, as primarily reflected by the maximal activity of citrate synthase (CS; P<0.001). The corresponding changes in the control group were small for VO2peak (p = 0.99), CSI (p = 0.63) and CS (p = 0.97). Twelve weeks of brief intense interval exercise improved indices of cardiometabolic health to the same extent as traditional endurance training in sedentary men, despite a five-fold lower exercise volume and time commitment.

  16. Improved end-stage high-intensity performance but similar glycemic responses after waxy barley starch ingestion compared to dextrose in type 1 diabetes.

    Science.gov (United States)

    Gray, Benjamin J; Page, Rhydian; Turner, Daniel; West, Daniel J; Campbell, Matthew D; Kilduff, Liam P; Stephens, Jeffrey W; Bain, Stephen C; Bracken, Richard M

    2016-11-01

    Pre-exercise carbohydrate (CHO) ingestion is an effective strategy for reducing the occurrence of hypoglycemia during or after exercise in individuals with type 1 diabetes (T1DM). The metabolic effects of ingestion of different CHOs for glycemic or performance gains have been under-researched. This study compared metabolic responses and fuel use during sub-maximal and high-intensity performance running following pre-exercise ingestion of waxy barley starch (WBS) or dextrose (DEX) in T1DM. Seven participants attended the laboratory on two separate occasions following preliminary testing. On each visit participants consumed either 0.6 g/kg body mass of DEX or WBS 2 hours before a 26-minute discontinuous incremental treadmill protocol (4-minute running: 1.5-min rest) finishing at 80±4% V̇O2peak, followed by a 10-min performance run on a non-motorized treadmill. Capillary blood samples were taken at rest, during and following exercise and analyzed for glucose (BG) and acid-base variables. Data (mean ± SEM) were analyzed using repeated measures ANOVA (P < 0.05). In the final quartile of the performance run, a greater distance was completed under WBS (WBS 323±21 vs. DEX 301±20 m, P=0.02). Consumption of WBS demonstrated similar hyperglycemic responses to dextrose ingestion but a greater rate of CHO use at rest. Interestingly, T1DM individuals displayed an improved performance in the latter stages of a high-intensity run test.

  17. High-Intensity Interval Training and Isocaloric Moderate-Intensity Continuous Training Result in Similar Improvements in Body Composition and Fitness in Obese Individuals.

    Science.gov (United States)

    Martins, Catia; Kazakova, Irina; Ludviksen, Marit; Mehus, Ingar; Wisloff, Ulrik; Kulseng, Bard; Morgan, Linda; King, Neil

    2016-06-01

    This study aimed to determine the effects of 12 weeks of isocaloric programs of high-intensity intermittent training (HIIT), moderate-intensity continuous training (MICT) or a short-duration HIIT (1/2HIIT) inducing only half the energy deficit, performed on a cycle ergometer, on body weight and composition, cardiovascular fitness, resting metabolic rate (RMR), respiratory exchange ratio (RER), nonexercise physical activity (PA) levels and fasting and postprandial insulin response in sedentary obese individuals. Forty-six sedentary obese individuals (30 women), with a mean BMI of 33.3 ± 2.9 kg/m2 and a mean age of 34.4 ± 8.8 years, were randomly assigned to one of the three training groups: HIIT (n = 16), MICT (n = 14) or 1/2HIIT (n = 16), and exercise was performed 3 times/week for 12 weeks. Overall, there was a significant reduction in body weight and waist circumference with exercise, but no significant changes in fasting insulin or insulin sensitivity with exercise or between groups. There was a tendency for a reduction in AUC insulin with exercise (p = .069), but no differences between groups. These results indicate that isocaloric training protocols of HIIT or MICT (or 1/2HIIT inducing only half the energy deficit) exert similar metabolic and cardiovascular improvements in sedentary obese individuals.

  18. Modeling the kinetics of hydrate formation using the phase field method under conditions similar to petroleum pipelines; Modelagem da cinetica de formacao de hidratos utilizando o Modelo do Campo de Fase em condicoes similares a dutos de petroleo

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Mabelle Biancardi; Castro, Jose Adilson de; Silva, Alexandre Jose da [Universidade Federal Fluminense (UFF), Volta Redonda, RJ (Brazil). Programa de Pos-Graduacao em Engenharia Metalurgica], e-mails: mabelle@metal.eeimvr.uff.br; adilson@metal.eeimvr.uff.br; ajs@metal.eeimvr.uff.br

    2008-10-15

    Natural hydrates are ice-like crystalline compounds formed during oil extraction, transportation and processing. This paper deals with the kinetics of hydrate formation using the phase field approach coupled with the transport equation of energy. The kinetic parameters of hydrate formation were obtained by adjusting the proposed model to experimental results under conditions similar to oil extraction. The effects of thermal and nucleation conditions were investigated, while the rate of formation and the morphology were obtained by numerical computation. Model results for kinetic growth and morphology presented good agreement with the experimental ones. Simulation results indicated that super-cooling and pressure were decisive parameters for hydrate growth, morphology and interface thickness. (author)

  19. Hanford defined waste model limitations and improvements

    International Nuclear Information System (INIS)

    HARMSEN, R.W.

    1999-01-01

    Recommendation 93-5 Implementation Plan, Milestone 5.6.3.1.i requires issuance of this report, which addresses "updates to the tank contents model". This report summarizes the review of the Hanford Defined Waste, Revision 4, model limitations and provides conclusions and recommendations for potential updates to the model.

  20. A study of the predictive model on the user reaction time using the information amount and its similarity

    International Nuclear Information System (INIS)

    Lee, Sung Jin; Heo, Gyun Young; Chang, Soon Heung

    2004-01-01

    Many studies of user interface evaluation have been conducted since the field began, and recent studies focus on the contextual information of the user interface. It is known that user reaction time increases as the amount of information increases, but the relation between contextual information and user reaction time remains unknown. In this study, we propose similarity as one form of contextual information, and we expect that similarity decreases user reaction time. The goal of this study is to find correlations between user reaction time and both the amount of information and its similarity. The experiment was performed with 20 participants, and its results supported our proposals.

  1. An Improved SPH Technique for Fracture Modeling

    National Research Council Canada - National Science Library

    Libersky, Larry

    2000-01-01

    .... With these improvements, the MAGI code could solve the enormously complex problem of simulating Behind-Armor-Debris and subsequent interaction of the spall cloud with threat target components as well...

  2. Improved hidden Markov model for nosocomial infections.

    Science.gov (United States)

    Khader, Karim; Leecaster, Molly; Greene, Tom; Samore, Matthew; Thomas, Alun

    2014-12-01

    We propose a novel hidden Markov model (HMM) for parameter estimation in hospital transmission models, and show that commonly made simplifying assumptions can lead to severe model misspecification and poor parameter estimates. A standard HMM that embodies two commonly made simplifying assumptions, namely a fixed patient count and binomially distributed detections is compared with a new alternative HMM that does not require these simplifying assumptions. Using simulated data, we demonstrate how each of the simplifying assumptions used by the standard model leads to model misspecification, whereas the alternative model results in accurate parameter estimates. © The Authors 2013. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
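The binomial detection assumption criticized above can be made concrete. A minimal sketch (illustrative only; function and variable names are ours, not the paper's): the standard HMM's emission model treats the number of detected carriers among a fixed count of colonized patients as binomially distributed.

```python
import math

def binomial_pmf(k, n, p):
    """P(k detections | n colonized patients, per-patient detection prob p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def detection_likelihood(detected, colonized, p_detect):
    """Emission probability of the standard HMM: detections ~ Binomial(colonized, p_detect).
    This is the simplifying assumption the abstract shows can misspecify the
    model when the patient count is not actually fixed."""
    return binomial_pmf(detected, colonized, p_detect)
```

For example, with 10 colonized patients and a 30% per-patient detection probability, the probability of observing exactly 3 detections is `detection_likelihood(3, 10, 0.3)`.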

  3. Improvements on a Unified Dark Matter Model

    Directory of Open Access Journals (Sweden)

    Del Popolo A.

    2016-06-01

    Full Text Available We study, by means of a spherical collapse model, the effect of shear, rotation, and baryons on a generalized Chaplygin gas (gCg) dominated universe. We show that shear, rotation, and the presence of baryons slow down the collapse compared to the simple spherical collapse model. The slowing down in the growth of density perturbations is able to solve the instability of the unified dark matter (UDM) models described in previous papers (e.g. Sandvik et al. 2004) at the linear perturbation level, as also shown by a direct comparison of our model with previous results.

  4. Improving the physiological realism of experimental models

    NARCIS (Netherlands)

    Vinnakota, Kalyan C.; Cha, Chae Y.; Rorsman, Patrik; Balaban, Robert S.; La Gerche, Andre; Wade-Martins, Richard; Beard, Daniel A.; Jeneson, Jeroen A. L.

    The Virtual Physiological Human (VPH) project aims to develop integrative, explanatory and predictive computational models (C-Models) as numerical investigational tools to study disease, identify and design effective therapies and provide an in silico platform for drug screening. Ultimately, these

  5. An improved lower leg multibody model

    NARCIS (Netherlands)

    Cappon, H.J.; Kroonenberg, A.J. van den; Happee, R.; Wismans, J.S.H.M.

    1999-01-01

    Injuries to the lower extremities are among the most serious non-life-threatening injuries occurring nowadays. In order to investigate and predict the occurrence of injuries, biofidelic research tools, like mathematical human body models, are needed. The model of the lower extremity, presented here,

  6. Improving stability of regional numerical ocean models

    Science.gov (United States)

    Herzfeld, Mike

    2009-02-01

    An operational limited-area ocean modelling system was developed to supply forecasts of ocean state out to 3 days. This system is designed to allow non-specialist users to locate the model domain anywhere within the Australasian region with minimum user input. The model is required to produce a stable simulation every time it is invoked. This paper outlines the methodology used to ensure the model remains stable over the wide range of circumstances it might encounter. Central to the model configuration is an alternative approach to implementing open boundary conditions in a one-way nesting environment. Approximately 170 simulations were performed on limited areas in the Australasian region to assess the model stability; of these, 130 ran successfully with a static model parameterisation allowing a statistical estimate of the model’s approach toward instability to be determined. Based on this, when the model was deemed to be approaching instability a strategy of adaptive intervention in the form of constraint on velocity and elevation was invoked to maintain stability.
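The adaptive intervention described above amounts to constraining velocity and elevation fields when diagnostics signal approaching instability. A minimal sketch under our own assumptions (the threshold values and function names below are illustrative, not taken from the paper):

```python
def clamp(value, limit):
    """Constrain a field value to the interval [-limit, +limit]."""
    return max(-limit, min(limit, value))

def constrain_state(velocity, elevation, v_max=2.0, eta_max=10.0):
    """Apply constraint on velocity (m/s) and surface elevation (m) when the
    model is deemed to be approaching instability. v_max and eta_max are
    hypothetical thresholds for illustration only."""
    return clamp(velocity, v_max), clamp(elevation, eta_max)
```

In an operational setting such a constraint would be invoked only when a statistical instability indicator fires, leaving well-behaved simulations untouched.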

  7. MRI of Mouse Models for Gliomas Shows Similarities to Humans and Can Be Used to Identify Mice for Preclinical Trials

    Directory of Open Access Journals (Sweden)

    Jason A. Koutcher

    2002-01-01

    Full Text Available Magnetic resonance imaging (MRI) has been utilized for screening and detecting brain tumors in mice based upon their imaging appearance and their pattern of enhancement. Imaging of these tumors reveals many similarities to those observed in humans with identical pathology. Specifically, high-grade murine gliomas have histologic characteristics of glioblastoma multiforme (GBM), with contrast enhancement after intravenous administration of gadolinium diethylenetriamine pentaacetic acid (Gd-DTPA), implying disruption of the blood-brain barrier in these tumors. In contrast, low-grade murine oligodendrogliomas do not reveal contrast enhancement, similar to human tumors. MRI can be used to identify mice with brain neoplasms as inclusion criteria in preclinical trials.

  8. Matter of Similarity and Dissimilarity in Multi-Ethnic Society: A Model of Dyadic Cultural Norms Congruence

    OpenAIRE

    Abu Bakar Hassan; Mohamad Bahtiar

    2017-01-01

    Taking into consideration the diverse cultural norms in the Malaysian workplace, the proposed model explores Malaysian culture and identity against a backdrop of the pushes and pulls of ethnic diversity in Malaysia. The model seeks to understand relational norm congruence among multiethnic groups in Malaysia, which will enable us to identify Malaysian culture and identity. This is in line with recent calls by various interest groups in Malaysia to focus more on model designs that capture contextual an...

  9. Improved Mathematical Models for Particle-Size Distribution Data

    African Journals Online (AJOL)

    BirukEdimon

    School of Civil & Environmental Engineering, Addis Ababa Institute of Technology; Murray Rix ... two improved mathematical models to describe ... demand further improvement to handle the PSD ... statistics and the range of the optimized.

  10. Improved diagnostic model for estimating wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Endlich, R.M.; Lee, J.D.

    1983-03-01

    Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.

  11. An Improved Valuation Model for Technology Companies

    OpenAIRE

    Ako Doffou

    2015-01-01

    This paper estimates some of the parameters of the Schwartz and Moon (2001) model using cross-sectional data. Stochastic costs, future financing, capital expenditures and depreciation are taken into account. Some special conditions are also set: the speed-of-adjustment parameters are equal; the implied half-life of the sales growth process is linked to analyst forecasts; and the risk-adjustment parameter is inferred from the company’s observed stock price beta. The model is illustrated in th...

  12. A Model to Improve the Quality Products

    Directory of Open Access Journals (Sweden)

    Hasan GOKKAYA

    2010-08-01

    Full Text Available The topic of this paper is to present a solution that can improve product quality, following the idea: "Unlike people, who have verbal skills, machines use 'sign language' to communicate what hurts or what has invaded their system." Recognizing the "signs" or symptoms that a machine conveys is a required skill for those who work with machines and are responsible for their care and feeding. The acoustic behavior of technical products is predominantly defined in the design stage, although the acoustic characteristics of machine structures can be analyzed to provide a solution for current products and to create a new generation of products. The paper describes the steps in the technological process for a product and the solution that will reduce the costs of non-quality and improve quality management.

  13. Simple improvements to classical bubble nucleation models.

    Science.gov (United States)

    Tanaka, Kyoko K; Tanaka, Hidekazu; Angélil, Raymond; Diemand, Jürg

    2015-08-01

    We revisit classical nucleation theory (CNT) for the homogeneous bubble nucleation rate and improve the classical formula using a correct prefactor in the nucleation rate. Most of the previous theoretical studies have used the constant prefactor determined by the bubble growth due to the evaporation process from the bubble surface. However, the growth of bubbles is also regulated by the thermal conduction, the viscosity, and the inertia of liquid motion. These effects can decrease the prefactor significantly, especially when the liquid pressure is much smaller than the equilibrium one. The deviation in the nucleation rate between the improved formula and the CNT can be as large as several orders of magnitude. Our improved, accurate prefactor and recent advances in molecular dynamics simulations and laboratory experiments for argon bubble nucleation enable us to precisely constrain the free energy barrier for bubble nucleation. Assuming the correction to the CNT free energy is of the functional form suggested by Tolman, the precise evaluations of the free energy barriers suggest the Tolman length is ≃0.3σ independently of the temperature for argon bubble nucleation, where σ is the unit length of the Lennard-Jones potential. With this Tolman correction and our prefactor one gets accurate bubble nucleation rate predictions in the parameter range probed by current experiments and molecular dynamics simulations.
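The role of the Tolman correction can be illustrated with the classical barrier formula ΔG* = 16πσ³/(3Δp²) together with a first-order curvature-dependent surface tension σ(r) = σ∞/(1 + 2δ/r). The numbers below are illustrative reduced (Lennard-Jones-style) units, not the paper's fitted values:

```python
import math

def cnt_barrier(sigma, delta_p):
    """Classical nucleation barrier: dG* = 16*pi*sigma**3 / (3*delta_p**2),
    with sigma the surface tension and delta_p the pressure difference."""
    return 16 * math.pi * sigma**3 / (3 * delta_p**2)

def tolman_sigma(sigma_inf, tolman_length, r):
    """Curvature-corrected surface tension, first-order Tolman form."""
    return sigma_inf / (1 + 2 * tolman_length / r)

# Illustrative reduced units (placeholders, not values from the paper):
sigma_inf, delta_p = 1.0, 0.5
r_crit = 2 * sigma_inf / delta_p              # critical bubble radius, 2*sigma/dp
barrier_cnt = cnt_barrier(sigma_inf, delta_p)
barrier_tolman = cnt_barrier(tolman_sigma(sigma_inf, 0.3, r_crit), delta_p)
# A positive Tolman length lowers the effective surface tension of small
# bubbles, and hence the barrier, relative to plain CNT.
```

With a Tolman length of 0.3 (the value the abstract reports for argon, in units of the Lennard-Jones length), the corrected barrier is noticeably below the uncorrected CNT barrier.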

  14. Matter of Similarity and Dissimilarity in Multi-Ethnic Society: A Model of Dyadic Cultural Norms Congruence

    Directory of Open Access Journals (Sweden)

    Abu Bakar Hassan

    2017-01-01

    Full Text Available Taking into consideration the diverse cultural norms in the Malaysian workplace, the proposed model explores Malaysian culture and identity against a backdrop of the pushes and pulls of ethnic diversity in Malaysia. The model seeks to understand relational norm congruence among multiethnic groups in Malaysia, which will enable us to identify Malaysian culture and identity. This is in line with recent calls by various interest groups in Malaysia to focus more on model designs that capture contextual and cultural factors that influence Malaysian culture and identity.

  15. Improving Expression Power in Modeling OLAP Hierarchies

    Science.gov (United States)

    Malinowski, Elzbieta

    Data warehouses and OLAP systems form an integral part of modern decision support systems. In order to exploit both systems to their full capabilities, hierarchies must be clearly defined. Hierarchies are important in analytical applications, since they provide users with the possibility to represent data at different abstraction levels. However, even though there are different kinds of hierarchies in real-world applications and some are already implemented in commercial tools, there is still a lack of a well-accepted conceptual model that allows decision-making users to express their analysis needs. In this paper, we show how the conceptual multidimensional model can be used to facilitate the representation of complex hierarchies in comparison to their representation in the relational model and a commercial OLAP tool, using as an example Microsoft Analysis Services.

  16. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that permits to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  17. Bayesian Proteoform Modeling Improves Protein Quantification of Global Proteomic Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Datta, Susmita; Payne, Samuel H.; Kang, Jiyun; Bramer, Lisa M.; Nicora, Carrie D.; Shukla, Anil K.; Metz, Thomas O.; Rodland, Karin D.; Smith, Richard D.; Tardiff, Mark F.; McDermott, Jason E.; Pounds, Joel G.; Waters, Katrina M.

    2014-12-01

    As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that with an increase in throughput, protein quantification estimation from the native measured peptides has become a computational task. A limitation of existing computationally driven protein quantification methods is that most ignore protein variation, such as alternate splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which will affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses, such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptide signatures to identify peptides that are outside the dominant pattern, or the existence of multiple over-expressed patterns, to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study from mouse plasma samples and demonstrate that BP-Quant achieves similar accuracy as the current state-of-the-art methods at proteoform identification with significantly better specificity. BP-Quant is available as MATLAB® and R packages at https://github.com/PNNL-Comp-Mass-Spec/BP-Quant.

  18. General Equilibrium Models: Improving the Microeconomics Classroom

    Science.gov (United States)

    Nicholson, Walter; Westhoff, Frank

    2009-01-01

    General equilibrium models now play important roles in many fields of economics including tax policy, environmental regulation, international trade, and economic development. The intermediate microeconomics classroom has not kept pace with these trends, however. Microeconomics textbooks primarily focus on the insights that can be drawn from the…

  19. Improvements to a model of projectile fragmentation

    International Nuclear Information System (INIS)

    Mallik, S.; Chaudhuri, G.; Das Gupta, S.

    2011-01-01

    In a recent paper [Phys. Rev. C 83, 044612 (2011)] we proposed a model for calculating cross sections of various reaction products which arise from disintegration of projectile-like fragments resulting from heavy-ion collisions at intermediate or higher energy. The model has three parts: (1) abrasion, (2) disintegration of the hot abraded projectile-like fragment (PLF) into nucleons and primary composites using a model of equilibrium statistical mechanics, and (3) possible evaporation of hot primary composites. It was assumed that the PLF resulting from abrasion has one temperature T. Data suggested that, while just one value of T seemed adequate for most cross-section calculations, a single value failed when dealing with very peripheral collisions. We have now introduced a variable T = T(b), where b is the impact parameter of the collision. We argue that there are data which not only show that T must be a function of b but, in addition, also point to an approximate value of T for a given b. We propose a very simple formula: T(b) = D0 + D1[As(b)/A0], where As(b) is the mass of the abraded PLF and A0 is the mass of the projectile; D0 and D1 are constants. Using this model we compute cross sections for several collisions and compare with data.
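The proposed temperature formula is simple enough to state directly in code. A sketch (the D0 and D1 values used below are placeholders for illustration, not the paper's fitted constants):

```python
def plf_temperature(a_s, a_0, d0, d1):
    """T(b) = D0 + D1 * [A_s(b) / A_0]: freeze-out temperature of the
    projectile-like fragment, where a_s is the mass of the abraded PLF
    at impact parameter b, a_0 the projectile mass, and d0, d1 fitted
    constants (illustrative values only)."""
    return d0 + d1 * (a_s / a_0)
```

The impact-parameter dependence enters only through the abraded mass ratio A_s(b)/A_0, so more peripheral collisions (larger surviving PLF mass) map smoothly onto a different temperature than central ones.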

  20. Hybrid Modeling Improves Health and Performance Monitoring

    Science.gov (United States)

    2007-01-01

    Scientific Monitoring Inc. was awarded a Phase I Small Business Innovation Research (SBIR) project by NASA's Dryden Flight Research Center to create a new, simplified health-monitoring approach for flight vehicles and flight equipment. The project developed a hybrid physical model concept that provided a structured approach to simplifying complex design models for use in health monitoring, allowing the output or performance of the equipment to be compared to what the design models predicted, so that deterioration or impending failure could be detected before there would be an impact on the equipment's operational capability. Based on the original modeling technology, Scientific Monitoring released I-Trend, a commercial health- and performance-monitoring software product named for its intelligent trending, diagnostics, and prognostics capabilities, as part of the company's complete ICEMS (Intelligent Condition-based Equipment Management System) suite of monitoring and advanced alerting software. I-Trend uses the hybrid physical model to better characterize the nature of health or performance alarms that result in "no fault found" false alarms. Additionally, the use of physical principles helps I-Trend identify problems sooner. I-Trend technology is currently in use in several commercial aviation programs, and the U.S. Air Force recently tapped Scientific Monitoring to develop next-generation engine health-management software for monitoring its fleet of jet engines. Scientific Monitoring has continued the original NASA work, this time under a Phase III SBIR contract with a joint NASA-Pratt & Whitney aviation security program on propulsion-controlled aircraft under missile-damaged aircraft conditions.

  1. Improvement of core degradation model in ISAAC

    International Nuclear Information System (INIS)

    Kim, Dong Ha; Kim, See Darl; Park, Soo Yong

    2004-02-01

    If the water inventory in the fuel channels depletes and the fuel rods are exposed to steam after uncovery in the pressure tube, the decay heat generated in the fuel rods is transferred to the pressure tube and to the calandria tube by radiation, and finally to the moderator in the calandria tank by conduction. During this process, the cladding will be heated first and ballooned when the fuel gap internal pressure exceeds the primary system pressure. The pressure tube will also be ballooned and will touch the calandria tube, increasing the heat transfer rate to the moderator. Although this situation is not desirable, the fuel channel is expected to maintain its integrity as long as the calandria tube is submerged in the moderator, because the decay heat can be removed to the moderator through radiation and conduction. Therefore, loss of coolant and moderator inside and outside the channel may cause severe core damage, including horizontal fuel channel sagging and finally loss of channel integrity. The sagged channels contact the channels located below and lose their heat transfer area to the moderator. As the accident progresses, the disintegrated fuel channels will be heated up and relocated onto the bottom of the calandria tank. If the temperature of these relocated materials is high enough to attack the calandria tank, the calandria tank would fail and molten material would contact the calandria vault water. Steam explosion and/or rapid steam generation from this interaction may threaten containment integrity. Though a detailed model is required to simulate a severe accident at CANDU plants, the complexity of the phenomena and of the inner structures, as well as the lack of experimental data, forces the choice of a simple but reasonable model as a first step. ISAAC 1.0 was developed to model the basic physicochemical phenomena during severe accident progression. At present, ISAAC 2.0 is being developed for accident management guide development and strategy evaluation.

  2. Soil hydraulic properties near saturation, an improved conductivity model

    DEFF Research Database (Denmark)

    Børgesen, Christen Duus; Jacobsen, Ole Hørbye; Hansen, Søren

    2006-01-01

    of commonly used hydraulic conductivity models and give suggestions for improved models. Water retention and near saturated and saturated hydraulic conductivity were measured for a variety of 81 top and subsoils. The hydraulic conductivity models by van Genuchten [van Genuchten, 1980. A closed-form equation...... for predicting the hydraulic conductivity of unsaturated soils. Soil Sci. Soc. Am. J. 44, 892–898.] (vGM) and Brooks and Corey, modified by Jarvis [Jarvis, 1991. MACRO—A Model of Water Movement and Solute Transport in Macroporous Soils. Swedish University of Agricultural Sciences. Department of Soil Sciences....... Optimising a matching factor (k0) improved the fit considerably whereas optimising the l-parameter in the vGM model improved the fit only slightly. The vGM was improved with an empirical scaling function to account for the rapid increase in conductivity near saturation. Using the improved models...
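The van Genuchten-Mualem (vGM) conductivity model discussed in this record, including the matching factor k0 and the l-parameter that the study optimises, can be sketched in its standard textbook form (parameter values in the test are illustrative, not the study's fitted values):

```python
def vgm_conductivity(se, k0, l, m):
    """van Genuchten-Mualem hydraulic conductivity K(Se).
    se: effective saturation in [0, 1]; k0: matching factor (conductivity
    at saturation); l: pore-connectivity parameter; m: van Genuchten
    shape parameter (0 < m < 1)."""
    if se <= 0.0:
        return 0.0
    if se >= 1.0:
        return k0
    return k0 * se**l * (1.0 - (1.0 - se**(1.0 / m))**m) ** 2
```

The empirical near-saturation scaling the study proposes would multiply this expression by an additional correction close to se = 1; that function is specific to the paper and is not reproduced here.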

  3. Self-similar decay to the marginally stable ground state in a model for film flow over inclined wavy bottoms

    Directory of Open Access Journals (Sweden)

    Tobias Hacker

    2012-04-01

    Full Text Available The integral boundary layer system (IBL with spatially periodic coefficients arises as a long wave approximation for the flow of a viscous incompressible fluid down a wavy inclined plane. The Nusselt-like stationary solution of the IBL is linearly at best marginally stable; i.e., it has essential spectrum at least up to the imaginary axis. Nevertheless, in this stable case we show that localized perturbations of the ground state decay in a self-similar way. The proof uses the renormalization group method in Bloch variables and the fact that in the stable case the Burgers equation is the amplitude equation for long waves of small amplitude in the IBL. It is the first time that such a proof is given for a quasilinear PDE with spatially periodic coefficients.

  4. Improving Flood Damage Assessment Models in Italy

    Science.gov (United States)

    Amadio, M.; Mysiak, J.; Carrera, L.; Koks, E.

    2015-12-01

    The use of Stage-Damage Curve (SDC) models is prevalent in ex-ante assessments of flood risk. To assess the potential damage of a flood event, SDCs describe a relation between water depth and the associated potential economic damage over land use. This relation is normally developed and calibrated through site-specific analysis based on ex-post damage observations. In some cases (e.g. Italy) SDCs are transferred from other countries, undermining the accuracy and reliability of simulation results. Against this background, we developed a refined SDC model for Northern Italy, underpinned by damage compensation records from a recent flood event. Our analysis considers both damage to physical assets and production losses from business interruptions. While the first is calculated based on land use information, production losses are measured through the spatial distribution of Gross Value Added (GVA). An additional component of the model assesses crop-specific agricultural losses as a function of flood seasonality. Our results show an overestimation of asset damage from non-calibrated SDC values up to a factor of 4.5 for tested land use categories. Furthermore, we estimate that production losses amount to around 6 per cent of the annual GVA. Also, maximum yield losses are less than a half of the amount predicted by the standard SDC methods.
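A Stage-Damage Curve of the kind described above is, at its core, a piecewise-linear map from water depth to a damage fraction per land-use class. A minimal sketch (the curve points below are illustrative placeholders, not the calibrated values from the compensation records):

```python
def damage_fraction(depth, curve):
    """Piecewise-linear stage-damage curve: water depth (m) -> fraction of
    the asset's maximum damage. `curve` is a sorted list of
    (depth, fraction) points."""
    if depth <= curve[0][0]:
        return curve[0][1]
    for (d0, f0), (d1, f1) in zip(curve, curve[1:]):
        if depth <= d1:
            return f0 + (f1 - f0) * (depth - d0) / (d1 - d0)
    return curve[-1][1]   # saturate beyond the last point

def asset_damage(depth, max_damage, curve):
    """Potential economic damage for one exposed asset."""
    return max_damage * damage_fraction(depth, curve)

# Hypothetical residential curve for illustration only:
RESIDENTIAL = [(0.0, 0.0), (0.5, 0.2), (1.0, 0.4), (2.0, 0.7), (3.0, 1.0)]
```

Calibration against ex-post compensation records, as done in the study, amounts to adjusting the (depth, fraction) points per land-use class; the reported factor-of-4.5 overestimation corresponds to non-calibrated fractions sitting well above the observed ones.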

  5. The Effect of a Model's HIV Status on Self-Perceptions: A Self-Protective Similarity Bias.

    Science.gov (United States)

    Gump, Brooks B.; Kulik, James A.

    1995-01-01

    Examined how information about another person's HIV status influences self-perceptions and behavioral intentions. Individuals perceived their own personalities and behaviors as more dissimilar to another's if that person's HIV status was believed positive compared with negative or unknown. Exposure to an HIV-positive model produced greater intentions…

  6. Improved modeling of clinical data with kernel methods.

    Science.gov (United States)

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, being a mix of variable types, each variable with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems.
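The range-equalizing idea behind the clinical kernel can be sketched for continuous variables: each variable contributes (r − |x − z|)/r, where r is that variable's range, and the final similarity is the average over variables. This is a simplified sketch of the continuous-variable case only; the full published kernel also handles ordinal and nominal variables, which are omitted here.

```python
def clinical_kernel(x, z, ranges):
    """Patient similarity in [0, 1] for continuous clinical variables.
    Each variable contributes (r - |x_i - z_i|) / r, where r is the
    variable's range, so a variable measured on a large scale (e.g. age)
    does not dominate one measured on a small scale (e.g. parity)."""
    sims = [(r - abs(a - b)) / r for a, b, r in zip(x, z, ranges)]
    return sum(sims) / len(sims)
```

Two identical patients score 1.0, and a 40-year age gap over an 80-year range contributes 0.5 for that variable, regardless of the units of the other variables.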

  7. Modeling and improving Ethiopian pasture systems

    Science.gov (United States)

    Parisi, S. G.; Cola, G.; Gilioli, G.; Mariani, L.

    2018-05-01

    The production of pasture in Ethiopia was simulated by means of a dynamic model. Most of the country is characterized by a tropical monsoon climate with mild temperatures and precipitation mainly concentrated in the June-September period (main rainy season). The production model is driven by solar radiation and takes into account limitations due to relocation, maintenance respiration, conversion to final dry matter, temperature, water stress, and nutrient availability. The model also considers the senescence of grassland, which strongly limits the nutritional value of grasses for livestock. The simulation for the 1982-2009 period, performed on gridded daily time series of rainfall and maximum and minimum temperature with a resolution of 0.5°, provided results comparable with values reported in the literature. Yearly mean yield in Ethiopia ranged between 1.8 metric tons per hectare (t ha-1) (2002) and 2.6 t ha-1 (1989) of dry matter, with values above 2.5 t ha-1 attained in 1983, 1985, 1989, and 2008. The Ethiopian territory has been subdivided into 1494 cells and a frequency distribution of the per-cell yearly mean pasture production has been obtained. This distribution ranges from 0 to 7 t ha-1, is right-skewed, and has a modal class of 1.5-2 t ha-1. Simulations carried out on long time series for this particular tropical environment yield many results that are relevant from the agro-ecological point of view concerning the spatial variability of pasture production, the main limiting factors (solar radiation, precipitation, temperature), and the relevant meteo-climatic cycles affecting pasture production (seasonal and inter-yearly variability, ENSO). These results are useful to establish an agro-ecological zoning of the Ethiopian territory.

  8. Modeling and improving Ethiopian pasture systems

    Science.gov (United States)

    Parisi, S. G.; Cola, G.; Gilioli, G.; Mariani, L.

    2018-01-01

    The production of pasture in Ethiopia was simulated by means of a dynamic model. Most of the country is characterized by a tropical monsoon climate with mild temperatures and precipitation mainly concentrated in the June-September period (main rainy season). The production model is driven by solar radiation and takes into account limitations due to relocation, maintenance respiration, conversion to final dry matter, temperature, water stress, and nutrient availability. The model also considers the senescence of grassland, which strongly limits the nutritional value of grasses for livestock. The simulation for the 1982-2009 period, performed on gridded daily time series of rainfall and maximum and minimum temperature with a resolution of 0.5°, provided results comparable with values reported in the literature. Yearly mean yield in Ethiopia ranged between 1.8 metric tons per hectare (t ha-1) (2002) and 2.6 t ha-1 (1989) of dry matter, with values above 2.5 t ha-1 attained in 1983, 1985, 1989, and 2008. The Ethiopian territory has been subdivided into 1494 cells and a frequency distribution of the per-cell yearly mean pasture production has been obtained. This distribution ranges from 0 to 7 t ha-1, is right-skewed, and has a modal class of 1.5-2 t ha-1. Simulations carried out on long time series for this particular tropical environment yield many results that are relevant from the agro-ecological point of view concerning the spatial variability of pasture production, the main limiting factors (solar radiation, precipitation, temperature), and the relevant meteo-climatic cycles affecting pasture production (seasonal and inter-yearly variability, ENSO). These results are useful to establish an agro-ecological zoning of the Ethiopian territory.

  9. Improving Acoustic Models by Watching Television

    Science.gov (United States)

    Witbrock, Michael J.; Hauptmann, Alexander G.

    1998-01-01

    Obtaining sufficient labelled training data is a persistent difficulty for speech recognition research. Although well transcribed data is expensive to produce, there is a constant stream of challenging speech data and poor transcription broadcast as closed-captioned television. We describe a reliable unsupervised method for identifying accurately transcribed sections of these broadcasts, and show how these segments can be used to train a recognition system. Starting from acoustic models trained on the Wall Street Journal database, a single iteration of our training method reduced the word error rate on an independent broadcast television news test set from 62.2% to 59.5%.

  10. Statistical Similarities Between WSA-ENLIL+Cone Model and MAVEN in Situ Observations From November 2014 to March 2016

    Science.gov (United States)

    Lentz, C. L.; Baker, D. N.; Jaynes, A. N.; Dewey, R. M.; Lee, C. O.; Halekas, J. S.; Brain, D. A.

    2018-02-01

    Normal solar wind flows and intense solar transient events interact directly with the upper Martian atmosphere due to the absence of an intrinsic global planetary magnetic field. Since the launch of the Mars Atmosphere and Volatile EvolutioN (MAVEN) mission, there are now new means to directly observe solar wind parameters at the planet's orbital location for limited time spans. Due to MAVEN's highly elliptical orbit, in situ measurements cannot be taken while MAVEN is inside Mars' magnetosheath. To model solar wind conditions during these atmospheric and magnetospheric passages, this research project utilized the solar wind forecasting capabilities of the WSA-ENLIL+Cone model. The model was used to simulate solar wind parameters, including magnetic field magnitude, plasma particle density, dynamic pressure, proton temperature, and velocity, over a segment lasting four Carrington rotations. An additional simulation lasting 18 Carrington rotations was then conducted. The precision of each simulation was examined for intervals when MAVEN was in the upstream solar wind, that is, with no exospheric or magnetospheric phenomena altering in situ measurements. It was determined that generalized, extensive simulations have prediction capabilities comparable to those of shorter, more comprehensive simulations. Overall, this study aimed to quantify the loss of detail in long-term simulations and to determine whether extended simulations can provide accurate, continuous upstream solar wind conditions when in situ measurements are lacking.
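    The precision comparison described above amounts to scoring a simulated time series against in situ data only on upstream intervals (masking magnetosheath passages). A minimal sketch with synthetic stand-in arrays, not WSA-ENLIL+Cone or MAVEN data:

```python
import numpy as np

# Root-mean-square error between model and observations, restricted to
# the samples where an upstream in situ measurement actually exists.
def upstream_rmse(model, observed, upstream_mask):
    m = np.asarray(model)[upstream_mask]
    o = np.asarray(observed)[upstream_mask]
    return float(np.sqrt(np.mean((m - o) ** 2)))

# NaN marks intervals when the spacecraft was inside the magnetosheath
observed = np.array([400.0, 420.0, np.nan, np.nan, 390.0, 410.0])  # e.g. km/s
model = np.array([410.0, 400.0, 500.0, 520.0, 400.0, 405.0])
mask = ~np.isnan(observed)  # True only on upstream (solar wind) intervals

print(upstream_rmse(model, observed, mask))  # → 12.5
```

    Running the same scoring on a short and a long simulation over the same upstream intervals is what makes their prediction capabilities directly comparable.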

  11. QUALITY IMPROVEMENT MODEL AT THE MANUFACTURING PROCESS PREPARATION LEVEL

    Directory of Open Access Journals (Sweden)

    Dusko Pavletic

    2009-12-01

    Full Text Available The paper presents the basis for an operational quality improvement model at the manufacturing process preparation level. Numerous appropriate quality assurance and improvement methods and tools are identified. Main manufacturing process principles are investigated in order to scrutinize one general model of a manufacturing process and to define the manufacturing process preparation level. Development and introduction of the operational quality improvement model are based on research conducted into the application possibilities of these methods and tools in real manufacturing processes in the shipbuilding and automotive industries. The basic model structure is described and presented by an appropriate general algorithm. The operational quality improvement model developed lays down the main guidelines for practical and systematic application of quality improvement methods and tools.

  12. Model improves oil field operating cost estimates

    International Nuclear Information System (INIS)

    Glaeser, J.L.

    1996-01-01

    A detailed operating cost model that forecasts operating cost profiles toward the end of a field's life should be constructed for testing depletion strategies and plans for major oil fields. Developing a good understanding of future operating cost trends is important: incorrectly forecasting the trend can result in bad decision making regarding investments and reservoir operating strategies. Recent projects show that significant operating expense reductions can be made in the latter stages of field depletion without significantly reducing the expected ultimate recoverable reserves. Predicting future operating cost trends is especially important for operators who are currently producing a field and must forecast the economic limit of the property. For reasons presented in this article, it is usually not correct to assume that operating expense stays fixed in dollar terms throughout the lifetime of a field, nor that operating costs stay fixed on a dollar-per-barrel basis.

  13. An Improved Walk Model for Train Movement on Railway Network

    International Nuclear Information System (INIS)

    Li Keping; Mao Bohua; Gao Ziyou

    2009-01-01

    In this paper, we propose an improved walk model for simulating train movement on a railway network. In the proposed method, walkers represent trains. The improved walk model is a network-based simulation analysis model. Using management rules for walker movement, each walker can dynamically determine its departure and arrival times at stations. In order to test the proposed method, we simulate train movement on part of a railway network. The numerical simulation and analytical results demonstrate that the improved model is an effective tool for simulating train movement on a railway network. Moreover, it captures well the characteristic behaviors of train scheduling in railway traffic. (general)

  14. Multilayered epithelium in a rat model and human Barrett's esophagus: Similar expression patterns of transcription factors and differentiation markers

    Directory of Open Access Journals (Sweden)

    Yang Chung S

    2008-01-01

    Full Text Available Abstract Background In rats, esophagogastroduodenal anastomosis (EGDA) without concomitant chemical carcinogen treatment leads to gastroesophageal reflux disease, multilayered epithelium (MLE, a presumed precursor of intestinal metaplasia), columnar-lined esophagus, dysplasia, and esophageal adenocarcinoma. Previously we have shown that columnar-lined esophagus in EGDA rats resembles human Barrett's esophagus (BE) in its morphology, mucin features and expression of differentiation markers (Lab. Invest. 2004;84:753–765). The purpose of this study was to compare the phenotype of rat MLE with human MLE, in order to gain insight into the nature of MLE and its potential role in the development of BE. Methods Serial sectioning was performed on tissue samples from 32 EGDA rats and 13 patients with established BE. Tissue sections were immunohistochemically stained for a variety of transcription factors and differentiation markers of esophageal squamous epithelium and intestinal columnar epithelium. Results We detected MLE in 56.3% (18/32) of EGDA rats, and in all human samples. As expected, both rat and human squamous epithelium, but not intestinal metaplasia, expressed squamous transcription factors and differentiation markers (p63, Sox2, CK14 and CK4) in all cases. Both rat and human intestinal metaplasia, but not squamous epithelium, expressed intestinal transcription factors and differentiation markers (Cdx2, GATA4, HNF1α, villin and Muc2) in all cases. Rat MLE shared expression patterns of Sox2, CK4, Cdx2, GATA4, villin and Muc2 with human MLE. However, p63 and CK14 were expressed in a higher proportion of rat MLE samples than human ones. Conclusion These data indicate that rat MLE shares similar properties with human MLE in its expression pattern of these markers, notwithstanding small differences, and support the concept that MLE may be a transitional stage in the metaplastic conversion of squamous to columnar epithelium in BE.

  15. Fuzzy Similarity and Fuzzy Inclusion Measures in Polyline Matching: A Case Study of Potential Streams Identification for Archaeological Modelling in GIS

    Science.gov (United States)

    Ďuračiová, Renata; Rášová, Alexandra; Lieskovský, Tibor

    2017-12-01

    When combining spatial data from various sources, it is often important to determine the similarity or identity of spatial objects. Besides differences in geometry, representations of spatial objects are inevitably more or less uncertain. Fuzzy set theory can be used both to model the uncertainty of spatial objects and to determine the identity, similarity, and inclusion of two sets as fuzzy identity, fuzzy similarity, and fuzzy inclusion. In this paper, we propose to use fuzzy measures to determine the similarity or identity of two uncertain spatial object representations in geographic information systems. Labelling the spatial objects by the degree of their similarity or inclusion measure makes the process of their identification more efficient and reduces the need for manual control. This leads to a simpler process of updating spatial datasets from external data sources. We use this approach to obtain an accurate and correct representation of historical streams derived from a contemporary digital elevation model, i.e. we identify the segments that are similar to the streams depicted on historical maps.

  16. Fuzzy Similarity and Fuzzy Inclusion Measures in Polyline Matching: A Case Study of Potential Streams Identification for Archaeological Modelling in GIS

    Directory of Open Access Journals (Sweden)

    Ďuračiová Renata

    2017-12-01

    Full Text Available When combining spatial data from various sources, it is often important to determine the similarity or identity of spatial objects. Besides differences in geometry, representations of spatial objects are inevitably more or less uncertain. Fuzzy set theory can be used both to model the uncertainty of spatial objects and to determine the identity, similarity, and inclusion of two sets as fuzzy identity, fuzzy similarity, and fuzzy inclusion. In this paper, we propose to use fuzzy measures to determine the similarity or identity of two uncertain spatial object representations in geographic information systems. Labelling the spatial objects by the degree of their similarity or inclusion measure makes the process of their identification more efficient and reduces the need for manual control. This leads to a simpler process of updating spatial datasets from external data sources. We use this approach to obtain an accurate and correct representation of historical streams derived from a contemporary digital elevation model, i.e. we identify the segments that are similar to the streams depicted on historical maps.
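    The fuzzy similarity and inclusion measures used in these two records can be sketched with the standard min/max (Zadeh) intersection and union operators; the membership grades below are illustrative, not derived from any DEM or historical map.

```python
# Fuzzy similarity: |A ∩ B| / |A ∪ B|, with min as intersection and
# max as union over element-wise membership grades.
def fuzzy_similarity(a, b):
    inter = sum(min(x, y) for x, y in zip(a, b))
    union = sum(max(x, y) for x, y in zip(a, b))
    return inter / union if union else 1.0

# Fuzzy inclusion: degree to which A is contained in B, |A ∩ B| / |A|.
def fuzzy_inclusion(a, b):
    inter = sum(min(x, y) for x, y in zip(a, b))
    card_a = sum(a)
    return inter / card_a if card_a else 1.0

stream_dem = [0.9, 0.7, 0.4, 0.0]   # membership along a DEM-derived stream segment
stream_map = [1.0, 0.6, 0.5, 0.1]   # membership along a historical-map stream

print(fuzzy_similarity(stream_dem, stream_map))
print(fuzzy_inclusion(stream_dem, stream_map))
```

    Segments whose similarity or inclusion degree exceeds a chosen threshold can then be labelled automatically, which is what reduces the need for manual control.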

  17. The role of intergenerational similarity and parenting in adolescent self-criticism: An actor-partner interdependence model.

    Science.gov (United States)

    Bleys, Dries; Soenens, Bart; Boone, Liesbet; Claes, Stephan; Vliegen, Nicole; Luyten, Patrick

    2016-06-01

    Research investigating the development of adolescent self-criticism has typically focused on the role of either parental self-criticism or parenting. This study used an actor-partner interdependence model to examine an integrated theoretical model in which achievement-oriented psychological control has an intervening role in the relation between parental and adolescent self-criticism. Additionally, the relative contribution of both parents and the moderating role of adolescent gender were examined. Participants were 284 adolescents (M = 14 years, range = 12-16 years) and their parents (M = 46 years, range = 32-63 years). Results showed that only maternal self-criticism was directly related to adolescent self-criticism. However, both parents' achievement-oriented psychological control had an intervening role in the relation between parent and adolescent self-criticism in both boys and girls. Moreover, one parent's achievement-oriented psychological control was not predicted by the self-criticism of the other parent. Copyright © 2016 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  18. Baking oven improvement by performance modelling

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-01

    The first phase of the project included both the derivation of an oven model and the development of a portable, rapid-response heat-flux sensor. Heat flux (defined as the instantaneous rate of heat flow per unit area at the surface of the baking biscuit and expressed in W/cm²) has been shown to be a more useful measure of oven performance than temperature alone. Fixed-point heat-flux sensors have already been developed and marketed, but a need was expressed at the start of this project for a travelling sensor which could be used to construct a more detailed picture of heat-flux variation in an oven. The travelling monitor developed can be used to measure variations in the heat flux experienced at the surface of products being baked in a travelling oven, both when oven conditions are fixed and when they are varied. It can also be used to identify the optimum locations within an oven for fixed heat-flux probes. It has been used effectively throughout the project for both purposes. Fuel savings of 18% and 21%, respectively, were achieved with two ovens. (author)

  19. Infrared image background modeling based on improved Susan filtering

    Science.gov (United States)

    Yuehua, Xia

    2018-02-01

    When the SUSAN filter is used to model the background of an infrared image, its Gaussian kernel lacks directional filtering capability. After filtering, the edge information of the image is not well preserved, so the difference image contains many edge singular points, which increases the difficulty of target detection. To solve these problems, this paper introduces anisotropy: an anisotropic Gaussian filter replaces the Gaussian filter in the SUSAN filter operator. First, an anisotropic gradient operator computes the horizontal and vertical gradients at each image point to determine the direction of the filter's long axis. Second, the local area around the point and the neighborhood smoothness are used to calculate the filter length and the short-axis variance. Then the first-order norm of the difference between the local gray levels around the point and their mean determines the threshold of the SUSAN filter. Finally, the constructed SUSAN filter is convolved with the image to obtain the background image, and the difference between the background image and the original image is computed. The background modeling effect on infrared images is evaluated by Mean Squared Error (MSE), Structural Similarity (SSIM), and local Signal-to-Noise Ratio Gain (GSNR). Compared with the traditional filtering algorithm, the improved SUSAN filter achieves a better background modeling effect: it effectively preserves the edge information in the image, and dim small targets are effectively enhanced in the difference image, which greatly reduces the false alarm rate.
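    The oriented kernel that replaces the isotropic Gaussian can be sketched as follows; the kernel size, the two variances, and the orientation angle are illustrative assumptions, not the values the paper derives from its gradient and smoothness steps.

```python
import numpy as np

# Anisotropic (oriented) 2-D Gaussian kernel: the long axis is aligned
# with angle theta, with separate spreads along the long and short axes.
def anisotropic_gaussian_kernel(size, sigma_long, sigma_short, theta):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so u runs along the long axis of the filter
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-(u**2 / (2 * sigma_long**2) + v**2 / (2 * sigma_short**2)))
    return k / k.sum()  # normalize so convolution preserves mean intensity

# A 9x9 kernel elongated along the 45-degree direction
kernel = anisotropic_gaussian_kernel(size=9, sigma_long=3.0, sigma_short=1.0, theta=np.pi / 4)
```

    Because sigma_long > sigma_short, smoothing acts mostly along the estimated edge direction, which is why edge information survives the filtering better than with a circular kernel.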

  20. A model for ageing-home-care service process improvement

    OpenAIRE

    Yu, Shu-Yan; Shie, An-Jin

    2017-01-01

    The purpose of this study was to develop an integrated model to improve service processes in ageing-home-care. According to the literature, existing service processes have potential service failures that affect service quality and efficacy. However, most previous studies have only focused on conceptual model development using New Service Development (NSD) and fail to provide a systematic model to analyse potential service failures and facilitate managers developing solutions to improve the se...

  1. 68Ga/177Lu-labeled DOTA-TATE shows similar imaging and biodistribution in neuroendocrine tumor model.

    Science.gov (United States)

    Liu, Fei; Zhu, Hua; Yu, Jiangyuan; Han, Xuedi; Xie, Qinghua; Liu, Teli; Xia, Chuanqin; Li, Nan; Yang, Zhi

    2017-06-01

    Somatostatin receptors are overexpressed in neuroendocrine tumors; their endogenous ligand is somatostatin. DOTA-TATE is an analogue of somatostatin that shows high binding affinity to somatostatin receptors. We aim to evaluate 68Ga/177Lu-labeled DOTA-TATE kits in a neuroendocrine tumor model for molecular imaging, and to attempt human positron emission tomography/computed tomography imaging of 68Ga-DOTA-TATE in neuroendocrine tumor patients. DOTA-TATE kits were formulated and radiolabeled with 68Ga/177Lu to give 68Ga/177Lu-DOTA-TATE (M-DOTA-TATE). In vitro and in vivo stability studies of 177Lu-DOTA-TATE were performed. Nude mice bearing human tumors were injected with 68Ga-DOTA-TATE or 177Lu-DOTA-TATE for micro-positron emission tomography and micro-single-photon emission computed tomography/computed tomography imaging, respectively, and clinical positron emission tomography/computed tomography images of 68Ga-DOTA-TATE were obtained at 1 h post-intravenous injection from patients with neuroendocrine tumors. Micro-positron emission tomography and micro-single-photon emission computed tomography/computed tomography imaging of 68Ga-DOTA-TATE and 177Lu-DOTA-TATE both showed clear tumor uptake, which could be blocked by excess DOTA-TATE. In addition, 68Ga-DOTA-TATE positron emission tomography/computed tomography imaging in neuroendocrine tumor patients could show primary and metastatic lesions. 68Ga-DOTA-TATE and 177Lu-DOTA-TATE accumulate in tumors in animal models, paving the way for better clinical peptide receptor radionuclide therapy for neuroendocrine tumor patients in the Asian population.

  2. Inspiration or deflation? Feeling similar or dissimilar to slim and plus-size models affects self-evaluation of restrained eaters.

    Science.gov (United States)

    Papies, Esther K; Nicolaije, Kim A H

    2012-01-01

    The present studies examined the effect of perceiving images of slim and plus-size models on restrained eaters' self-evaluation. While previous research has found that such images can lead to either inspiration or deflation, we argue that these inconsistencies can be explained by differences in perceived similarity with the presented model. The results of two studies (ns=52 and 99) confirmed this and revealed that restrained eaters with high (low) perceived similarity to the model showed more positive (negative) self-evaluations when they viewed a slim model, compared to a plus-size model. In addition, Study 2 showed that inducing in participants a similarities mindset led to more positive self-evaluations after viewing a slim compared to a plus-size model, but only among restrained eaters with a relatively high BMI. These results are discussed in the context of research on social comparison processes and with regard to interventions for protection against the possible detrimental effects of media images. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. A choice modelling analysis on the similarity between distribution utilities' and industrial customers' price and quality preferences

    International Nuclear Information System (INIS)

    Soederberg, Magnus

    2008-01-01

    The Swedish Electricity Act states that electricity distribution must comply with both price and quality requirements. In order to maintain efficient regulation it is necessary, first, to define quality attributes and, second, to determine customers' priorities concerning price and quality attributes. If distribution utilities gain an understanding of customer preferences and of customers' incentives for reporting them, the regulator can save a lot of time by surveying the utilities rather than their customers. This study applies a choice modelling methodology in which utilities and industrial customers are asked to evaluate the same twelve choice situations, in which price and four specific quality attributes are varied. The preferences expressed by the utilities, estimated by a random parameter logit, correspond quite well with the preferences expressed by the largest industrial customers. The preferences expressed by the utilities are reasonably homogeneous across forms of association (private limited, public, and trading partnership). If the regulator acts according to the preferences expressed by the utilities, smaller industrial customers will have to pay for quality they have not asked for. (author)

  4. Similarities and differences of serotonin and its precursors in their interactions with model membranes studied by molecular dynamics simulation

    Science.gov (United States)

    Wood, Irene; Martini, M. Florencia; Pickholz, Mónica

    2013-08-01

    In this work, we report a molecular dynamics (MD) simulation study of relevant biological molecules, namely serotonin (neutral and protonated) and its precursors, tryptophan and 5-hydroxy-tryptophan, in a fully hydrated bilayer of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidyl-choline (POPC). The simulations were carried out in the fluid lamellar phase of POPC at constant pressure and temperature. Two guest molecules of each type were initially placed in the water phase. We have analyzed the main localization, preferential orientation, and specific interactions of the guest molecules within the bilayer. During the simulation run, the four molecules were preferentially found at the water-lipid interphase. We found that the interactions that stabilize the systems are essentially hydrogen bonds, salt bridges, and cation-π interactions. None of the guest molecules has access to the hydrophobic region of the bilayer. Moreover, the zwitterionic molecules have access to the water phase, while protonated serotonin is anchored at the interphase. Even taking into account that these simulations were done using a model membrane, our results suggest that the studied molecules could not cross the blood-brain barrier by diffusion. These results are in good agreement with works showing that serotonin and Trp do not cross the BBB by simple diffusion.

  5. Improved model management with aggregated business process models

    NARCIS (Netherlands)

    Reijers, H.A.; Mans, R.S.; Toorn, van der R.A.

    2009-01-01

    Contemporary organizations invest much effort in creating models of their business processes. This raises the issue of how to deal with large sets of process models that become available over time. This paper proposes an extension of Event-driven Process Chains, called the aggregate EPC (aEPC),

  6. Improved Solar-Radiation-Pressure Models for GPS Satellites

    Science.gov (United States)

    Bar-Sever, Yoaz; Kuang, Da

    2006-01-01

    A report describes a series of computational models conceived as an improvement over prior models for determining effects of solar-radiation pressure on orbits of Global Positioning System (GPS) satellites. These models are based on fitting coefficients of Fourier functions of Sun-spacecraft- Earth angles to observed spacecraft orbital motions.

  7. Bayesian Data Assimilation for Improved Modeling of Road Traffic

    NARCIS (Netherlands)

    Van Hinsbergen, C.P.Y.

    2010-01-01

    This thesis deals with the optimal use of existing models that predict certain phenomena of the road traffic system. Such models are extensively used in Advanced Traffic Information Systems (ATIS), Dynamic Traffic Management (DTM) or Model Predictive Control (MPC) approaches in order to improve the

  8. Perinatal administration of aromatase inhibitors in rodents as animal models of human male homosexuality: similarities and differences.

    Science.gov (United States)

    Olvera-Hernández, Sandra; Fernández-Guasti, Alonso

    2015-01-01

    In this chapter we briefly review the evidence supporting the existence of biological influences on sexual orientation. We focus on basic research studies that have affected estrogen synthesis during the critical periods of brain sexual differentiation in male rat offspring, using aromatase inhibitors such as 1,4,6-androstatriene-3,17 (ATD) and letrozole. The results after prenatal and/or postnatal treatment with ATD reveal that these animals, when adult, show female sexual responses, such as lordosis or proceptive behaviors, but retain their ability to display male sexual activity with a receptive female. Interestingly, the preference and sexual behavior of these rats vary depending upon the circadian rhythm. Recently, we have established that treatment with low doses of letrozole during the second half of pregnancy produces male rat offspring that, when adult, spend more time in the company of a sexually active male than with a receptive female in a preference test. In addition, they display female sexual behavior when forced to interact with a sexually experienced male and some typical male sexual behavior when faced with a sexually receptive female. Interestingly, these males displayed both sexual behavior patterns spontaneously, i.e., in the absence of exogenous steroid hormone treatment. Most of these features correspond with those found in human male homosexuals; however, the "bisexual" behavior shown by the letrozole-treated rats may be related to a particular human population. All these data, taken together, allow us to propose prenatal letrozole treatment as a suitable animal model for studying human male homosexuality and reinforce the hypothesis that human sexual orientation is underlain by changes in the endocrine milieu during early development.

  9. An improved large signal model of InP HEMTs

    Science.gov (United States)

    Li, Tianhao; Li, Wenjun; Liu, Jun

    2018-05-01

    An improved large-signal model for InP HEMTs is proposed in this paper. The channel current and charge model equations are constructed based on the Angelov model equations. The equations for the channel current and gate charge models are continuous and differentiable to high order, and the proposed gate charge model satisfies charge conservation. To account for the strong leakage-induced barrier reduction effect of InP HEMTs, the Angelov current model equations are improved, so that the channel current model fits the DC performance of the devices. A 2 × 25 μm × 70 nm InP HEMT device is used to demonstrate the extraction and validation of the model, which accurately predicts the DC I–V and C–V characteristics and the bias-dependent S-parameters. Project supported by the National Natural Science Foundation of China (No. 61331006).
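    A minimal sketch of the Angelov (Chalmers) drain-current equation on which the improved model builds, keeping only the first-order term of the ψ polynomial; all parameter values are illustrative assumptions, not the extracted values for the 2 × 25 μm × 70 nm device.

```python
import numpy as np

# Simplified Angelov drain current:
#   Ids = Ipk * (1 + tanh(psi)) * (1 + lam*Vds) * tanh(alpha*Vds)
# with psi = P1*(Vgs - Vpk); the higher-order terms P2, P3 of the full
# model are omitted in this sketch.
def angelov_ids(vgs, vds, Ipk=0.02, Vpk=-0.15, P1=1.5, alpha=2.0, lam=0.05):
    psi = P1 * (vgs - Vpk)
    return Ipk * (1.0 + np.tanh(psi)) * (1.0 + lam * vds) * np.tanh(alpha * vds)

# At Vgs = Vpk the tanh(psi) term vanishes, so
# Ids = Ipk * (1 + lam*Vds) * tanh(alpha*Vds)
print(angelov_ids(-0.15, 1.0))
```

    The smooth tanh construction is what makes the equations continuous and differentiable to high order, a property convergence-sensitive harmonic-balance simulators rely on.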

  10. Motivation to Improve Work through Learning: A Conceptual Model

    Directory of Open Access Journals (Sweden)

    Kueh Hua Ng

    2014-12-01

    Full Text Available This study aims to enhance our current understanding of the transfer of training by proposing a conceptual model that supports the mediating role of motivation to improve work through learning in the relationship between social support and the transfer of training. Examining the motivation-to-improve-work-through-learning construct offers a holistic view of a learner's profile in a workplace setting, one which emphasizes learning for the improvement of work performance. The proposed conceptual model is expected to benefit human resource development theory building, as well as field practitioners, by emphasizing the motivational aspects crucial for successful transfer of training.

  11. Improved Kinetic Models for High-Speed Combustion Simulation

    National Research Council Canada - National Science Library

    Montgomery, C. J; Tang, Q; Sarofim, A. F; Bockelie, M. J; Gritton, J. K; Bozzelli, J. W; Gouldin, F. C; Fisher, E. M; Chakravarthy, S

    2008-01-01

    Report developed under an STTR contract. The overall goal of this STTR project has been to improve the realism of chemical kinetics in computational fluid dynamics modeling of hydrocarbon-fueled scramjet combustors...

  12. Improved gap conductance model for the TRAC code

    International Nuclear Information System (INIS)

    Hatch, S.W.; Mandell, D.A.

    1980-01-01

    The purpose of the present work, as indicated earlier, is to improve the present constant fuel-clad spacing model in TRAC-P1A without significantly increasing the computer costs. It is realized that the simple model proposed may not be accurate enough for some cases, but for the initial calculations made, the DELTAR model improves the predictions over the constant Δr results of TRAC-P1A, and the additional computing costs are negligible.

  13. Fold-recognition and comparative modeling of human α2,3-sialyltransferases reveal their sequence and structural similarities to CstII from Campylobacter jejuni

    Directory of Open Access Journals (Sweden)

    Balaji Petety V

    2006-04-01

    Full Text Available Abstract Background The 3-D structure of none of the eukaryotic sialyltransferases (SiaTs) has been determined so far. Sequence alignment algorithms such as BLAST and PSI-BLAST could not detect a homolog of these enzymes in the protein databank. SiaTs thus belong to the hard/medium target category in the CASP experiments. The objective of the current work is to model the 3-D structures of the human SiaTs which transfer sialic acid in α2,3-linkage, viz., ST3Gal I, II, III, IV, V, and VI, using fold-recognition and comparative modeling methods. The pair-wise sequence similarity among these six enzymes ranges from 41 to 63%. Results Unlike the sequence similarity servers, fold-recognition servers identified CstII, an α2,3/8 dual-activity SiaT from Campylobacter jejuni, as the homolog of all six ST3Gals; the level of sequence similarity between CstII and the ST3Gals is only 15–20%, and the similarity is restricted to the well-characterized motif regions of the ST3Gals. Deriving template-target sequence alignments for the entire ST3Gal sequence was not straightforward: the fold-recognition servers could not find a template for the region preceding the L-motif or for that between the L- and S-motifs. Multiple structural templates were identified to model these regions, and template identification, modeling, and evaluation had to be performed iteratively to choose the most appropriate templates. The modeled structures have acceptable stereochemical properties and are also able to provide qualitative rationalizations for some of the site-directed mutagenesis results reported in the literature. Apart from the predicted models, an unexpected but valuable finding from this study is the sequence and structural relatedness of family GT42 and family GT29 SiaTs. Conclusion The modeled 3-D structures can be used for docking and other modeling studies and for the rational identification of residues to be mutated to impart desired properties such as altered stability, substrate

  14. A participatory model for improving occupational health and safety: improving informal sector working conditions in Thailand.

    Science.gov (United States)

    Manothum, Aniruth; Rukijkanpanich, Jittra; Thawesaengskulthai, Damrong; Thampitakkul, Boonwa; Chaikittiporn, Chalermchai; Arphorn, Sara

    2009-01-01

    The purpose of this study was to evaluate the implementation of an Occupational Health and Safety Management Model for informal sector workers in Thailand. The studied model was characterized by participatory approaches to preliminary assessment, observation of informal business practices, group discussion and participation, and the use of environmental measurements and samples. This model consisted of four processes: capacity building, risk analysis, problem solving, and monitoring and control. The participants consisted of four local labor groups from different regions, including wood carving, hand-weaving, artificial flower making, and batik processing workers. The results demonstrated that, as a result of applying the model, the working conditions of the informal sector workers had improved to meet necessary standards. This model encouraged the use of local networks, which led to cooperation within the groups to create appropriate technologies to solve their problems. The authors suggest that this model could effectively be applied elsewhere to improve informal sector working conditions on a broader scale.

  15. Improvement and Validation of Weld Residual Stress Modelling Procedure

    International Nuclear Information System (INIS)

    Zang, Weilin; Gunnars, Jens; Dong, Pingsha; Hong, Jeong K.

    2009-06-01

    The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, by taking advantage of the recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through thickness stress distributions by validation to experimental measurements. Three austenitic stainless steel butt-welds cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure, and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available then a mixed hardening model should be used

  16. Fedora Content Modelling for Improved Services for Research Databases

    DEFF Research Database (Denmark)

    Elbæk, Mikael Karstensen; Heller, Alfred; Pedersen, Gert Schmeltz

    A re-implementation of the research database of the Technical University of Denmark, DTU, is based on Fedora. The backbone consists of content models for primary and secondary entities and their relationships, giving flexible and powerful extraction capabilities for interoperability and reporting....... By adopting such an abstract data model, the platform enables new and improved services for researchers, librarians and administrators....

  17. Improvement of the projection models for radiogenic cancer risk

    International Nuclear Information System (INIS)

    Tong Jian

    2005-01-01

    Calculations of radiogenic cancer risk are based on risk projection models for specific cancer sites. Improvements have been made to the parameters used in the previous models, including the introduction of mortality and morbidity risk coefficients and of age- and gender-specific risk coefficients. These coefficients have been applied to calculate the radiogenic cancer risks for specific organs and radionuclides under different exposure scenarios. (authors)

  18. Improvement and Validation of Weld Residual Stress Modelling Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Weilin; Gunnars, Jens (Inspecta Technology AB, Stockholm (Sweden)); Dong, Pingsha; Hong, Jeong K. (Center for Welded Structures Research, Battelle, Columbus, OH (United States))

    2009-06-15

    The objective of this work is to identify and evaluate improvements to the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism in residual stress assumptions. The study focused on the development and validation of an improved weld residual stress modelling procedure, taking advantage of recent advances in residual stress modelling and stress measurement techniques. The major changes in the new weld residual stress modelling procedure are: - An improved procedure for heat source calibration based on the use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data are not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions through validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From these cases it is evident that there can be large differences between the residual stresses predicted using the new procedure and those from the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data are available, then a mixed hardening model should be used.

  19. Concurrent validity and clinical utility of the HCR-20V3 compared with the HCR-20 in forensic mental health nursing: similar tools but improved method.

    Science.gov (United States)

    Bjørkly, Stål; Eidhammer, Gunnar; Selmer, Lars Erik

    2014-01-01

    The main scope of this small-scale investigation was to compare clinical application of the HCR-20V3 with its predecessor, the HCR-20. To explore concurrent validity, two experienced nurses assessed 20 forensic mental health service patients with both tools. Estimates of internal consistency for the HCR-20 and the HCR-20V3 were calculated by Cronbach's alpha at two levels of measurement: the H-, C-, and R-scales and the total sum scores. We found moderate (C-scale) to good (H- and R-scales and aggregate scores) estimates of internal consistency and significant differences between the two versions of the HCR. This finding indicates that the two versions reflect common underlying dimensions, yet differences still appear between V2 and V3 ratings for the same patients. A case from forensic mental health was used to illustrate similarities and differences in assessment results between the two HCR-20 versions. The case illustration depicts clinical use of the HCR-20V3 and the application of two structured nursing interventions pertaining to the risk management part of the tool. In our experience, Version 3 is superior to Version 2 concerning: (a) item clarity; (b) the distinction between presence and relevance of risk factors; (c) the integration of risk formulation and risk scenario; and (d) the explicit demand to construct a risk management plan as part of the standard assessment procedure.
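
    The internal-consistency figures above are Cronbach's alpha estimates. As a sketch of the computation (the item scores below are invented for illustration and are not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a k-item scale.

    items: one list of scores per item (all of equal length n).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    using population variances throughout.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Hypothetical ratings of 5 patients on a 3-item scale (0-4 per item):
h_items = [[2, 3, 1, 4, 2], [1, 3, 2, 4, 1], [2, 2, 1, 3, 2]]
alpha = cronbach_alpha(h_items)
```

    With these toy scores alpha is about 0.87, which would count as "good" internal consistency on the usual rules of thumb.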

  20. Improving Catastrophe Modeling for Business Interruption Insurance Needs.

    Science.gov (United States)

    Rose, Adam; Huyck, Charles K

    2016-10-01

    While catastrophe (CAT) modeling of property damage is well developed, modeling of business interruption (BI) lags far behind. One reason is the crude nature of functional relationships in CAT models that translate property damage into BI. Another is that estimating BI losses is more complicated because it depends greatly on public and private decisions during recovery with respect to resilience tactics that dampen losses by using remaining resources more efficiently to maintain business function and to recover more quickly. This article proposes a framework for improving hazard loss estimation for BI insurance needs. Improved data collection that allows for analysis at the level of individual facilities within a company can improve matching the facilities with the effectiveness of individual forms of resilience, such as accessing inventories, relocating operations, and accelerating repair, and can therefore improve estimation accuracy. We then illustrate the difference this can make in a case study example of losses from a hurricane. © 2016 Society for Risk Analysis.

  1. Improvement of a near wake model for trailing vorticity

    DEFF Research Database (Denmark)

    Pirrung, Georg; Hansen, Morten Hartvig; Aagaard Madsen, Helge

    2014-01-01

    A near wake model, originally proposed by Beddoes, is further developed. The purpose of the model is to account for the radially dependent time constants of the fast aerodynamic response and to provide a tip loss correction. It is based on lifting line theory and models the downwash due to roughly...... the first 90 degrees of rotation. This restriction of the model to the near wake allows for using a computationally efficient indicial function algorithm. The aim of this study is to improve the accuracy of the downwash close to the root and tip of the blade and to decrease the sensitivity of the model...... to temporal discretization, both regarding numerical stability and quality of the results. The modified near wake model is coupled to an aerodynamics model, which consists of a blade element momentum model with dynamic inflow for the far wake and a 2D shed vorticity model that simulates the unsteady buildup...

  2. Titan I propulsion system modeling and possible performance improvements

    Science.gov (United States)

    Giusti, Oreste

    This thesis features the Titan I propulsion systems and offers data-supported suggestions for improvements to increase performance. The original propulsion systems were modeled both graphically in CAD and via equations. Due to the limited availability of published information, it was necessary to create a more detailed, secondary set of models. Various engineering equations pertinent to rocket engine design were implemented to generate the desired extra detail. This study describes how these new models were then imported into the ESI CFD Suite. Various parameters were applied to the imported models as inputs, including, for example, bi-propellant combinations, pressures, temperatures, and mass flow rates. The results were then processed with the visualization software ESI VIEW. The output files were analyzed for forces in the nozzle, and various results were generated, including sea-level thrust and specific impulse (Isp). Experimental data are provided to compare the original engine configuration models to the derivative suggested-improvement models.
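
    The sea-level thrust and Isp outputs mentioned above follow from standard one-dimensional nozzle relations. A minimal sketch, with purely illustrative numbers (not values from the thesis):

```python
G0 = 9.80665  # standard gravity, m/s^2

def thrust(mdot_kg_s, v_exit, p_exit, p_ambient, a_exit):
    """1-D nozzle thrust: momentum term plus pressure-area term (N)."""
    return mdot_kg_s * v_exit + (p_exit - p_ambient) * a_exit

def specific_impulse(thrust_n, mdot_kg_s):
    """Isp in seconds from thrust and total propellant mass flow rate."""
    return thrust_n / (mdot_kg_s * G0)

# Illustrative sea-level case (hypothetical numbers, booster-class engine):
F = thrust(mdot_kg_s=600.0, v_exit=2400.0, p_exit=55_000.0,
           p_ambient=101_325.0, a_exit=1.2)
isp = specific_impulse(F, 600.0)
```

    At sea level the pressure term is negative for an under-expanded exit pressure below ambient, which is why sea-level Isp is lower than vacuum Isp for the same engine.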

  3. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through changes in equipment, and the model can easily be applied to both manufacturing and service industries.
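
    The selection problem above can be illustrated with a toy version: choose the subset of improvement techniques that maximizes total productivity gain within a budget. A brute-force sketch (the actual model is a mixed integer program over fifty-four techniques, where enumeration would not scale; the data here are hypothetical):

```python
from itertools import combinations

def best_selection(techniques, budget):
    """Exhaustively pick the subset of techniques with maximum total
    gain whose total cost fits the budget.

    techniques: list of (name, gain, cost) tuples.
    Returns (selected names, total gain).
    """
    best, best_gain = (), 0.0
    for r in range(len(techniques) + 1):
        for subset in combinations(techniques, r):
            cost = sum(t[2] for t in subset)
            gain = sum(t[1] for t in subset)
            if cost <= budget and gain > best_gain:
                best, best_gain = subset, gain
    return [t[0] for t in best], best_gain

# Hypothetical techniques: (name, productivity gain, cost)
techs = [("training", 4.0, 3.0), ("new equipment", 7.0, 6.0),
         ("layout change", 3.0, 2.0), ("maintenance plan", 2.0, 2.0)]
chosen, gain = best_selection(techs, budget=8.0)
```

    A real instance would hand the same objective and budget constraint to an integer programming solver instead of enumerating subsets.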

  4. Low- and High-Volume Water-Based Resistance Training Induces Similar Strength and Functional Capacity Improvements in Older Women: A Randomized Study.

    Science.gov (United States)

    Reichert, Thaís; Delevatti, Rodrigo Sudatti; Prado, Alexandre Konig Garcia; Bagatini, Natália Carvalho; Simmer, Nicole Monticelli; Meinerz, Andressa Pellegrini; Barroso, Bruna Machado; Costa, Rochelle Rocha; Kanitz, Ana Carolina; Kruel, Luiz Fernando Martins

    2018-03-27

    Water-based resistance training (WRT) has been indicated to promote strength gains in the elderly population. However, no study has compared different training strategies to identify the most efficient one. The aim of this study was to compare the effects of 3 WRT strategies on the strength and functional capacity of older women. In total, 36 women were randomly allocated to training groups: a single set of 30 seconds [1 × 30s; 66.41 (1.36) y; n = 12], multiple sets of 10 seconds [3 × 10s; 66.50 (1.43) y; n = 11], and a single set of 10 seconds [1 × 10s; 65.23 (1.09) y; n = 13]. Training lasted for 12 weeks. The maximal dynamic strength (in kilograms) and muscular endurance (number of repetitions) of knee extension, knee flexion, elbow flexion, and bench press, as well as functional capacity (number of repetitions), were evaluated. All types of training promoted similar gains in maximal dynamic strength of knee extension and flexion as well as elbow flexion. Only the 1 × 30s and 1 × 10s groups presented increments in bench press maximal strength. All 3 groups showed increases in muscular endurance in all exercises and in functional capacity. WRT using long- or short-duration single sets promotes the same gains in strength and functional capacity in older women as does WRT using multiple sets.

  5. An improved market penetration model for wind energy technology forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Lund, P D [Helsinki Univ. of Technology, Espoo (Finland). Advanced Energy Systems

    1996-12-31

    An improved market penetration model with application to wind energy forecasting is presented. In the model, a technology diffusion model and a manufacturing learning curve are combined. Based on an 85% progress ratio found for European wind manufacturers and on wind market statistics, an additional wind power capacity of ca 4 GW is needed in Europe to reach a 30% price reduction. A full breakthrough to low-cost utility bulk power markets could be achieved at a 24 GW level. (author)
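
    The figures quoted above can be sanity-checked with the standard learning-curve (progress ratio) formula alone. A hedged sketch, assuming only the textbook form cost(x) = cost0 * (x/x0)^log2(PR) and nothing else from the paper:

```python
import math

def capacity_growth_for_cost_cut(progress_ratio, cost_fraction):
    """Factor by which cumulative capacity must grow so that unit cost
    falls to `cost_fraction` of its current value, under the learning
    curve cost(x) = cost0 * (x/x0) ** log2(progress_ratio).
    """
    exponent = math.log(progress_ratio, 2)   # negative for PR < 1
    return cost_fraction ** (1.0 / exponent)

# 85% progress ratio, target 30% price reduction (cost falls to 0.70):
growth = capacity_growth_for_cost_cut(0.85, 0.70)
```

    With an 85% progress ratio, cumulative capacity must grow roughly 4.6-fold to cut unit cost by 30%; if Europe's installed base at the time was on the order of 1 GW (an assumption for illustration, not a figure from the abstract), that is consistent with the quoted "ca 4 GW" of additional capacity.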

  6. An improved market penetration model for wind energy technology forecasting

    International Nuclear Information System (INIS)

    Lund, P.D.

    1995-01-01

    An improved market penetration model with application to wind energy forecasting is presented. In the model, a technology diffusion model and a manufacturing learning curve are combined. Based on an 85% progress ratio found for European wind manufacturers and on wind market statistics, an additional wind power capacity of ca 4 GW is needed in Europe to reach a 30% price reduction. A full breakthrough to low-cost utility bulk power markets could be achieved at a 24 GW level. (author)

  7. An improved market penetration model for wind energy technology forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Lund, P.D. [Helsinki Univ. of Technology, Espoo (Finland). Advanced Energy Systems

    1995-12-31

    An improved market penetration model with application to wind energy forecasting is presented. In the model, a technology diffusion model and a manufacturing learning curve are combined. Based on an 85% progress ratio found for European wind manufacturers and on wind market statistics, an additional wind power capacity of ca 4 GW is needed in Europe to reach a 30% price reduction. A full breakthrough to low-cost utility bulk power markets could be achieved at a 24 GW level. (author)

  8. INTEGRATED COST MODEL FOR IMPROVING THE PRODUCTION IN COMPANIES

    Directory of Open Access Journals (Sweden)

    Zuzana Hajduova

    2014-12-01

    Purpose: All processes in a company play an important role in ensuring a functional integrated management system. We point out the need for a systematic approach to the use of quantitative, and especially statistical, methods for modelling the costs of the improvement activities that are part of an integrated management system. The development of integrated management systems worldwide leads towards building systematic procedures for the implementation, maintenance and improvement of all systems according to the requirements of all parties involved. Methodology: Statistical evaluation of the economic indicators of improvement costs, and the need for a systematic approach to their management in terms of integrated management systems, have also taken on a key role in the management of processes in the company Cu Drôt, a.s. The aim of this publication is to highlight the importance of the proper implementation of statistical methods in improvement cost management in an integrated management system under current market conditions, and to document the legitimacy of a systematic approach to monitoring and analysing improvement indicators with the aim of efficient process management of the company. We provide a specific example of the implementation of appropriate statistical methods in the production of copper wire in the company Cu Drôt, a.s. This publication also aims to create a model for the estimation of integrated improvement costs, which, through the use of statistical methods in the company Cu Drôt, a.s., is used to support decision-making on improving efficiency. Findings: In the present publication, a method for modelling the improvement process in an integrated manner is proposed. It is a method in which the basic attributes of improvement in quality, safety and environment are considered and synergistically combined in the same improvement project. The work examines the use of sophisticated quantitative, especially

  9. Process correlation analysis model for process improvement identification.

    Science.gov (United States)

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by a process assessment identifying strengths and weaknesses, and based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the software process improvement effort as the base. CMMI defines a set of process areas involved in software development and what is to be carried out in each process area in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort required and the lack of the necessary expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data on improvement practices. We evaluate the model using industrial data.

  10. Improvement of the design model for SMART fuel assembly

    International Nuclear Information System (INIS)

    Zee, Sung Kyun; Yim, Jeong Sik

    2001-04-01

    A study on the design improvement of the TEP, BEP and holddown spring of a fuel assembly for SMART was performed. The Cut Boundary Interpolation Method was applied to obtain more accurate stress and strain distributions from the results of the coarse-model calculation. The improved results were compared with those of the coarse model. The finer model predicted slightly higher stress and strain distributions than the coarse model, which meant that the results of the coarse model had not converged. Considering that the test results always showed much lower stress than the FEM, and given the location of the peak stress in the refined model, the pressure stress at the loading point seemed to contribute significantly to the stresses. Judging from the fact that the peak stress appeared only in a local area, the results of the refined model were considered a sufficiently conservative prediction of the stress levels. The slot of the guide thimble screw was ignored in order to determine how much the thickness of the flow plate can be reduced when optimizing the thickness, while the cut-off of the screw dent hole was included to reflect the actual geometry. For the BEP, the leg and web were also included in the model, and the results with and without the leg alignment support were compared. Finally, the holddown spring, which is important for the in-reactor behavior of the FA, was modeled more realistically and improved to include the effects of friction between the leaves and the loading surface. Using this improved model, the spring characteristics could be predicted in closer agreement with the test results. From the analysis of the spring characteristics, the local plastic area dominantly controlled the characteristics of the spring, which implied that the design of the leaf needs to be optimized to improve the plastic behavior of the leaf spring.

  11. A Mathematical Model to Improve the Performance of Logistics Network

    Directory of Open Access Journals (Sweden)

    Muhammad Izman Herdiansyah

    2012-01-01

    The role of logistics nowadays is expanding from just providing transportation and warehousing to offering total integrated logistics. To remain competitive in the global market environment, business enterprises need to improve the performance of their logistics operations. The improvement will be achieved when we can provide a comprehensive analysis and optimize network performance. In this paper, a mixed integer linear model for optimizing logistics network performance is developed. It provides a single-product multi-period multi-facility model, as well as the multi-product concept. The problem is modeled in the form of a network flow problem with the main objective of minimizing total logistics cost. The problem can be solved using commercial linear programming packages such as CPLEX or LINDO. Even for small cases, the solver in Excel may also be used to solve such a model. Keywords: logistics network, integrated model, mathematical programming, network optimization
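
    A minimal sketch of the kind of network-flow problem the paper formulates, solved here by successive shortest paths rather than a commercial LP/MIP package (the four-node network and its costs are hypothetical):

```python
def min_cost_flow(n, edges, source, sink, demand):
    """Minimum-cost flow by successive shortest paths (Bellman-Ford).

    n: number of nodes; edges: list of (u, v, capacity, unit_cost).
    Returns the total cost of shipping `demand` units source -> sink.
    """
    # Residual graph: each edge is [to, residual_cap, cost, reverse_index].
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])

    total_cost = 0
    while demand > 0:
        # Cheapest augmenting path (Bellman-Ford handles negative residuals).
        INF = float("inf")
        dist = [INF] * n
        parent = [None] * n        # (node, edge index) used to reach each node
        dist[source] = 0
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, i)
        if dist[sink] == INF:
            raise ValueError("demand exceeds network capacity")
        # Push the maximum amount along the cheapest path.
        push, v = demand, sink
        while v != source:
            u, i = parent[v]
            push = min(push, graph[u][i][1])
            v = u
        v = sink
        while v != source:
            u, i = parent[v]
            graph[u][i][1] -= push
            graph[graph[u][i][0]][graph[u][i][3]][1] += push
            v = u
        total_cost += push * dist[sink]
        demand -= push
    return total_cost

# Hypothetical network: 0 = plant, 1/2 = warehouses, 3 = customer.
cost = min_cost_flow(
    4,
    [(0, 1, 10, 2), (0, 2, 10, 4), (1, 3, 6, 3), (2, 3, 10, 1)],
    source=0, sink=3, demand=12)
```

    The paper's multi-period multi-facility model adds time indices and facility constraints on top of this same flow-conservation core, which is why general LP/MIP solvers are the natural tool there.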

  12. Improvement of a combustion model in MELCOR code

    International Nuclear Information System (INIS)

    Ogino, Masao; Hashimoto, Takashi

    1999-01-01

    NUPEC has been improving a hydrogen combustion model in the MELCOR code for severe accident analysis. In the proposed combustion model, the flame velocity in a node was predicted using five different flame front shapes: fireball, prism, bubble, spherical jet, and plane jet. For validation of the proposed model, the results of the Battelle multi-compartment hydrogen combustion tests were used. The test cases selected for the study were Hx-6, 13, 14, 20 and Ix-2, which had two, three or four compartments under homogeneous hydrogen concentrations of 5 to 10 vol%. On the whole, the proposed model predicted the combustion behavior in multi-compartment containment geometry well. The MELCOR code, incorporating the present combustion model, can simulate combustion behavior during a severe accident with acceptable computing time and some degree of accuracy. The applicability study of the improved MELCOR code for actual reactor plants will be continued. (author)

  13. Can model weighting improve probabilistic projections of climate change?

    Energy Technology Data Exchange (ETDEWEB)

    Raeisaenen, Jouni; Ylhaeisi, Jussi S. [Department of Physics, P.O. Box 48, University of Helsinki (Finland)

    2012-10-15

    Recently, Raeisaenen and co-authors proposed a weighting scheme in which the relationship between observable climate and climate change within a multi-model ensemble determines to what extent agreement with observations affects model weights in climate change projection. Within the Third Coupled Model Intercomparison Project (CMIP3) dataset, this scheme slightly improved the cross-validated accuracy of deterministic projections of temperature change. Here the same scheme is applied to probabilistic temperature change projection, under the strong limiting assumption that the CMIP3 ensemble spans the actual modeling uncertainty. Cross-validation suggests that probabilistic temperature change projections may also be improved by this weighting scheme. However, the improvement relative to uniform weighting is smaller in the tail-sensitive logarithmic score than in the continuous ranked probability score. The impact of the weighting on projection of real-world twenty-first century temperature change is modest in most parts of the world. However, in some areas mainly over the high-latitude oceans, the mean of the distribution is substantially changed and/or the distribution is considerably narrowed. The weights of individual models vary strongly with location, so that a model that receives nearly zero weight in some area may still get a large weight elsewhere. Although the details of this variation are method-specific, it suggests that the relative strengths of different models may be difficult to harness by weighting schemes that use spatially uniform model weights. (orig.)
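
    A simplified illustration of performance-based model weighting (this is a generic inverse-error scheme, not the specific method of Raeisaenen and co-authors, and all numbers are invented):

```python
def skill_weights(model_hist, observed):
    """Normalized weights inversely proportional to each model's mean
    squared error against observations (assumes no model is error-free)."""
    inv = []
    for hist in model_hist:
        mse = sum((h - o) ** 2 for h, o in zip(hist, observed)) / len(observed)
        inv.append(1.0 / mse)
    total = sum(inv)
    return [w / total for w in inv]

def weighted_projection(weights, projections):
    """Weighted ensemble mean of the models' projected changes."""
    return sum(w * p for w, p in zip(weights, projections))

# Invented three-model ensemble: historical simulations vs. observations,
# then each model's projected 21st-century warming (degrees C).
obs = [14.0, 14.2, 14.1]
hist = [[14.1, 14.3, 14.0],   # close to obs  -> large weight
        [13.0, 13.5, 13.2],   # large bias    -> small weight
        [14.3, 14.5, 14.4]]   # moderate bias
w = skill_weights(hist, obs)
proj = weighted_projection(w, [2.0, 3.5, 2.6])
```

    As the abstract notes, the real difficulty is that such weights vary strongly with location and metric, so a single spatially uniform weight per model may not capture a model's relative strengths.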

  14. Reranking candidate gene models with cross-species comparison for improved gene prediction

    Directory of Open Access Journals (Sweden)

    Pereira Fernando CN

    2008-10-01

    Background: Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc.). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features. Results: We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+. Conclusion: Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models.
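
    The reranking step can be sketched as a convex combination of the gene finder's score and a cross-species similarity score (a schematic simplification; the actual method uses richer global scoring rules, and the candidates below are hypothetical):

```python
def rerank(candidates, alpha=0.5):
    """Rerank K-best candidate gene models for one locus.

    candidates: list of (name, finder_score, ortholog_similarity), with
    both scores assumed to lie on [0, 1]. alpha mixes the original gene
    finder score with the cross-species similarity score.
    Returns names in best-first order.
    """
    scored = [(alpha * f + (1 - alpha) * s, name)
              for name, f, s in candidates]
    scored.sort(reverse=True)
    return [name for _, name in scored]

# Hypothetical K=3 candidates: model_B ranks second for the gene finder
# alone but first once ortholog similarity is taken into account.
order = rerank([("model_A", 0.90, 0.40),
                ("model_B", 0.85, 0.95),
                ("model_C", 0.70, 0.55)])
```

    This captures the abstract's point that a lower-scoring candidate can win once strong similarity to a probable ortholog is credited.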

  15. Kefir drink causes a significant yet similar improvement in serum lipid profile, compared with low-fat milk, in a dairy-rich diet in overweight or obese premenopausal women: A randomized controlled trial.

    Science.gov (United States)

    Fathi, Yasamin; Ghodrati, Naeimeh; Zibaeenezhad, Mohammad-Javad; Faghih, Shiva

    Controversy exists as to whether the lipid-lowering properties of kefir drink (a fermented probiotic dairy product) in animal models could be replicated in humans. To assess and compare the potential lipid-lowering effects of kefir drink with low-fat milk in a dairy-rich diet in overweight or obese premenopausal women. In this 8-week, single-center, multiarm, parallel-group, outpatient, randomized controlled trial, 75 eligible Iranian women aged 25 to 45 years were randomly allocated to kefir, milk, or control groups. Women in the control group received a weight-maintenance diet containing 2 servings/d of low-fat dairy products, whereas subjects in the milk and kefir groups received a similar diet containing 2 additional servings/d (a total of 4 servings/d) of dairy products from low-fat milk or kefir drink, respectively. At baseline and study end point, serum levels/ratios of total cholesterol (TC), low- and high-density lipoprotein cholesterol (LDLC and HDLC), triglyceride, Non-HDLC, TC/HDLC, LDLC/HDLC, and triglyceride/LDLC were measured as outcome measures. After 8 weeks, subjects in the kefir group had significantly lower serum levels/ratios of lipoproteins than those in the control group (mean between-group differences were -10.4 mg/dL, -9.7 mg/dL, -11.5 mg/dL, -0.4, and -0.3 for TC, LDLC, non-HDLC, TC/HDLC, and LDLC/HDLC, respectively; all P < .05). Similar results were observed in the milk group. However, no such significant differences were found between the kefir and milk groups. Kefir drink causes a significant yet similar improvement in serum lipid profile, compared with low-fat milk, in a dairy-rich diet in overweight or obese premenopausal women. Copyright © 2016 National Lipid Association. Published by Elsevier Inc. All rights reserved.

  16. Both food restriction and high-fat diet during gestation induce low birth weight and altered physical activity in adult rat offspring: the "Similarities in the Inequalities" model.

    Directory of Open Access Journals (Sweden)

    Fábio da Silva Cunha

    We have previously described a theoretical model in humans, called "Similarities in the Inequalities", in which extremely unequal social backgrounds coexist in a complex scenario promoting similar health outcomes in adulthood. To further explore the potential applicability of the "similarities in the inequalities" phenomenon, this study used a rat model to investigate the effect of different nutritional backgrounds during gestation on the willingness of offspring to engage in physical activity in adulthood. Sprague-Dawley rats were time-mated and randomly allocated to one of three dietary groups: control (Adlib), receiving standard laboratory chow ad libitum; 50% food restricted (FR), receiving 50% of the ad libitum-fed dam's habitual intake; or high-fat diet (HF), receiving a diet containing 23% fat. The diets were provided from day 10 of pregnancy until weaning. Within 24 hours of birth, pups were cross-fostered to other dams, forming the following groups: Adlib_Adlib, FR_Adlib, and HF_Adlib. Maternal chow consumption and weight gain, and offspring birth weight, growth, physical activity (one week of free exercise in running wheels), abdominal adiposity and biochemical data were evaluated. Western blot was performed to assess D2 receptors in the dorsal striatum. The "similarities in the inequalities" effect was observed for birth weight (both the FR and HF groups were smaller than the Adlib group at birth) and physical activity (both the FR_Adlib and HF_Adlib groups differed from the Adlib_Adlib group, with less active males and more active females). Our findings support the view that health inequalities in fetal life may program health outcomes manifested in offspring adult life (such as altered physical activity and metabolic parameters), probably through different biological mechanisms.

  17. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    Directory of Open Access Journals (Sweden)

    Shao Jie

    2014-01-01

    A model based on an improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with memory effects. In this model, the hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions. The error curves of the sum of squared errors (SSE), varying with the number of hidden neurons and the iteration step, are studied to determine the number of hidden layer neurons. Simulation results for a half-bridge class-D power amplifier (CDPA), with a two-tone signal and broadband signals as input, have shown that the proposed behavioral model can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance.
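
    The key idea above, replacing sigmoid activations with Chebyshev orthogonal basis functions in an Elman-style recurrent cell, can be sketched as follows (the tiny sizes, weights, and context-update rule here are illustrative assumptions, not the paper's architecture):

```python
def chebyshev_basis(x, degree):
    """T_0..T_degree at x (|x| <= 1), via the recurrence
    T_n(x) = 2x*T_{n-1}(x) - T_{n-2}(x)."""
    t = [1.0, x]
    for _ in range(2, degree + 1):
        t.append(2 * x * t[-1] - t[-2])
    return t[:degree + 1]

def forward(x, context, w_in, w_ctx, w_out, degree=3):
    """One step of a schematic Elman-style cell whose hidden activation
    is a weighted sum of Chebyshev basis functions of the pre-activation.
    Returns (output, new_context)."""
    pre = w_in * x + w_ctx * context          # scalar pre-activation
    pre = max(-1.0, min(1.0, pre))            # clamp to Chebyshev domain
    basis = chebyshev_basis(pre, degree)
    hidden = sum(w * t for w, t in zip(w_out, basis))
    return hidden, pre                        # feed the pre-activation back

y, ctx = forward(x=0.5, context=0.0, w_in=1.0, w_ctx=0.3,
                 w_out=[0.1, 0.2, 0.3, 0.4], degree=3)
```

    Feeding the previous step back through `context` is what lets such a cell represent the memory effect that static polynomial models miss.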

  18. Improved Modeling and Prediction of Surface Wave Amplitudes

    Science.gov (United States)

    2017-05-31

    AFRL-RV-PS-TR-2017-0162: Improved Modeling and Prediction of Surface Wave Amplitudes. Jeffry L. Stevens, et al., Leidos. Contract number FA9453-14-C-0225.

  19. Towards improved modeling of steel-concrete composite wall elements

    International Nuclear Information System (INIS)

    Vecchio, Frank J.; McQuade, Ian

    2011-01-01

    Highlights: → Improved analysis of double-skinned steel-concrete composite containment walls. → Smeared rotating crack concept applied in the formulation of a new analytical model. → Model implemented in a finite element program; numerically stable and robust. → Models the behavior of shear-critical elements with greater ease and improved accuracy. → Accurate assessments of strength, deformation and failure mode of test specimens. - Abstract: The Disturbed Stress Field Model, a smeared rotating crack model for reinforced concrete based on the Modified Compression Field Theory, is adapted to the analysis of double-skin steel-concrete wall elements. The computational model is then incorporated into a two-dimensional nonlinear finite element analysis algorithm. Verification studies are undertaken by modeling various test specimens, including panel elements subjected to uniaxial compression, panel elements subjected to in-plane shear, and wall specimens subjected to reversed cyclic lateral displacements. In all cases, the analysis model is found to provide accurate calculations of structural load capacities, pre- and post-peak displacement responses, post-peak ductility, chronology of damage, and ultimate failure mode. Minor deficiencies are found with regard to the accurate portrayal of faceplate buckling and the effects of interfacial slip between the faceplates and the concrete. Other aspects of the modeling procedure in need of further research and development are also identified and discussed.

  20. Function Modelling Of The Market And Assessing The Degree Of Similarity Between Real Properties - Dependent Or Independent Procedures In The Process Of Office Property Valuation

    Directory of Open Access Journals (Sweden)

    Barańska Anna

    2015-09-01

    Full Text Available Referring to the two-stage algorithm for real estate valuation developed and presented in previous publications (e.g., Barańska 2011), this article addresses the problem of the relationship between the two stages of the algorithm. An essential part of the first stage is multi-dimensional function modelling of the real estate market. As a result of selecting the model best fitted to the market data, in which the dependent variable is always the price of a real property, a set of market attributes is obtained which in this model are considered to be price-determining. In the second stage, from the collection of real estate which served as a database in the process of estimating model parameters, the objects most similar to the one subject to valuation are selected and form the basis for predicting the final value of the property being valued. Assessing the degree of similarity between real properties can be carried out based on the full spectrum of real estate attributes that potentially affect their value and about which it is possible to gather information, or only on the basis of those attributes which were considered to be price-determining in the function modelling. It can also be performed by various methods. This article examines the effect of these various approaches on the final value of the property obtained using the two-stage prediction. In order to fulfill the study aim as precisely as possible, the results of each calculation step of the algorithm have been investigated in detail. Each of them points to the independence of the two procedures.

  1. Plant water potential improves prediction of empirical stomatal models.

    Directory of Open Access Journals (Sweden)

    William R L Anderegg

    Full Text Available Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, and with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
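
    The kind of down-regulation the abstract describes can be sketched by scaling an empirical conductance model with a leaf-water-potential factor. The Medlyn-style form and the sigmoidal vulnerability curve below are illustrative stand-ins, not the authors' fitted model; all parameter values (`g0`, `g1`, `psi50`, `slope`) are hypothetical.

```python
import math

def medlyn_gs(a_net, vpd_kpa, ca, g0=0.01, g1=4.0):
    """Medlyn-style empirical stomatal conductance (mol m^-2 s^-1).
    a_net: net assimilation (umol m^-2 s^-1), vpd_kpa: vapour pressure
    deficit (kPa), ca: CO2 mole fraction at the leaf surface (umol mol^-1)."""
    return g0 + 1.6 * (1.0 + g1 / math.sqrt(vpd_kpa)) * a_net / ca

def water_potential_factor(psi_leaf, psi50=-2.0, slope=3.0):
    """Sigmoidal down-regulation: ~1 for wet leaves, 0.5 at psi50 (MPa).
    psi50 and slope are illustrative vulnerability parameters."""
    return 1.0 / (1.0 + math.exp(slope * (psi50 - psi_leaf)))

# Conductance with and without drought stress (illustrative inputs).
g_wet = medlyn_gs(15.0, 1.5, 400.0) * water_potential_factor(-0.5)
g_dry = medlyn_gs(15.0, 1.5, 400.0) * water_potential_factor(-3.0)
```

Multiplying by the factor reproduces the qualitative behavior reported in the abstract: unmodified empirical conductance at high (wet) water potentials, strongly reduced conductance at low potentials.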

  2. An Improved QTM Subdivision Model with Approximate Equal-area

    Directory of Open Access Journals (Sweden)

    ZHAO Xuesheng

    2016-01-01

    Full Text Available To overcome the defect of large area deformation in the traditional QTM subdivision model, an improved subdivision model is proposed, based on the "parallel method" and the idea of equal-area subdivision with varying longitude and latitude. By adjusting the position of each parallel, this model ensures that the total grid area between two adjacent parallels remains unchanged, so as to control the area variation and its accumulation in the QTM grid. The experimental results show that this improved model not only retains some advantages of the traditional QTM model (such as simple calculation and a clear correspondence with the longitude/latitude grid, etc.), but also has the following advantages: ①the improved model converges better than the traditional one, with the area_max/min ratio finally converging to 1.38, far less than the 1.73 of the "parallel method"; ②the grid units in middle and low latitude regions have small area variations and successive distributions, and with increasing subdivision level the grid units with large variations gradually concentrate toward the poles; ③the area variation of a grid unit does not accumulate with increasing subdivision level.
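
    The idea of placing parallels so that the zones between them have equal area can be illustrated with the spherical-zone formula A = 2πR²(sin φ₂ − sin φ₁): equal areas between parallels correspond to equal steps in sin(latitude). This is a generic sketch of that geometric fact, not the paper's QTM algorithm.

```python
import math

def equal_area_parallels(n_bands):
    """Latitudes (deg) splitting one hemisphere into n_bands zones of equal
    spherical area. Equal area <=> equal increments in sin(latitude)."""
    return [math.degrees(math.asin(k / n_bands)) for k in range(n_bands + 1)]

def zone_area(phi1_deg, phi2_deg, radius=1.0):
    """Area of the spherical zone between two parallels,
    A = 2*pi*R^2*(sin(phi2) - sin(phi1))."""
    return 2.0 * math.pi * radius ** 2 * (
        math.sin(math.radians(phi2_deg)) - math.sin(math.radians(phi1_deg)))

lats = equal_area_parallels(4)                       # 0 .. 90 degrees
areas = [zone_area(a, b) for a, b in zip(lats, lats[1:])]
```

All four zone areas come out identical up to rounding, which is the "no variation between adjacent parallels" property the abstract aims for.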

  3. Fast business process similarity search

    NARCIS (Netherlands)

    Yan, Z.; Dijkman, R.M.; Grefen, P.W.P.J.

    2012-01-01

    Nowadays, it is common for organizations to maintain collections of hundreds or even thousands of business processes. Techniques exist to search through such a collection for business process models that are similar to a given query model. However, those techniques compare the query model to each

  4. An Improved Model for FE Modeling and Simulation of Closed Cell Al-Alloy Foams

    OpenAIRE

    Hasan, MD. Anwarul

    2010-01-01

    Cell wall material properties of Al-alloy foams have been derived by a combination of nanoindentation experiment and numerical simulation. Using the derived material properties in FE (finite element) modeling of foams, the existing constitutive models of closed-cell Al-alloy foams have been evaluated against experimental results. An improved representative model has been proposed for FE analysis of closed-cell Al-alloy foams. The improved model consists of a combination of spherical and cruci...

  5. Study on Software Quality Improvement based on Rayleigh Model and PDCA Model

    OpenAIRE

    Ning Jingfeng; Hu Ming

    2013-01-01

    As the software industry gradually matures, software quality is regarded as the life of a software enterprise. This article discusses how to improve the quality of software by applying the Rayleigh model and the PDCA model to software quality management, combined with the defect removal effectiveness index; it uses the PDCA model to solve the problem of quality management objectives when using the Rayleigh model in bidirectional quality improvement strategies of software quality management, a...

  6. Improving Hydrological Models of The Netherlands Using ALOS PALSAR

    NARCIS (Netherlands)

    Dekker, R.J.; Schuurmans, J.M.; Berendrecht, W.L.; Borren, W.; Ven, T.J.M. van de; Westerhoff, R.S.

    2010-01-01

    In this paper the improvement of the hydrological model metaSWAP of The Netherlands, with respect to soil moisture, is studied using remote sensing data. Therefore we investigate the value of ALOS PALSAR data of 2007 in combination with the method of Dubois et al. [1] for measuring volumetric

  7. Improving Project Management Using Formal Models and Architectures

    Science.gov (United States)

    Kahn, Theodore; Sturken, Ian

    2011-01-01

    This talk discusses the advantages that formal modeling and architecture bring to project management. These emerging technologies have both great potential and challenges for improving the information available for decision-making. The presentation covers the standards, tools and cultural issues needing consideration, and includes lessons learned from projects the presenters have worked on.

  8. Improving Perovskite Solar Cells: Insights From a Validated Device Model

    NARCIS (Netherlands)

    Sherkar, Tejas S.; Momblona, Cristina; Gil-Escrig, Lidon; Bolink, Henk J.; Koster, L. Jan Anton

    2017-01-01

    To improve the efficiency of existing perovskite solar cells (PSCs), a detailed understanding of the underlying device physics during their operation is essential. Here, a device model has been developed and validated that describes the operation of PSCs and quantitatively explains the role of

  9. Guiding and Modelling Quality Improvement in Higher Education Institutions

    Science.gov (United States)

    Little, Daniel

    2015-01-01

    The article considers the process of creating quality improvement in higher education institutions from the point of view of current organisational theory and social-science modelling techniques. The author considers the higher education institution as a functioning complex of rules, norms and other organisational features and reviews the social…

  10. Promoting Continuous Quality Improvement in Online Teaching: The META Model

    Science.gov (United States)

    Dittmar, Eileen; McCracken, Holly

    2012-01-01

    Experienced e-learning faculty members share strategies for implementing a comprehensive postsecondary faculty development program essential to continuous improvement of instructional skills. The high-impact META Model (centered around Mentoring, Engagement, Technology, and Assessment) promotes information sharing and content creation, and fosters…

  11. The Continuous Improvement Model: A K-12 Literacy Focus

    Science.gov (United States)

    Brown, Jennifer V.

    2013-01-01

    The purpose of the study was to determine if the eight steps of the Continuous Improvement Model (CIM) provided a framework to raise achievement and to focus educators in identifying high-yield literacy strategies. This study sought to determine if an examination of the assessment data in reading revealed differences among schools that fully,…

  12. an improved structural model for seismic analysis of tall frames

    African Journals Online (AJOL)

    Dr Obe

    ABSTRACT. This paper proposed and examined an improved structural model ... The equation of motion of a multi-storey building shown in fig. 2 can be ... The response of the nth mode at any time t of the MDOF system demands the solution of ...

  13. Improvement of a near wake model for trailing vorticity

    International Nuclear Information System (INIS)

    Pirrung, G R; Hansen, M H; Madsen, H A

    2014-01-01

    A near wake model, originally proposed by Beddoes, is further developed. The purpose of the model is to account for the radially dependent time constants of the fast aerodynamic response and to provide a tip loss correction. It is based on lifting line theory and models the downwash due to roughly the first 90 degrees of rotation. This restriction of the model to the near wake allows for using a computationally efficient indicial function algorithm. The aim of this study is to improve the accuracy of the downwash close to the root and tip of the blade and to decrease the sensitivity of the model to temporal discretization, both regarding numerical stability and quality of the results. The modified near wake model is coupled to an aerodynamics model, which consists of a blade element momentum model with dynamic inflow for the far wake and a 2D shed vorticity model that simulates the unsteady buildup of both lift and circulation in the attached flow region. The near wake model is validated against the test case of a finite wing with constant elliptical bound circulation. An unsteady simulation of the NREL 5 MW rotor shows the functionality of the coupled model

  14. Studies of a general flat space/boson star transition model in a box through a language similar to holographic superconductors

    Science.gov (United States)

    Peng, Yan

    2017-07-01

    We study a general flat space/boson star transition model in a quasi-local ensemble through approaches familiar from holographic superconductor theories. We manage to find a parameter ψ2, which proves useful in disclosing properties of phase transitions. In this work, we explore the effects of the scalar mass, scalar charge and Stückelberg mechanism on the critical phase transition points and the order of transitions, mainly from the behavior of the parameter ψ2. We note that properties of transitions in quasi-local gravity are strikingly similar to those in holographic superconductor models. We also obtain an analytical relation ψ2 ∝ (μ − μc)^(1/2), which also holds for the condensed scalar operator in the holographic insulator/superconductor system, in accordance with mean field theories.
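
    The mean-field scaling ψ2 ∝ (μ − μc)^(1/2) can be checked numerically by fitting the slope of log ψ2 against log(μ − μc), which should come out as 1/2. The data below are synthetic and the constants (`mu_c`, `a`) are illustrative, not values from the paper.

```python
import math

mu_c = 1.0        # assumed critical chemical potential
a = 0.7           # assumed proportionality constant
mus = [mu_c + 0.01 * k for k in range(1, 40)]
psi2 = [a * (mu - mu_c) ** 0.5 for mu in mus]

# Least-squares slope of log(psi2) versus log(mu - mu_c).
xs = [math.log(mu - mu_c) for mu in mus]
ys = [math.log(p) for p in psi2]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
```

Recovering a slope of 0.5 from the log-log fit is the standard way to read off a mean-field critical exponent from numerical condensate data.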

  15. Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models

    Science.gov (United States)

    Marquette, Michele L.; Sognier, Marguerite A.

    2013-01-01

    An improved method for culturing immature muscle cells (myoblasts) into a mature skeletal muscle overcomes some of the notable limitations of prior culture methods. The development of the method is a major advance in tissue engineering in that, for the first time, a cell-based model spontaneously fuses and differentiates into masses of highly aligned, contracting myotubes. This method enables (1) the construction of improved two-dimensional (monolayer) skeletal muscle test beds; (2) development of contracting three-dimensional tissue models; and (3) improved transplantable tissues for biomedical and regenerative medicine applications. With adaptation, this method also offers potential application for production of other tissue types (i.e., bone and cardiac) from corresponding precursor cells.

  16. New Similarity Functions

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Kwasnicka, Halina

    2016-01-01

    spaces, in addition to their similarity in the vector space. Prioritized Weighted Feature Distance (PWFD) works similarly to WFD, but provides the ability to give priorities to desirable features. The accuracy of the proposed functions is compared with that of other similarity functions on several data sets. Our results show that the proposed functions work better than other methods proposed in the literature.

  17. Phoneme Similarity and Confusability

    Science.gov (United States)

    Bailey, T.M.; Hahn, U.

    2005-01-01

    Similarity between component speech sounds influences language processing in numerous ways. Explanation and detailed prediction of linguistic performance consequently requires an understanding of these basic similarities. The research reported in this paper contrasts two broad classes of approach to the issue of phoneme similarity-theoretically…

  18. Development of Improved Mechanistic Deterioration Models for Flexible Pavements

    DEFF Research Database (Denmark)

    Ullidtz, Per; Ertman, Hans Larsen

    1998-01-01

    The paper describes a pilot study in Denmark with the main objective of developing improved mechanistic deterioration models for flexible pavements, based on an accelerated full-scale test on an instrumented pavement in the Danish Road Testing Machine. The study was the first in the "International... Pavement Subgrade Performance Study" sponsored by the Federal Highway Administration (FHWA), USA. The paper describes in detail the data analysis and the resulting models for rutting and roughness, and a model for the plastic strain in the subgrade. The reader will get an understanding of the work needed...

  19. Improved hydrogen combustion model for multi-compartment analysis

    International Nuclear Information System (INIS)

    Ogino, Masao; Hashimoto, Takashi

    2000-01-01

    NUPEC has been improving a hydrogen combustion model in the MELCOR code for severe accident analysis. In the proposed combustion model, the flame velocity in a node is predicted using six different flame front shapes: fireball, prism, bubble, spherical jet, plane jet, and parallelepiped. A verification study of the proposed model was carried out using the NUPEC large-scale combustion test results, following previous work in which the GRS/Battelle multi-compartment combustion test results had been used. The test cases selected for the study were the premixed test and the scenario-oriented test, which simulated the severe accident sequences of an actual plant. The improved MELCOR code incorporating the proposed model predicted both the premixed-test and scenario-oriented-test results of the NUPEC large-scale test sufficiently well, and was confirmed to simulate the combustion behavior in a multi-compartment containment vessel during a severe accident with an acceptable degree of accuracy. Application of the new model to LWR severe accident analysis will be continued. (author)

  20. On improving the communication between models and data.

    Science.gov (United States)

    Dietze, Michael C; Lebauer, David S; Kooper, Rob

    2013-09-01

    The potential for model-data synthesis is growing in importance as we enter an era of 'big data', greater connectivity and faster computation. Realizing this potential requires that the research community broaden its perspective about how and why they interact with models. Models can be viewed as scaffolds that allow data at different scales to inform each other through our understanding of underlying processes. Perceptions of relevance, accessibility and informatics are presented as the primary barriers to broader adoption of models by the community, while an inability to fully utilize the breadth of expertise and data from the community is a primary barrier to model improvement. Overall, we promote a community-based paradigm to model-data synthesis and highlight some of the tools and techniques that facilitate this approach. Scientific workflows address critical informatics issues in transparency, repeatability and automation, while intuitive, flexible web-based interfaces make running and visualizing models more accessible. Bayesian statistics provides powerful tools for assimilating a diversity of data types and for the analysis of uncertainty. Uncertainty analyses enable new measurements to target those processes most limiting our predictive ability. Moving forward, tools for information management and data assimilation need to be improved and made more accessible. © 2013 John Wiley & Sons Ltd.

  1. Improved dual sided doped memristor: modelling and applications

    Directory of Open Access Journals (Sweden)

    Anup Shrivastava

    2014-05-01

    Full Text Available The memristor, a novel and emerging electronic device with a vast range of applications, suffers from a poor frequency response and limited saturation length. In this paper, the authors present a novel and innovative device structure for the memristor with two active layers, together with its non-linear ionic drift model, for an improved frequency response and saturation length. The authors investigated and compared the I–V characteristics of the proposed model with those of conventional memristors and found better results in each case (different window functions) for the proposed dual sided doped memristor. For circuit-level simulation, they developed a SPICE model of the proposed memristor and designed some logic gates based on hybrid complementary metal oxide semiconductor memristive logic (memristor ratioed logic). The proposed memristor yields improved results in terms of noise margin, delay time and dynamic hazards compared with conventional memristors (single active layer memristors).

  2. Improvements on Semi-Classical Distorted-Wave model

    Energy Technology Data Exchange (ETDEWEB)

    Sun Weili; Watanabe, Y.; Kuwata, R. [Kyushu Univ., Fukuoka (Japan); Kohno, M.; Ogata, K.; Kawai, M.

    1998-03-01

    A method of improving the Semi-Classical Distorted Wave (SCDW) model in terms of the Wigner transform of the one-body density matrix is presented. The finite size effect of atomic nuclei can be taken into account by using single particle wave functions for a harmonic oscillator or Woods-Saxon potential, instead of those based on the local Fermi-gas model which were incorporated into the previous SCDW model. We carried out a preliminary SCDW calculation of the 160 MeV (p,p'x) reaction on {sup 90}Zr with the Wigner transform of harmonic oscillator wave functions. It is shown that the presently calculated angular distributions increase remarkably at backward angles compared with the previous ones, and the agreement with the experimental data is improved. (author)

  3. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    Science.gov (United States)

    Zeng, X.; Ye, M.; Wu, J.; WANG, D.

    2017-12-01

    A multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight which represents the plausibility of that model. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed the model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space gradually from the low-likelihood area to the high-likelihood area, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, in order to overcome the computational burden of the large number of repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
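
    The nested sampling idea can be sketched on a one-dimensional toy problem: live points drawn from a uniform prior, the worst point repeatedly replaced by a draw constrained to higher likelihood, and the evidence accumulated from the shrinking prior volume. This is a minimal illustration, not the authors' NSE; simple rejection sampling stands in for the M-H/DREAMzs local sampler, and all numbers are illustrative.

```python
import math, random

random.seed(1)

def loglike(theta, mu=0.5, sigma=0.05):
    """Un-normalised Gaussian log-likelihood peaked inside the unit prior."""
    return -0.5 * ((theta - mu) / sigma) ** 2

def nested_sampling(n_live=200, n_iter=1400):
    """Toy nested sampling with a U(0,1) prior; returns the evidence Z."""
    live = [random.random() for _ in range(n_live)]
    logls = [loglike(t) for t in live]
    z, x_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = min(range(n_live), key=lambda j: logls[j])
        l_min = logls[worst]
        x_i = math.exp(-i / n_live)            # expected prior-volume shrinkage
        z += math.exp(l_min) * (x_prev - x_i)  # trapezoid-style accumulation
        x_prev = x_i
        while True:                            # rejection-sample L > l_min
            t = random.random()
            if loglike(t) > l_min:
                live[worst], logls[worst] = t, loglike(t)
                break
    # contribution of the remaining live points
    z += x_prev * sum(math.exp(l) for l in logls) / n_live
    return z

z = nested_sampling()
# analytic evidence for comparison: integral of exp(loglike) over [0,1],
# approximately sigma * sqrt(2*pi) ~ 0.1253
```

In real NSE the rejection step is the bottleneck, which is exactly why the abstract proposes replacing it with a stronger local sampler such as DREAMzs.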

  4. Methods improvements incorporated into the SAPHIRE ASP models

    International Nuclear Information System (INIS)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.; Smith, C.L.; Rasmuson, D.M.

    1994-01-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methodology, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements

  5. Methods improvements incorporated into the SAPHIRE ASP models

    International Nuclear Information System (INIS)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.

    1995-01-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements

  6. An Improved Nonlinear Five-Point Model for Photovoltaic Modules

    Directory of Open Access Journals (Sweden)

    Sakaros Bogning Dongue

    2013-01-01

    Full Text Available This paper presents an improved nonlinear five-point model capable of analytically describing the electrical behavior of a photovoltaic module for each generic operating condition of temperature and solar irradiance. The models used to replicate the electrical behavior of operating PV modules are usually based on simplifying assumptions which provide a convenient mathematical model that can be used in conventional simulation tools. Unfortunately, these assumptions cause some inaccuracies, and hence unrealistic economic returns are predicted. As an alternative, we used the advantages of a nonlinear analytical five-point model to take into account the non-ideal diode effects and the nonlinear effects, generally ignored, on which PV module operation depends. To verify the capability of our method to fit PV panel characteristics, the procedure was tested on three different panels. Results were compared with the data issued by manufacturers and with the results obtained using the five-parameter model proposed by other authors.
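
    For context, the five-parameter single-diode model the abstract compares against solves an implicit equation for the module current. The sketch below solves it by bisection; it is the classic textbook model, not the paper's five-point model, and every parameter value is illustrative.

```python
import math

def diode_current(v, iph=8.0, i0=1e-9, n=1.3, rs=0.2, rsh=300.0,
                  cells=60, t_kelvin=298.15):
    """Solve I = Iph - I0*(exp((V+I*Rs)/(n*Ns*Vt)) - 1) - (V+I*Rs)/Rsh
    for the module current I (A) by bisection. Values are illustrative."""
    vt = 1.380649e-23 * t_kelvin / 1.602176634e-19   # thermal voltage kT/q

    def residual(i):
        vd = v + i * rs
        return iph - i0 * (math.exp(vd / (n * cells * vt)) - 1) - vd / rsh - i

    lo, hi = -1.0, iph + 1.0          # residual is monotone decreasing in i
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

isc = diode_current(0.0)    # short-circuit current, close to iph
i30 = diode_current(30.0)   # current at 30 V, slightly lower
```

Sweeping `v` from 0 to the open-circuit voltage traces the full I-V curve, which is what both the five-parameter and the improved five-point model aim to reproduce.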

  7. Estimating Evapotranspiration from an Improved Two-Source Energy Balance Model Using ASTER Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Qifeng Zhuang

    2015-11-01

    Full Text Available Reliably estimating the turbulent fluxes of latent and sensible heat at the Earth's surface by remote sensing is important for research on the terrestrial hydrological cycle. This paper presents a practical approach for mapping surface energy fluxes using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images with an improved two-source energy balance (TSEB) model. The original TSEB approach may overestimate latent heat flux under vegetative stress conditions, as has also been reported in recent research. We replaced the Priestley-Taylor equation used in the original TSEB model with one that uses plant moisture and temperature constraints based on the PT-JPL model to obtain a more accurate canopy latent heat flux for the model solution. The ASTER data and field observations employed in this study were collected over corn fields in arid regions of the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) area, China. The results were validated by measurements from eddy covariance (EC) systems, and the surface energy flux estimates of the improved TSEB model are similar to the ground truth. A comparison of the results from the original and improved TSEB models indicates that the improved method more accurately estimates the sensible and latent heat fluxes, generating more precise daily evapotranspiration (ET) estimates under vegetative stress conditions.
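
    The modification described above amounts to scaling the Priestley-Taylor canopy latent-heat term by constraint factors in [0, 1]. The sketch below uses the standard PT form LE = α f Δ/(Δ+γ) (Rn − G) with a Tetens-type slope of the saturation vapour pressure curve; the constraint factors are simplified placeholders for the PT-JPL moisture and temperature constraints, and the inputs are illustrative.

```python
import math

def slope_svp(t_c):
    """Slope of the saturation vapour pressure curve (kPa/degC, Tetens form)."""
    es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))
    return 4098.0 * es / (t_c + 237.3) ** 2

def canopy_latent_heat(rn_canopy, t_c, f_moisture, f_temp,
                       alpha=1.26, gamma=0.066):
    """Priestley-Taylor canopy LE (W/m^2), down-weighted by constraint
    factors f_moisture, f_temp in [0, 1] (placeholders for PT-JPL terms)."""
    delta = slope_svp(t_c)
    return alpha * f_moisture * f_temp * delta / (delta + gamma) * rn_canopy

le_full = canopy_latent_heat(400.0, 25.0, 1.0, 1.0)   # unstressed canopy
le_half = canopy_latent_heat(400.0, 25.0, 0.5, 1.0)   # moisture-stressed
```

Under stress (`f_moisture` < 1) the canopy latent heat flux drops proportionally, which is precisely how the improved TSEB avoids the overestimation noted in the abstract.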

  8. Improvement of blow down model for LEAP code

    International Nuclear Information System (INIS)

    Itooka, Satoshi; Fujimata, Kazuhiro

    2003-03-01

    At the Japan Nuclear Cycle Development Institute, improvement of the analysis method for overheated tube rupture was studied for sodium-water reaction accidents in the steam generator of a fast breeder reactor, and the heat transfer conditions in the tube were evaluated based on studies of critical heat flux (CHF) and post-CHF heat transfer equations in light water reactors. In this study, the blow down model of the LEAP code was improved, taking the above-mentioned evaluation of heat transfer conditions into consideration. The improvements to the LEAP code were the following: the addition of critical heat flux (CHF) by the formula of Katto and the formula of Tong; the addition of post-CHF heat transfer equations by the formula of Condie-Bengston IV and the formula of Groeneveld 5.9; the extension of the physical properties of water and steam to the critical conditions of water; the expansion of the total number of sections and improvement of the input form; and the addition of a function to control the valve setting by a PID control model. Calculations and verification were performed with the improved LEAP code in order to confirm the code functions. (author)

  9. Renewing the Respect for Similarity

    Directory of Open Access Journals (Sweden)

    Shimon eEdelman

    2012-07-01

    Full Text Available In psychology, the concept of similarity has traditionally evoked a mixture of respect, stemming from its ubiquity and intuitive appeal, and concern, due to its dependence on the framing of the problem at hand and on its context. We argue for a renewed focus on similarity as an explanatory concept, by surveying established results and new developments in the theory and methods of similarity-preserving associative lookup and dimensionality reduction, critical components of many cognitive functions, as well as of intelligent data management in computer vision. We focus in particular on the growing family of algorithms that support associative memory by performing hashing that respects local similarity, and on the uses of similarity in representing structured objects and scenes. Insofar as these similarity-based ideas and methods are useful in cognitive modeling and in AI applications, they should be included in the core conceptual toolkit of computational neuroscience.
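
    A canonical member of the family of "hashing that respects local similarity" is random-hyperplane (sign) hashing, whose per-bit collision probability is 1 − θ/π for vectors at angle θ. The sketch below is a generic illustration of that technique, not an algorithm from the paper; the example vectors are arbitrary.

```python
import random

random.seed(0)

def make_hyperplanes(n_bits, dim):
    """One random Gaussian hyperplane normal per signature bit."""
    return [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def sign_hash(vec, planes):
    """Binary signature: the sign of the projection on each hyperplane."""
    return tuple(1 if sum(p * x for p, x in zip(plane, vec)) >= 0 else 0
                 for plane in planes)

def hamming_sim(h1, h2):
    """Fraction of matching bits; estimates 1 - angle/pi."""
    return sum(a == b for a, b in zip(h1, h2)) / len(h1)

planes = make_hyperplanes(256, 5)
a = [1.0, 0.2, 0.0, 0.5, 0.1]
b = [0.9, 0.3, 0.1, 0.4, 0.0]    # nearly parallel to a
c = [-1.0, 0.5, 2.0, -0.3, 0.9]  # points away from a

sim_ab = hamming_sim(sign_hash(a, planes), sign_hash(b, planes))
sim_ac = hamming_sim(sign_hash(a, planes), sign_hash(c, planes))
```

Because nearby vectors agree on most sign bits, signatures support fast associative lookup: compare short bit strings instead of full vectors.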

  10. An improved steam generator model for the SASSYS code

    International Nuclear Information System (INIS)

    Pizzica, P.A.

    1989-01-01

    A new steam generator model has been developed for the SASSYS computer code, which analyzes accident conditions in a liquid metal cooled fast reactor. It has been incorporated into the new SASSYS balance-of-plant model, but it can also function on a stand-alone basis. The steam generator can be used in a once-through mode, or a variant of the model can be used as a separate evaporator and a superheater with a recirculation loop. The new model provides for an exact steady-state solution as well as the transient calculation. There was a need for a faster and more flexible model than the old steam generator model. The new model provides more detail with its multi-node treatment, as opposed to the previous model's one-node-per-region approach. Numerical instability problems, which were the result of cell-centered spatial differencing, fully explicit time differencing, and the moving-boundary treatment of the boiling crisis point in the boiling region, have been reduced. This leads to an increase in speed, as larger time steps can now be taken. The new model is an improvement in many respects. 2 refs., 3 figs

  11. An improved active contour model for glacial lake extraction

    Science.gov (United States)

    Zhao, H.; Chen, F.; Zhang, M.

    2017-12-01

    Active contour models are widely used in visual tracking and image segmentation. Driven by an objective function, the initial curve defined in an active contour model evolves to a stable condition, the desired result in the given image. As a typical region-based active contour model, the C-V model detects weak boundaries well and is robust to noise, which shows great potential for glacial lake extraction. Glacial lakes are sensitive indicators of global climate change, so accurately delineating glacial lake boundaries is essential to evaluating the hydrologic and living environment. However, the current methods of glacial lake extraction, mainly water-index and classification methods, are difficult to apply directly to large-scale glacial lake extraction because of the diversity of glacial lakes and the many confounding factors in the imagery, such as image noise, shadows, snow and ice, etc. Given the above-mentioned advantages of the C-V model and the difficulties of glacial lake extraction, we introduce the signed pressure force function to improve the C-V model and adapt it to glacial lake extraction. To inspect the quality of the extraction results, three typical glacial lake development sites were selected, including the Altai mountains, the Central Himalayas, and South-eastern Tibet; Landsat8 OLI imagery served as the experimental data source, with Google Earth imagery as reference data for verifying the results. The experimental results suggest that the improved active contour model we propose can effectively discriminate glacial lakes from complex backgrounds with a higher Kappa coefficient (0.895), especially for small glacial lakes that constitute weak information in the image. Our findings provide a new approach to improved accuracy where small glacial lakes predominate, and the possibility of automated glacial lake mapping over large areas.
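
    A common definition of the signed pressure force in region-based models of this family is spf(x) = (I(x) − (c1 + c2)/2) / max|I − (c1 + c2)/2|, where c1 and c2 are the mean intensities inside and outside the current curve; the sign pushes the contour outward in bright regions and inward in dark ones. The sketch below implements that generic definition on a toy image; it is not the paper's exact formulation.

```python
def signed_pressure_force(image, inside_mask):
    """image: 2-D list of intensities; inside_mask: 2-D list of 0/1 flags
    marking pixels inside the current contour. Returns the SPF map in [-1, 1]."""
    ins = [v for r, row in enumerate(image)
           for c, v in enumerate(row) if inside_mask[r][c]]
    outs = [v for r, row in enumerate(image)
            for c, v in enumerate(row) if not inside_mask[r][c]]
    c1 = sum(ins) / len(ins)        # mean intensity inside the curve
    c2 = sum(outs) / len(outs)      # mean intensity outside the curve
    mid = 0.5 * (c1 + c2)
    peak = max(abs(v - mid) for row in image for v in row) or 1.0
    return [[(v - mid) / peak for v in row] for row in image]

# Toy 2x2 image: bright top row (inside), dark bottom row (outside).
spf = signed_pressure_force([[0.75, 0.75], [0.25, 0.25]], [[1, 1], [0, 0]])
```

Pixels brighter than the inside/outside midpoint get positive pressure and darker pixels get negative pressure, which is what drives the curve toward lake boundaries.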

  12. MODEL OF IMPROVING ENVIRONMENTAL MANAGEMENT SYSTEM BY MULTI - SOFTWARE

    Directory of Open Access Journals (Sweden)

    Jelena Jovanovic

    2009-03-01

Full Text Available This paper is based on a doctoral dissertation oriented toward improving an environmental management system using multi-software. The dissertation draws on the key results of a master's thesis on quantifying environmental aspects and impacts in organizations with an artificial neural network. This paper recommends improving the environmental management system in an organization using the Balanced Scorecard (BSC) model and the MCDM method AHP (Analytic Hierarchy Process) based on group decision-making. The BSC would be extended with elements of the environmental management system and used in the area of strategic management, and AHP would be used to check the results obtained by quantifying environmental aspects and impacts.
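The AHP step mentioned above can be illustrated with a short sketch: priority weights are the principal eigenvector of a pairwise comparison matrix, and a consistency index checks the coherence of the judgments. This is the standard textbook AHP computation, not code from the dissertation.

```python
import numpy as np

def ahp_weights(A, tol=1e-10):
    """Priority weights from a pairwise comparison matrix via the
    principal eigenvector (power iteration), plus the consistency index."""
    n = A.shape[0]
    w = np.ones(n) / n
    for _ in range(1000):
        w_new = A @ w
        w_new /= w_new.sum()
        if np.abs(w_new - w).max() < tol:
            break
        w = w_new
    lam = (A @ w / w).mean()      # principal eigenvalue estimate
    ci = (lam - n) / (n - 1)      # consistency index (0 = perfectly consistent)
    return w, ci

# Perfectly consistent matrix built from known weights 0.5 : 0.3 : 0.2
true_w = np.array([0.5, 0.3, 0.2])
A = true_w[:, None] / true_w[None, :]   # A[i, j] = w_i / w_j
w, ci = ahp_weights(A)
```

For a consistent matrix the recovered weights match the generating ratios exactly and the consistency index is zero; in group decision-making, individual matrices are typically aggregated (e.g. by geometric mean) before this step.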

  13. Recent Improvements to the Calibration Models for RXTE/PCA

    Science.gov (United States)

    Jahoda, K.

    2008-01-01

We are updating the calibration of the PCA to correct for slow variations, primarily in the energy-to-channel relationship. We have also improved the physical model in the vicinity of the Xe K-edge, which should increase the reliability of continuum fits above 20 keV. The improvements to the matrix are especially important for simultaneous observations, where the PCA is often used to constrain the continuum while other, higher-resolution spectrometers are used to study the shape of lines and edges associated with iron.

  14. Personalized recommendation with corrected similarity

    International Nuclear Information System (INIS)

    Zhu, Xuzhen; Tian, Hui; Cai, Shimin

    2014-01-01

Personalized recommendation has attracted a surge of interdisciplinary research. In particular, similarity-based methods have achieved great success in real recommendation systems. However, similarities are often overestimated or underestimated, in particular because of the defective strategy of unidirectional similarity estimation. In this paper, we address this drawback by leveraging the mutual correction of forward and backward similarity estimations, and propose a new personalized recommendation index, corrected similarity based inference (CSI). Extensive experiments on four benchmark datasets show that CSI yields a clear improvement over mainstream baselines. A detailed analysis is presented to unveil and understand the origin of the differences between CSI and mainstream indices. (paper)
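The paper's exact CSI formula is not given in this abstract. As a purely illustrative sketch of the general idea, the code below computes a forward (unidirectional) similarity between users, its backward counterpart, and combines them by a geometric mean, which makes the estimate symmetric by construction. The combination rule and all names here are assumptions, not the published method.

```python
import numpy as np

def asymmetric_sim(adj):
    """Forward similarity s[a, b]: fraction of a's items also chosen by b.
    Asymmetric because it normalizes by a's degree only."""
    overlap = adj @ adj.T                       # common items per user pair
    deg = adj.sum(axis=1, keepdims=True)        # items chosen by each user
    return overlap / np.maximum(deg, 1)

def corrected_sim(adj):
    """Mutual correction (illustrative): geometric mean of the forward
    and backward estimates, symmetric by construction."""
    s = asymmetric_sim(adj)
    return np.sqrt(s * s.T)

adj = np.array([[1, 1, 1, 0],    # user 0 chose items 0-2
                [1, 1, 0, 0],    # user 1 chose items 0-1
                [0, 0, 1, 1]])   # user 2 chose items 2-3
S = corrected_sim(adj)
```

Note how the forward estimate alone rates user 1 as maximally similar to user 0 (all of user 1's items are shared) while the reverse direction gives only 2/3; the symmetric combination sits between the two.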

  15. Modeling task-specific neuronal ensembles improves decoding of grasp

    Science.gov (United States)

    Smith, Ryan J.; Soares, Alcimar B.; Rouse, Adam G.; Schieber, Marc H.; Thakor, Nitish V.

    2018-06-01

    Objective. Dexterous movement involves the activation and coordination of networks of neuronal populations across multiple cortical regions. Attempts to model firing of individual neurons commonly treat the firing rate as directly modulating with motor behavior. However, motor behavior may additionally be associated with modulations in the activity and functional connectivity of neurons in a broader ensemble. Accounting for variations in neural ensemble connectivity may provide additional information about the behavior being performed. Approach. In this study, we examined neural ensemble activity in primary motor cortex (M1) and premotor cortex (PM) of two male rhesus monkeys during performance of a center-out reach, grasp and manipulate task. We constructed point process encoding models of neuronal firing that incorporated task-specific variations in the baseline firing rate as well as variations in functional connectivity with the neural ensemble. Models were evaluated both in terms of their encoding capabilities and their ability to properly classify the grasp being performed. Main results. Task-specific ensemble models correctly predicted the performed grasp with over 95% accuracy and were shown to outperform models of neuronal activity that assume only a variable baseline firing rate. Task-specific ensemble models exhibited superior decoding performance in 82% of units in both monkeys (p  <  0.01). Inclusion of ensemble activity also broadly improved the ability of models to describe observed spiking. Encoding performance of task-specific ensemble models, measured by spike timing predictability, improved upon baseline models in 62% of units. Significance. These results suggest that additional discriminative information about motor behavior found in the variations in functional connectivity of neuronal ensembles located in motor-related cortical regions is relevant to decode complex tasks such as grasping objects, and may serve the basis for more
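The full point-process ensemble model is beyond a short example, but the baseline idea of classifying a grasp by the likelihood of task-specific firing-rate models can be sketched with independent Poisson units on synthetic data. The task names, rates, and data below are invented for illustration; this corresponds to the "variable baseline firing rate" baseline, not the ensemble-connectivity model.

```python
import numpy as np

def fit_rates(counts_by_task):
    """Per-task mean firing rate for each unit (baseline-rate model)."""
    return {t: c.mean(axis=0) for t, c in counts_by_task.items()}

def classify(trial, rates):
    """Pick the task whose Poisson rates maximize the trial log-likelihood.
    The log(k!) term is task-independent and can be dropped."""
    def loglik(lam):
        lam = np.maximum(lam, 1e-9)
        return np.sum(trial * np.log(lam) - lam)
    return max(rates, key=lambda t: loglik(rates[t]))

rng = np.random.default_rng(0)
# Two hypothetical grasp types, two recorded units with different tuning.
true_rates = {"power": np.array([20.0, 5.0]), "precision": np.array([5.0, 20.0])}
train = {t: rng.poisson(r, size=(50, 2)) for t, r in true_rates.items()}

rates = fit_rates(train)
test_trial = np.array([22, 4])        # spike counts from a held-out "power" trial
pred = classify(test_trial, rates)
```

The task-specific ensemble models in the paper extend exactly this likelihood comparison by also letting the coupling to other units' spiking vary with the task.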

  16. Using Patient Health Questionnaire-9 item parameters of a common metric resulted in similar depression scores compared to independent item response theory model reestimation.

    Science.gov (United States)

    Liegl, Gregor; Wahl, Inka; Berghöfer, Anne; Nolte, Sandra; Pieh, Christoph; Rose, Matthias; Fischer, Felix

    2016-03-01

    To investigate the validity of a common depression metric in independent samples. We applied a common metrics approach based on item-response theory for measuring depression to four German-speaking samples that completed the Patient Health Questionnaire (PHQ-9). We compared the PHQ item parameters reported for this common metric to reestimated item parameters that derived from fitting a generalized partial credit model solely to the PHQ-9 items. We calibrated the new model on the same scale as the common metric using two approaches (estimation with shifted prior and Stocking-Lord linking). By fitting a mixed-effects model and using Bland-Altman plots, we investigated the agreement between latent depression scores resulting from the different estimation models. We found different item parameters across samples and estimation methods. Although differences in latent depression scores between different estimation methods were statistically significant, these were clinically irrelevant. Our findings provide evidence that it is possible to estimate latent depression scores by using the item parameters from a common metric instead of reestimating and linking a model. The use of common metric parameters is simple, for example, using a Web application (http://www.common-metrics.org) and offers a long-term perspective to improve the comparability of patient-reported outcome measures. Copyright © 2016 Elsevier Inc. All rights reserved.
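The common-metric idea, scoring respondents with fixed item parameters instead of reestimating them, can be sketched for a generalized partial credit model: compute category probabilities from given parameters and estimate the latent score by expected a-posteriori (EAP) on a grid. The item parameters below are invented, not the published PHQ-9 common-metric values.

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Generalized partial credit model: P(category k | theta) for one item
    with discrimination a and step difficulties b (len = n_categories - 1)."""
    steps = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(b)))))
    e = np.exp(steps - steps.max())           # stable softmax over categories
    return e / e.sum()

def eap_score(responses, items, grid=np.linspace(-4, 4, 81)):
    """Expected a-posteriori latent score under a standard-normal prior,
    with item parameters held fixed (the 'common metric' idea)."""
    prior = np.exp(-grid ** 2 / 2)
    like = np.ones_like(grid)
    for x, (a, b) in zip(responses, items):
        like *= np.array([gpcm_probs(t, a, b)[x] for t in grid])
    post = prior * like
    post /= post.sum()
    return float((grid * post).sum())

# Two hypothetical PHQ-9-like items, four response categories (0-3) each.
items = [(1.2, [-1.0, 0.0, 1.0]), (0.9, [-0.5, 0.5, 1.5])]
low = eap_score([0, 0], items)     # all-lowest responses -> low depression score
high = eap_score([3, 3], items)    # all-highest responses -> high depression score
```

Because the item parameters are taken as given, two groups scored this way are on the same scale without any linking step, which is precisely the convenience the study evaluates.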

  17. Biodiversity and Climate Modeling Workshop Series: Identifying gaps and needs for improving large-scale biodiversity models

    Science.gov (United States)

    Weiskopf, S. R.; Myers, B.; Beard, T. D.; Jackson, S. T.; Tittensor, D.; Harfoot, M.; Senay, G. B.

    2017-12-01

    At the global scale, well-accepted global circulation models and agreed-upon scenarios for future climate from the Intergovernmental Panel on Climate Change (IPCC) are available. In contrast, biodiversity modeling at the global scale lacks analogous tools. While there is great interest in development of similar bodies and efforts for international monitoring and modelling of biodiversity at the global scale, equivalent modelling tools are in their infancy. This lack of global biodiversity models compared to the extensive array of general circulation models provides a unique opportunity to bring together climate, ecosystem, and biodiversity modeling experts to promote development of integrated approaches in modeling global biodiversity. Improved models are needed to understand how we are progressing towards the Aichi Biodiversity Targets, many of which are not on track to meet the 2020 goal, threatening global biodiversity conservation, monitoring, and sustainable use. We brought together biodiversity, climate, and remote sensing experts to try to 1) identify lessons learned from the climate community that can be used to improve global biodiversity models; 2) explore how NASA and other remote sensing products could be better integrated into global biodiversity models and 3) advance global biodiversity modeling, prediction, and forecasting to inform the Aichi Biodiversity Targets, the 2030 Sustainable Development Goals, and the Intergovernmental Platform on Biodiversity and Ecosystem Services Global Assessment of Biodiversity and Ecosystem Services. The 1st In-Person meeting focused on determining a roadmap for effective assessment of biodiversity model projections and forecasts by 2030 while integrating and assimilating remote sensing data and applying lessons learned, when appropriate, from climate modeling. Here, we present the outcomes and lessons learned from our first E-discussion and in-person meeting and discuss the next steps for future meetings.

  18. Molecular similarity measures.

    Science.gov (United States)

    Maggiora, Gerald M; Shanmugasundaram, Veerabahu

    2011-01-01

    Molecular similarity is a pervasive concept in chemistry. It is essential to many aspects of chemical reasoning and analysis and is perhaps the fundamental assumption underlying medicinal chemistry. Dissimilarity, the complement of similarity, also plays a major role in a growing number of applications of molecular diversity in combinatorial chemistry, high-throughput screening, and related fields. How molecular information is represented, called the representation problem, is important to the type of molecular similarity analysis (MSA) that can be carried out in any given situation. In this work, four types of mathematical structure are used to represent molecular information: sets, graphs, vectors, and functions. Molecular similarity is a pairwise relationship that induces structure into sets of molecules, giving rise to the concept of chemical space. Although all three concepts - molecular similarity, molecular representation, and chemical space - are treated in this chapter, the emphasis is on molecular similarity measures. Similarity measures, also called similarity coefficients or indices, are functions that map pairs of compatible molecular representations that are of the same mathematical form into real numbers usually, but not always, lying on the unit interval. This chapter presents a somewhat pedagogical discussion of many types of molecular similarity measures, their strengths and limitations, and their relationship to one another. An expanded account of the material on chemical spaces presented in the first edition of this book is also provided. It includes a discussion of the topography of activity landscapes and the role that activity cliffs in these landscapes play in structure-activity studies.
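As a concrete example of a similarity measure on set representations, the Tanimoto (Jaccard) coefficient maps a pair of binary fingerprints into the unit interval; it is the most widely used coefficient for such representations. A minimal sketch with hypothetical fingerprints follows (the bit indices are invented for illustration).

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity of two feature sets: |A ∩ B| / |A ∪ B|.
    Returns a value in [0, 1]; 1 means identical feature sets."""
    a, b = set(fp_a), set(fp_b)
    if not a and not b:
        return 1.0            # convention: two empty fingerprints are identical
    return len(a & b) / len(a | b)

# Hypothetical structural-key fingerprints as sets of "on" bit indices
mol_a = {12, 45, 101, 230}
mol_b = {12, 45, 230, 512}
sim = tanimoto(mol_a, mol_b)      # 3 shared bits / 5 distinct bits = 0.6
dissim = 1.0 - sim                # complementary dissimilarity, as used in diversity work
```

The same pattern extends to vector and function representations by replacing set intersection and union with the corresponding inner products.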

  19. Alterations in endo-lysosomal function induce similar hepatic lipid profiles in rodent models of drug-induced phospholipidosis and Sandhoff disease.

    Science.gov (United States)

    Lecommandeur, Emmanuelle; Baker, David; Cox, Timothy M; Nicholls, Andrew W; Griffin, Julian L

    2017-07-01

    Drug-induced phospholipidosis (DIPL) is characterized by an increase in the phospholipid content of the cell and the accumulation of drugs and lipids inside the lysosomes of affected tissues, including in the liver. Although of uncertain pathological significance for patients, the condition remains a major impediment for the clinical development of new drugs. Human Sandhoff disease (SD) is caused by inherited defects of the β subunit of lysosomal β-hexosaminidases (Hex) A and B, leading to a large array of symptoms, including neurodegeneration and ultimately death by the age of 4 in its most common form. The substrates of Hex A and B, gangliosides GM2 and GA2, accumulate inside the lysosomes of the CNS and in peripheral organs. Given that both DIPL and SD are associated with lysosomes and lipid metabolism in general, we measured the hepatic lipid profiles in rodent models of these two conditions using untargeted LC/MS to examine potential commonalities. Both model systems shared a number of perturbed lipid pathways, notably those involving metabolism of cholesteryl esters, lysophosphatidylcholines, bis(monoacylglycero)phosphates, and ceramides. We report here profound alterations in lipid metabolism in the SD liver. In addition, DIPL induced a wide range of lipid changes not previously observed in the liver, highlighting similarities with those detected in the model of SD and raising concerns that these lipid changes may be associated with underlying pathology associated with lysosomal storage disorders. Copyright © 2017 by the American Society for Biochemistry and Molecular Biology, Inc.

  20. Improved animal models for testing gene therapy for atherosclerosis.

    Science.gov (United States)

    Du, Liang; Zhang, Jingwan; De Meyer, Guido R Y; Flynn, Rowan; Dichek, David A

    2014-04-01

    Gene therapy delivered to the blood vessel wall could augment current therapies for atherosclerosis, including systemic drug therapy and stenting. However, identification of clinically useful vectors and effective therapeutic transgenes remains at the preclinical stage. Identification of effective vectors and transgenes would be accelerated by availability of animal models that allow practical and expeditious testing of vessel-wall-directed gene therapy. Such models would include humanlike lesions that develop rapidly in vessels that are amenable to efficient gene delivery. Moreover, because human atherosclerosis develops in normal vessels, gene therapy that prevents atherosclerosis is most logically tested in relatively normal arteries. Similarly, gene therapy that causes atherosclerosis regression requires gene delivery to an existing lesion. Here we report development of three new rabbit models for testing vessel-wall-directed gene therapy that either prevents or reverses atherosclerosis. Carotid artery intimal lesions in these new models develop within 2-7 months after initiation of a high-fat diet and are 20-80 times larger than lesions in a model we described previously. Individual models allow generation of lesions that are relatively rich in either macrophages or smooth muscle cells, permitting testing of gene therapy strategies targeted at either cell type. Two of the models include gene delivery to essentially normal arteries and will be useful for identifying strategies that prevent lesion development. The third model generates lesions rapidly in vector-naïve animals and can be used for testing gene therapy that promotes lesion regression. These models are optimized for testing helper-dependent adenovirus (HDAd)-mediated gene therapy; however, they could be easily adapted for testing of other vectors or of different types of molecular therapies, delivered directly to the blood vessel wall. Our data also supports the promise of HDAd to deliver long

  1. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    Science.gov (United States)

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

This paper describes a study comparing the real image sensor noise distribution to the noise models often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch in tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmooths real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.
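The tail argument can be reproduced on synthetic data: a two-component Poisson mixture with the same mean as a single Poisson has a larger variance and a heavier upper tail, which is what a quantile analysis would pick up. The mixture weights and rates below are illustrative, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)
n, mean = 200_000, 20.0

# Single Poisson noise at the target mean.
single = rng.poisson(mean, n)

# Two-component Poisson mixture with the same overall mean but heavier tails:
# with prob 0.9 draw at a low rate, with prob 0.1 at a high rate
# (0.9 * 15 + 0.1 * 65 = 20).
rates = np.where(rng.random(n) < 0.9, 15.0, 65.0)
mixture = rng.poisson(rates)

q = 0.999
tail_single = np.quantile(single, q)     # upper-tail quantile of each model
tail_mixture = np.quantile(mixture, q)
```

A denoiser calibrated to the single-Poisson tail would treat the mixture's frequent large excursions as signal and undersmooth, which is the failure mode the paper corrects.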

  2. Improvement for Amelioration Inventory Model with Weibull Distribution

    Directory of Open Access Journals (Sweden)

    Han-Wen Tuan

    2017-01-01

Full Text Available Most inventory models deal with deteriorating items. On the contrary, just a few papers have considered inventory systems in an amelioration environment. We study an amelioration inventory model with a Weibull distribution. However, there are some questionable results in the earlier amelioration paper. We first point out the questionable results in the previous paper, which did not derive the optimal solution, and then provide some improvements. We provide a rigorous analytical treatment of the different cases depending on the size of the shape parameter, and present a detailed numerical example for different ranges of the shape parameter to illustrate that our solution method attains the optimal solution. We developed a new amelioration model and provided a detailed analytical procedure to find the optimal solution. Our findings will help researchers develop their own new inventory models.
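In this literature the amelioration rate is commonly taken as the instantaneous Weibull rate A(t) = αβt^(β−1), so the inventory balance with constant demand d is dI/dt = A(t)·I − d. A sketch that integrates this numerically and checks the β = 1 special case against its closed form follows; the parameter values are illustrative, not taken from the paper.

```python
import math

def simulate_inventory(i0, alpha, beta, demand, t_end, dt=1e-4):
    """Euler integration of dI/dt = A(t)*I - d with Weibull amelioration
    rate A(t) = alpha*beta*t**(beta-1) and constant demand d."""
    i, t = i0, 0.0
    while t < t_end:
        if t > 0:
            rate = alpha * beta * t ** (beta - 1)
        else:
            rate = alpha if beta == 1 else 0.0   # avoid t**(negative) at t = 0
        i += dt * (rate * i - demand)
        t += dt
    return i

# Sanity check: beta = 1 reduces to constant-rate growth, with closed form
# I(t) = (I0 - d/alpha) * exp(alpha*t) + d/alpha.
i0, alpha, demand, t_end = 100.0, 0.05, 2.0, 5.0
numeric = simulate_inventory(i0, alpha, 1.0, demand, t_end)
exact = (i0 - demand / alpha) * math.exp(alpha * t_end) + demand / alpha
```

The shape parameter β controls whether amelioration accelerates (β > 1) or decelerates (β < 1) over time, which is exactly why the optimal policy has to be analyzed case by case.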

  3. PSA Model Improvement Using Maintenance Rule Function Mapping

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Mi Ro [KHNP-CRI, Nuclear Safety Laboratory, Daejeon (Korea, Republic of)

    2011-10-15

The Maintenance Rule (MR) program is, by nature, a performance-based program. Therefore, risk information derived from the Probabilistic Safety Assessment (PSA) model is introduced into the MR program during the safety-significance determination and performance-criteria selection processes. This process also facilitates finding vulnerabilities in currently used PSA models and offers a means of improving them. To find vulnerabilities in an existing PSA model, an initial review determines whether the safety-related MR functions are included in the PSA model. Because safety-related MR functions are related to accident prevention and mitigation, they generally need to be included in the PSA model. In determining the safety significance of each function, quantitative risk importance levels are determined through a process known as mapping PSA model basic events to MR functions. During this process, it is common for inadequate or overlooked models to be uncovered. In this paper, the PSA model and the MR program of Wolsong Unit 1 were used as references.

  4. IMPROVEMENT OF MATHEMATICAL MODELS FOR ESTIMATION OF TRAIN DYNAMICS

    Directory of Open Access Journals (Sweden)

    L. V. Ursulyak

    2017-12-01

Full Text Available Purpose. Using scientific publications, the paper analyzes the mathematical models developed in Ukraine, the CIS countries, and abroad for theoretical studies of train dynamics, and shows the urgency of their further improvement. Methodology. The information base of the research comprised official full-text and abstract databases, scientific works of domestic and foreign scientists, professional periodicals, materials of scientific and practical conferences, and methodological materials of ministries and departments. Analysis of publications on the existing mathematical models used to solve a wide range of problems associated with the study of train dynamics shows the expediency of their application. Findings. The results of these studies were used in: (1) the design of new types of draft gears and air distributors; (2) the development of methods for controlling the movement of conventional and connected trains; (3) the creation of appropriate process flow diagrams; (4) the development of energy-saving methods of train driving; (5) the revision of the Construction Codes and Regulations (SNiP ΙΙ-39.76); (6) the selection of parameters for the autonomous automatic control system, created at DNURT, for an auxiliary locomotive that is part of a connected train; (7) the creation of computer simulators for the training of locomotive drivers; (8) the assessment of the vehicle dynamic indices characterizing traffic safety. Scientists around the world conduct numerical experiments related to the estimation of train dynamics using mathematical models that need to be constantly improved. Originality. The authors presented the main theoretical postulates that allowed them to develop the existing mathematical models for solving problems related to train dynamics. The analysis of scientific articles published in Ukraine, the CIS countries, and abroad allows us to determine the most relevant areas of application of mathematical models. Practical value. The practical value of the results obtained lies in the scientific validity

  5. Do number of days with low back pain and patterns of episodes of pain have similar outcomes in a biopsychosocial prediction model?

    DEFF Research Database (Denmark)

    Lemeunier, N; Leboeuf-Yde, C; Gagey, O

    2016-01-01

PURPOSES: We used two different methods to classify low back pain (LBP) in the general population (1) to assess the overlap of individuals within the different subgroups in those two classifications, and (2) to explore whether the associations between LBP and some selected bio-psychosocial factors are similar regardless of which of the two classifications is used. METHOD: During 1 year, 49- or 50-year-old people from the Danish general population were sent fortnightly automated text messages (SMS-Track) asking them if they had any LBP in the past fortnight. Responses for the whole year were … with a questionnaire at baseline 9 years earlier, were entered into regression models to investigate their associations with the subgroups of the two classifications of LBP, and the results were compared. RESULTS: The percentage of agreement between categories of the two classification systems was above 68 % (Kappa 0…

  6. Factors Affecting Choice in A Multi-Stage Model: The Influence of Saliency and Similarity on Retrieval Set and the Implication of Context Effect on Consideration Set

    Directory of Open Access Journals (Sweden)

    Eric Santosa

    2009-09-01

Full Text Available While it is considered a new paradigm in consumer research, the multi-stage model of consumer decision-making remains unclear as to whether brands are easily retrieved. Likewise, the process of consideration, after particular brands are successfully retrieved, is still in question. This study investigates the effects of saliency and similarity on the ease of retrieval. In addition, referring to studies of context effects, the effects of attraction, compromise, and assimilation are examined to observe whether they contribute to consideration. A within-subject design is employed. Three preliminary studies were first arranged to determine the dominant brands, new entrants, attributes, and other criteria nominated in the experimental study. The results support the hypotheses.

  7. Active surface model improvement by energy function optimization for 3D segmentation.

    Science.gov (United States)

    Azimifar, Zohreh; Mohaddesi, Mahsa

    2015-04-01

This paper proposes an optimized and efficient active surface model that improves the energy functions, the search method, the neighborhood definition, and the resampling criterion. Extracting an accurate surface of a desired object from a number of 3D images using active surfaces and deformable models plays an important role in computer vision, especially in medical image processing. Various powerful segmentation algorithms have been suggested to address the limitations associated with model initialization, poor convergence to surface concavities, and slow convergence rate. This paper proposes a method to improve one of the strongest recent segmentation algorithms, namely the Decoupled Active Surface (DAS) method. We use the gradient of a wavelet edge-extracted image and local phase coherence as external energy, to extract more information from the images, and the curvature integral as internal energy, to focus on extracting high-curvature regions. Similarly, we use resampling of points and a line search for point selection to improve the accuracy of the algorithm. We further employ an estimate of the desired object as the initialization for the active surface model. A number of tests and experiments have been carried out, and the results show improvements in extracted-surface accuracy and computational time compared with the best recent active surface models. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
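A toy version of the sub-model idea can be built with plain linear regressions standing in for PLS: train range-restricted sub-models, then use the full model's rough prediction to route each sample to the appropriate sub-model. The actual ChemCam blending is more elaborate (overlapping ranges, weighted blending), so everything below is a simplified illustration on synthetic data.

```python
import numpy as np

def fit_linear(x, y):
    """Ordinary least squares fit y ≈ m*x + c."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, x):
    return coef[0] * x + coef[1]

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 400)                 # proxy for a spectral feature
# Piecewise-linear "matrix effect": the response changes with composition range.
y = np.where(x < 5, 2.0 * x, 10.0 + 0.5 * (x - 5)) + rng.normal(0, 0.05, x.size)

full = fit_linear(x, y)                     # one model for all compositions
low = fit_linear(x[x < 5], y[x < 5])        # sub-model, low range
high = fit_linear(x[x >= 5], y[x >= 5])     # sub-model, high range

# Route each sample by the full model's rough prediction (y < 10 -> low range),
# analogous to using the full model to decide which sub-model applies.
y_full = predict(full, x)
y_blend = np.where(y_full < 10, predict(low, x), predict(high, x))

rmse = lambda p: np.sqrt(np.mean((p - y) ** 2))
```

The single full model is biased wherever the response bends, while the routed sub-models track each regime, so the blended prediction error drops, mirroring the RMSEP improvements reported for the sub-model method.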

  9. Model improvements for tritium transport in DEMO fuel cycle

    Energy Technology Data Exchange (ETDEWEB)

    Santucci, Alessia, E-mail: alessia.santucci@enea.it [Unità Tecnica Fusione – ENEA C. R. Frascati, Via E. Fermi 45, 00044 Frascati (Roma) (Italy); Tosti, Silvano [Unità Tecnica Fusione – ENEA C. R. Frascati, Via E. Fermi 45, 00044 Frascati (Roma) (Italy); Franza, Fabrizio [Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany)

    2015-10-15

Highlights: • T inventory and permeation of DEMO blankets have been assessed under pulsed operation. • A 1-D model for T transport has been developed for the HCLL DEMO blanket. • The 1-D model evaluated the radial profiles of T partial pressure and T permeation rate. - Abstract: DEMO operation requires a large amount of tritium, which is produced directly inside the reactor by means of Li-based breeders. During its production, recovery, and purification, tritium comes into contact with large surfaces of hot metallic walls; it can therefore permeate through the blanket cooling structure, reach the steam generator, and finally the environment. The development of dedicated simulation tools able to predict tritium losses and inventories is necessary to verify compliance with the accepted tritium environmental releases as well as to guarantee correct machine operation. In this work, the FUS-TPC code is improved by adding the ability to operate in a pulsed regime: results in terms of tritium inventory and losses for three pulsed scenarios are shown. Moreover, the development of a 1-D model considering the radial profile of tritium generation is described. Referring to the inboard segment on the equatorial axis of the helium-cooled lithium–lead (HCLL) blanket, preliminary results of the 1-D model are illustrated: the tritium partial pressure in Li–Pb and the tritium permeation into the cooling and stiffening plates, assuming several permeation reduction factor (PRF) values. Future improvements will consider the application of the model to all segments of different blanket concepts.

  10. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

This paper introduces a model predictive control (MPC) approach for constructing a controller for balancing power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform reference tracking and disturbance rejection in an economically optimal way. The performance function is chosen as a mixture of the ℓ1-norm and a linear weighting to model the economics of the system. Simulations show a significant improvement of the performance of the MPC compared to the current

  11. Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization

    Science.gov (United States)

    Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane

    2003-01-01

The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods are used together with incomplete expressions of the gradients. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and application of this approach to real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet, and a car engine cooling axial fan.

  12. An improved thermal model for the computer code NAIAD

    International Nuclear Information System (INIS)

    Rainbow, M.T.

    1982-12-01

    An improved thermal model, based on the concept of heat slabs, has been incorporated as an option into the thermal hydraulic computer code NAIAD. The heat slabs are one-dimensional thermal conduction models with temperature independent thermal properties which may be internal and/or external to the fluid. Thermal energy may be added to or removed from the fluid via heat slabs and passed across the external boundary of external heat slabs at a rate which is a linear function of the external surface temperatures. The code input for the new option has been restructured to simplify data preparation. A full description of current input requirements is presented
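A heat slab of this kind, one-dimensional conduction with fixed surface temperatures and temperature-independent properties, can be sketched with an explicit finite-difference scheme. This is a generic illustration of the concept, not the NAIAD implementation.

```python
import numpy as np

def heat_slab(t_left, t_right, n=21, alpha=1.0, dx=1.0, steps=5000):
    """Explicit finite-difference solution of 1-D conduction in a slab with
    fixed surface temperatures; dt is chosen inside the stability limit."""
    dt = 0.4 * dx ** 2 / alpha      # stability requires dt <= dx^2 / (2*alpha)
    T = np.zeros(n)
    T[0], T[-1] = t_left, t_right   # boundary temperatures held fixed
    for _ in range(steps):
        # interior update: T_i += Fo * (T_{i+1} - 2*T_i + T_{i-1})
        T[1:-1] += alpha * dt / dx ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T

T = heat_slab(100.0, 0.0)
```

Run long enough, the profile relaxes to the steady-state linear distribution between the two surface temperatures; coupling the surface nodes to a fluid energy equation instead of fixed values gives the heat-slab option described above.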

  13. Process-Improvement Cost Model for the Emergency Department.

    Science.gov (United States)

    Dyas, Sheila R; Greenfield, Eric; Messimer, Sherri; Thotakura, Swati; Gholston, Sampson; Doughty, Tracy; Hays, Mary; Ivey, Richard; Spalding, Joseph; Phillips, Robin

    2015-01-01

    The objective of this report is to present a simplified, activity-based costing approach for hospital emergency departments (EDs) to use with Lean Six Sigma cost-benefit analyses. The cost model complexity is reduced by removing diagnostic and condition-specific costs, thereby revealing the underlying process activities' cost inefficiencies. Examples are provided for evaluating the cost savings from reducing discharge delays and the cost impact of keeping patients in the ED (boarding) after the decision to admit has been made. The process-improvement cost model provides a needed tool in selecting, prioritizing, and validating Lean process-improvement projects in the ED and other areas of patient care that involve multiple dissimilar diagnoses.

  14. Hybrid modeling approach to improve the forecasting capability for the gaseous radionuclide in a nuclear site

    International Nuclear Information System (INIS)

    Jeong, Hyojoon; Hwang, Wontae; Kim, Eunhan; Han, Moonhee

    2012-01-01

Highlights: ► This study aims to improve the reliability of air dispersion modeling. ► Tracer experiments simulating gaseous radionuclides were conducted at a nuclear site. ► The performance of a hybrid model combining ISC with ANFIS was investigated. ► The hybrid modeling approach performs better than a single ISC model. - Abstract: Predicted air concentrations of radioactive materials are important for assessing the environmental impact on public health. In this study, the performance of a hybrid model combining the industrial source complex (ISC) model with an adaptive neuro-fuzzy inference system (ANFIS) for predicting tracer concentrations was investigated. Tracer dispersion experiments were performed to produce field data simulating an accidental release of radioactive material. ANFIS was trained so that the outputs of the ISC model approximate the measured data. Judging from the higher correlation coefficients between the measured and calculated concentrations, the hybrid modeling approach could be an appropriate technique for improving the capability to predict air concentrations of radioactive materials.
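The paper's corrector is an ANFIS, which needs a fuzzy-inference library; purely to illustrate the hybrid structure, the sketch below substitutes an ordinary least-squares linear correction fitted so that physical-model outputs better match measurements:

```python
# Sketch of the hybrid idea with a stand-in corrector: a physical dispersion
# model's raw predictions are post-corrected by a model fitted against
# measurements. (The paper uses ANFIS; here least squares on y = a*x + b
# plays that role purely for illustration.)

def fit_corrector(model_out, measured):
    """Least-squares a, b minimising sum((a*x + b - y)**2)."""
    n = len(model_out)
    mx = sum(model_out) / n
    my = sum(measured) / n
    sxx = sum((x - mx) ** 2 for x in model_out)
    sxy = sum((x - mx) * (y - my) for x, y in zip(model_out, measured))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# synthetic example: the raw model underpredicts roughly by a factor of two
raw = [1.0, 2.0, 3.0, 4.0, 5.0]     # raw dispersion-model concentrations
obs = [2.1, 4.0, 6.2, 7.9, 10.1]    # measured tracer concentrations
a, b = fit_corrector(raw, obs)
corrected = [a * x + b for x in raw]
```

The trained correction is then applied to new raw-model outputs, which is the essential structure of the ISC-plus-learned-corrector hybrid.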

  15. Improved Statistical Model Of 10.7-cm Solar Radiation

    Science.gov (United States)

    Vedder, John D.; Tabor, Jill L.

    1993-01-01

Improved mathematical model simulates short-term fluctuations of flux of 10.7-cm-wavelength solar radiation during 91-day averaging period. Called "F10.7 flux", important as measure of solar activity and because it is highly correlated with ultraviolet radiation causing fluctuations in heating and density of upper atmosphere. F10.7 flux easily measurable at surface of Earth.

  16. Coastal Improvements for Tide Models: The Impact of ALES Retracker

    Directory of Open Access Journals (Sweden)

    Gaia Piccioni

    2018-05-01

Full Text Available Since the launch of the first altimetry satellites, ocean tide models have been improved dramatically for deep and shallow waters. However, issues are still found for areas of great interest for climate change investigations: the coastal regions. The purpose of this study is to analyze the influence of the ALES coastal retracker on tide modeling in these regions with respect to a standard open ocean retracker. The approach used to compute the tidal constituents is an updated and along-track version of the Empirical Ocean Tide model developed at DGFI-TUM. The major constituents are derived from a least-squares harmonic analysis of sea level residuals based on the FES2014 tide model. The results obtained with ALES are compared with the ones estimated with the standard product. A lower fitting error is found for the ALES solution, especially for distances closer than 20 km from the coast. In comparison with in situ data, the root mean squared error computed with ALES can reach an improvement larger than 2 cm at single locations, with an average impact of over 10% for tidal constituents K2, O1, and P1. For Q1, the improvement is over 25%. It was observed that improvements to the root-sum-square differences are larger for distances closer than 10 km to the coast, independently of the sea state. Finally, the performance of the solutions changes according to the satellite's flight direction: for tracks approaching land from open ocean, root mean square differences larger than 1 cm are found in comparison to tracks going from land to ocean.
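The least-squares harmonic analysis step can be sketched for a single constituent on synthetic data (the actual EOT/FES2014 processing handles many constituents and along-track corrections; the series below is invented):

```python
import math

# Sketch of least-squares harmonic analysis for one tidal constituent:
# sea-level residuals eta(t) ~ A*cos(omega*t - g) are fitted as
# a*cos(omega*t) + b*sin(omega*t), then A = hypot(a, b), g = atan2(b, a).

def fit_constituent(times_h, eta, period_h):
    w = 2 * math.pi / period_h
    scc = sum(math.cos(w*t)**2 for t in times_h)
    sss = sum(math.sin(w*t)**2 for t in times_h)
    scs = sum(math.cos(w*t)*math.sin(w*t) for t in times_h)
    syc = sum(y*math.cos(w*t) for t, y in zip(times_h, eta))
    sys_ = sum(y*math.sin(w*t) for t, y in zip(times_h, eta))
    det = scc*sss - scs*scs            # 2x2 normal equations
    a = (syc*sss - sys_*scs) / det
    b = (scc*sys_ - scs*syc) / det
    return math.hypot(a, b), math.atan2(b, a)   # amplitude, phase (rad)

M2 = 12.4206012                        # M2 period in hours
t = [i * 0.5 for i in range(480)]      # 10 days of half-hourly samples
true_A, true_g = 0.8, 1.1
eta = [true_A * math.cos(2*math.pi/M2 * ti - true_g) for ti in t]
A, g = fit_constituent(t, eta, M2)     # recovers the synthetic A and g
```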

  17. An Effective Model for Improving Global Health Nursing Competence

    OpenAIRE

    Sunjoo Kang

    2016-01-01

    This paper proposed an effective model for improving global health nursing competence among undergraduate students. A descriptive case study was conducted by evaluation of four implemented programs by the author. All programs were conducted with students majoring in nursing and healthcare, where the researcher was a program director, professor, or facilitator. These programs were analyzed in terms of students’ needs assessment, program design, and implementation and evaluation factors. The co...

  18. Similarity Measure of Graphs

    Directory of Open Access Journals (Sweden)

    Amine Labriji

    2017-07-01

Full Text Available Identifying the similarity of graphs is a highly active research field in the semantic Web, artificial intelligence, shape recognition, and information retrieval. One of the fundamental problems of graph databases is finding the graphs most similar to a query graph. Existing approaches to this problem are usually based on the nodes and arcs of the two graphs, regardless of parental semantic links. For instance, a common connection is not counted toward the similarity of two graphs in cases such as two graphs without common concepts, measures based on the union of two graphs, measures based on the maximum common subgraph (MCS), or the graph edit distance. This leads to inadequate results in the context of information retrieval. To overcome this problem, we suggest a new measure of similarity between graphs, based on the similarity measure of Wu and Palmer. We show that this new measure satisfies the properties of a similarity measure, and we apply it to examples. The results show that our measure runs faster than existing approaches. In addition, we compared the relevance of the similarity values obtained; this new graph measure proves advantageous and offers a contribution to solving the problem mentioned above.
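The Wu and Palmer measure the authors build on can be sketched on a toy is-a taxonomy (hypothetical concepts): sim(c1, c2) = 2·depth(LCS) / (depth(c1) + depth(c2)), where LCS is the deepest shared ancestor.

```python
# Sketch of the Wu and Palmer concept similarity on a tiny hypothetical
# is-a taxonomy: sim(c1, c2) = 2 * depth(lcs) / (depth(c1) + depth(c2)).

PARENT = {                      # child -> parent (root has parent None)
    "animal": None,
    "mammal": "animal", "bird": "animal",
    "dog": "mammal", "cat": "mammal", "sparrow": "bird",
}

def ancestors(c):               # c itself, then up to the root, in order
    out = [c]
    while PARENT[c] is not None:
        c = PARENT[c]
        out.append(c)
    return out

def depth(c):                   # root has depth 1
    return len(ancestors(c))

def wu_palmer(c1, c2):
    anc1 = set(ancestors(c1))
    # first ancestor of c2 also above c1 is the deepest shared ancestor
    lcs = next(a for a in ancestors(c2) if a in anc1)
    return 2 * depth(lcs) / (depth(c1) + depth(c2))

# wu_palmer("dog", "cat")     -> 2*2/(3+3) = 0.666...
# wu_palmer("dog", "sparrow") -> 2*1/(3+3) = 0.333...
```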

  19. Processes of Similarity Judgment

    Science.gov (United States)

    Larkey, Levi B.; Markman, Arthur B.

    2005-01-01

    Similarity underlies fundamental cognitive capabilities such as memory, categorization, decision making, problem solving, and reasoning. Although recent approaches to similarity appreciate the structure of mental representations, they differ in the processes posited to operate over these representations. We present an experiment that…

  20. Judgments of brand similarity

    NARCIS (Netherlands)

    Bijmolt, THA; Wedel, M; Pieters, RGM; DeSarbo, WS

    This paper provides empirical insight into the way consumers make pairwise similarity judgments between brands, and how familiarity with the brands, serial position of the pair in a sequence, and the presentation format affect these judgments. Within the similarity judgment process both the

  1. An improved experimental model for peripheral neuropathy in rats

    Directory of Open Access Journals (Sweden)

    Q.M. Dias

    2013-03-01

Full Text Available A modification of the Bennett and Xie chronic constriction injury model of peripheral painful neuropathy was developed in rats. Under tribromoethanol anesthesia, a single ligature with 100% cotton glace thread was placed around the right sciatic nerve proximal to its trifurcation. The change in the hind paw reflex threshold after mechanical stimulation observed with this modified model was compared to the change in threshold observed in rats subjected to the Bennett and Xie or the Kim and Chung spinal ligation models. The mechanical threshold was measured with an automated electronic von Frey apparatus 0, 2, 7, and 14 days after surgery, and this threshold was compared to that measured in sham rats. All injury models produced significant hyperalgesia in the operated hind limb. The modified model produced mean ± SD thresholds in g (19.98 ± 3.08, 14.98 ± 1.86, and 13.80 ± 1.00 at 2, 7, and 14 days after surgery, respectively) similar to those obtained with the spinal ligation model (20.03 ± 1.99, 13.46 ± 2.55, and 12.46 ± 2.38 at 2, 7, and 14 days after surgery, respectively), but less variable when compared to the Bennett and Xie model (21.20 ± 8.06, 18.61 ± 7.69, and 18.76 ± 6.46 at 2, 7, and 14 days after surgery, respectively). The modified method required less surgical skill than the spinal nerve ligation model.

  2. An improved experimental model for peripheral neuropathy in rats

    International Nuclear Information System (INIS)

    Dias, Q.M.; Rossaneis, A.C.; Fais, R.S.; Prado, W.A.

    2013-01-01

    A modification of the Bennett and Xie chronic constriction injury model of peripheral painful neuropathy was developed in rats. Under tribromoethanol anesthesia, a single ligature with 100% cotton glace thread was placed around the right sciatic nerve proximal to its trifurcation. The change in the hind paw reflex threshold after mechanical stimulation observed with this modified model was compared to the change in threshold observed in rats subjected to the Bennett and Xie or the Kim and Chung spinal ligation models. The mechanical threshold was measured with an automated electronic von Frey apparatus 0, 2, 7, and 14 days after surgery, and this threshold was compared to that measured in sham rats. All injury models produced significant hyperalgesia in the operated hind limb. The modified model produced mean ± SD thresholds in g (19.98 ± 3.08, 14.98 ± 1.86, and 13.80 ± 1.00 at 2, 7, and 14 days after surgery, respectively) similar to those obtained with the spinal ligation model (20.03 ± 1.99, 13.46 ± 2.55, and 12.46 ± 2.38 at 2, 7, and 14 days after surgery, respectively), but less variable when compared to the Bennett and Xie model (21.20 ± 8.06, 18.61 ± 7.69, and 18.76 ± 6.46 at 2, 7, and 14 days after surgery, respectively). The modified method required less surgical skill than the spinal nerve ligation model

  3. An improved experimental model for peripheral neuropathy in rats

    Directory of Open Access Journals (Sweden)

    Q.M. Dias

Full Text Available A modification of the Bennett and Xie chronic constriction injury model of peripheral painful neuropathy was developed in rats. Under tribromoethanol anesthesia, a single ligature with 100% cotton glace thread was placed around the right sciatic nerve proximal to its trifurcation. The change in the hind paw reflex threshold after mechanical stimulation observed with this modified model was compared to the change in threshold observed in rats subjected to the Bennett and Xie or the Kim and Chung spinal ligation models. The mechanical threshold was measured with an automated electronic von Frey apparatus 0, 2, 7, and 14 days after surgery, and this threshold was compared to that measured in sham rats. All injury models produced significant hyperalgesia in the operated hind limb. The modified model produced mean ± SD thresholds in g (19.98 ± 3.08, 14.98 ± 1.86, and 13.80 ± 1.00 at 2, 7, and 14 days after surgery, respectively) similar to those obtained with the spinal ligation model (20.03 ± 1.99, 13.46 ± 2.55, and 12.46 ± 2.38 at 2, 7, and 14 days after surgery, respectively), but less variable when compared to the Bennett and Xie model (21.20 ± 8.06, 18.61 ± 7.69, and 18.76 ± 6.46 at 2, 7, and 14 days after surgery, respectively). The modified method required less surgical skill than the spinal nerve ligation model.

  4. An improved experimental model for peripheral neuropathy in rats

    Energy Technology Data Exchange (ETDEWEB)

    Dias, Q.M.; Rossaneis, A.C.; Fais, R.S.; Prado, W.A. [Departamento de Farmacologia, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, SP (Brazil)

    2013-03-15

    A modification of the Bennett and Xie chronic constriction injury model of peripheral painful neuropathy was developed in rats. Under tribromoethanol anesthesia, a single ligature with 100% cotton glace thread was placed around the right sciatic nerve proximal to its trifurcation. The change in the hind paw reflex threshold after mechanical stimulation observed with this modified model was compared to the change in threshold observed in rats subjected to the Bennett and Xie or the Kim and Chung spinal ligation models. The mechanical threshold was measured with an automated electronic von Frey apparatus 0, 2, 7, and 14 days after surgery, and this threshold was compared to that measured in sham rats. All injury models produced significant hyperalgesia in the operated hind limb. The modified model produced mean ± SD thresholds in g (19.98 ± 3.08, 14.98 ± 1.86, and 13.80 ± 1.00 at 2, 7, and 14 days after surgery, respectively) similar to those obtained with the spinal ligation model (20.03 ± 1.99, 13.46 ± 2.55, and 12.46 ± 2.38 at 2, 7, and 14 days after surgery, respectively), but less variable when compared to the Bennett and Xie model (21.20 ± 8.06, 18.61 ± 7.69, and 18.76 ± 6.46 at 2, 7, and 14 days after surgery, respectively). The modified method required less surgical skill than the spinal nerve ligation model.

  5. Self-similar factor approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.; Sornette, D.

    2003-01-01

The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the self-similar exponential approximants and self-similar root approximants obtained earlier. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Padé approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which include a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Padé approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties.
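A commonly quoted form of the construction described above, recalled here from the self-similar approximation literature as an assumption rather than a quote from this abstract: for a series normalised so that f(0) = 1,

```latex
f(x) \;\simeq\; 1 + a_1 x + a_2 x^2 + \cdots,
\qquad
f_k^{*}(x) \;=\; \prod_{i=1}^{N_k} \left(1 + A_i x\right)^{n_i},
```

where the parameters A_i and n_i are fixed by the accuracy-through-order conditions: re-expanding f_k^*(x) in powers of x must reproduce the known coefficients a_1, ..., a_k of the original asymptotic expansion.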

  6. Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency

    Science.gov (United States)

    Aikens, Kurt; Craft, Kyle; Redman, Andrew

    2015-11-01

The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable for wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero pressure gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  7. Estimating the surface layer refractive index structure constant over snow and sea ice using Monin-Obukhov similarity theory with a mesoscale atmospheric model.

    Science.gov (United States)

    Qing, Chun; Wu, Xiaoqing; Huang, Honghua; Tian, Qiguo; Zhu, Wenyue; Rao, Ruizhong; Li, Xuebin

    2016-09-05

Since systematic direct measurements of the refractive index structure constant (Cn2) for many climates and seasons are not available, an indirect approach is developed in which Cn2 is estimated from mesoscale atmospheric model outputs. In previous work, we presented an approach in which a state-of-the-art mesoscale atmospheric model, the Weather Research and Forecasting (WRF) model, is coupled with Monin-Obukhov similarity (MOS) theory to estimate surface layer Cn2 over the ocean. This paper focuses on surface layer Cn2 over snow and sea ice, extending the WRF-based estimation of surface layer Cn2 to ground-based optical application requirements. The approach is validated against the corresponding 9-day Cn2 dataset from a field campaign of the 30th Chinese National Antarctic Research Expedition (CHINARE). We employ several statistical operators to assess how this approach performs. In addition, we present an independent analysis of its performance using contingency tables, which provide key supplementary information beyond the statistical operators. Together, these methods make our analysis more robust and confirm the excellent performance of this approach. Reasonably good agreement in trend and magnitude is found between estimated values and measurements overall, and the estimated Cn2 values are even better than those obtained with this approach over the ocean surface layer. This encouraging performance has concrete practical implications for ground-based optical applications over snow and sea ice.
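The final step of such estimates is commonly the conversion of the temperature structure constant CT2 (itself obtained from Monin-Obukhov scaling functions) to Cn2; a sketch using the standard optical-wavelength relation, with purely illustrative inputs:

```python
# Hedged sketch of a standard last step in MOS-based Cn2 estimation:
#   Cn2 = (79e-6 * P / T**2)**2 * CT2,
# with pressure P in hPa and temperature T in K (optical wavelengths).
# The CT2 value below is illustrative, not a CHINARE measurement.

def cn2_from_ct2(ct2, P_hpa, T_kelvin):
    """Refractive index structure constant from temperature structure constant."""
    return (79e-6 * P_hpa / T_kelvin**2) ** 2 * ct2

# e.g. cold near-surface conditions (illustrative numbers only)
cn2 = cn2_from_ct2(ct2=5e-3, P_hpa=980.0, T_kelvin=250.0)   # ~1e-14 m^(-2/3)
```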

  8. Use of natural geochemical tracers to improve reservoir simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Huseby, O.; Chatzichristos, C.; Sagen, J.; Muller, J.; Kleven, R.; Bennett, B.; Larter, S.; Stubos, A.K.; Adler, P.M.

    2005-01-01

This article introduces a methodology for integrating geochemical data in reservoir simulations to improve hydrocarbon reservoir models. The method exploits routine measurements of naturally existing inorganic ion concentration in hydrocarbon reservoir production wells, and uses the ions as non-partitioning water tracers. The methodology is demonstrated on a North Sea field case, using the field's reservoir model, together with geochemical information (SO₄²⁻, Mg²⁺, K⁺, Ba²⁺, Sr²⁺, Ca²⁺, and Cl⁻ concentrations) from the field's producers. From the data-set we show that some of the ions behave almost as ideal sea-water tracers, i.e. without sorption to the matrix, ion-exchange with the matrix or scale-formation with other ions in the formation water. Moreover, the dataset shows that ion concentrations in pure formation-water vary according to formation. This information can be used to allocate produced water to specific water-producing zones in commingled production. Based on an evaluation of the applicability of the available data, one inorganic component, SO₄²⁻, is used as a natural seawater tracer. Introducing SO₄²⁻ as a natural tracer in a tracer simulation has revealed a potential for improvements of the reservoir model. By tracking the injected seawater it was possible to identify underestimated fault lengths in the reservoir model. The demonstration confirms that geochemical data are valuable additional information for reservoir characterization, and shows that integration of geochemical data into reservoir simulation procedures can improve reservoir simulation models. (author)
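For a conservative ion such as sulfate, the produced-water seawater fraction follows from two-end-member mixing; a sketch with illustrative (not field) concentrations:

```python
# Sketch of using a conservative ion as a natural seawater tracer: under
# two-end-member mixing, a sample concentration is a linear blend of the
# formation-water and injected-seawater end members.
#   c_sample = f * c_seawater + (1 - f) * c_formation
# All concentrations below are illustrative, not North Sea field data.

def seawater_fraction(c_sample, c_formation, c_seawater):
    """Fraction f of injected seawater in a produced-water sample."""
    return (c_sample - c_formation) / (c_seawater - c_formation)

f = seawater_fraction(c_sample=1200.0,     # mg/L in produced water
                      c_formation=50.0,    # formation-water end member
                      c_seawater=2700.0)   # injected-seawater end member
# f ~ 0.43: roughly 43% of the produced water is injected seawater
```

Tracking this fraction over time per well is what lets injected seawater breakthrough be compared against the reservoir model's tracer predictions.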

  9. Improved dust representation in the Community Atmosphere Model

    Science.gov (United States)

    Albani, S.; Mahowald, N. M.; Perry, A. T.; Scanza, R. A.; Zender, C. S.; Heavens, N. G.; Maggi, V.; Kok, J. F.; Otto-Bliesner, B. L.

    2014-09-01

    Aerosol-climate interactions constitute one of the major sources of uncertainty in assessing changes in aerosol forcing in the anthropocene as well as understanding glacial-interglacial cycles. Here we focus on improving the representation of mineral dust in the Community Atmosphere Model and assessing the impacts of the improvements in terms of direct effects on the radiative balance of the atmosphere. We simulated the dust cycle using different parameterization sets for dust emission, size distribution, and optical properties. Comparing the results of these simulations with observations of concentration, deposition, and aerosol optical depth allows us to refine the representation of the dust cycle and its climate impacts. We propose a tuning method for dust parameterizations to allow the dust module to work across the wide variety of parameter settings which can be used within the Community Atmosphere Model. Our results include a better representation of the dust cycle, most notably for the improved size distribution. The estimated net top of atmosphere direct dust radiative forcing is -0.23 ± 0.14 W/m2 for present day and -0.32 ± 0.20 W/m2 at the Last Glacial Maximum. From our study and sensitivity tests, we also derive some general relevant findings, supporting the concept that the magnitude of the modeled dust cycle is sensitive to the observational data sets and size distribution chosen to constrain the model as well as the meteorological forcing data, even within the same modeling framework, and that the direct radiative forcing of dust is strongly sensitive to the optical properties and size distribution used.

  10. The semantic similarity ensemble

    Directory of Open Access Journals (Sweden)

    Andrea Ballatore

    2013-12-01

Full Text Available Computational measures of semantic similarity between geographic terms provide valuable support across geographic information retrieval, data mining, and information integration. To date, a wide variety of approaches to geo-semantic similarity have been devised. A judgment of similarity is not intrinsically right or wrong, but obtains a certain degree of cognitive plausibility, depending on how closely it mimics human behavior. Thus, selecting the most appropriate measure for a specific task is a significant challenge. To address this issue, we make an analogy between computational similarity measures and soliciting domain expert opinions, which incorporate a subjective set of beliefs, perceptions, hypotheses, and epistemic biases. Following this analogy, we define the semantic similarity ensemble (SSE) as a composition of different similarity measures, acting as a panel of experts having to reach a decision on the semantic similarity of a set of geographic terms. The approach is evaluated in comparison to human judgments, and results indicate that an SSE performs better than the average of its parts. Although the best member tends to outperform the ensemble, every ensemble outperforms the average performance of its members. Hence, in contexts where the best measure is unknown, the ensemble provides a more cognitively plausible approach.
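The ensemble idea can be sketched as averaging member scores for a term pair; the member measures below are trivial stand-ins, not the measures evaluated in the paper:

```python
# Sketch of the SSE idea: several similarity measures act as a panel of
# experts and the ensemble score is their mean. The member measures here
# are hypothetical stand-ins that map a term pair to [0, 1].

def ensemble(measures, a, b):
    scores = [m(a, b) for m in measures]
    return sum(scores) / len(scores)

m1 = lambda a, b: 1.0 if a == b else 0.6   # stand-in measure 1
m2 = lambda a, b: 1.0 if a == b else 0.4   # stand-in measure 2
m3 = lambda a, b: 1.0 if a == b else 0.8   # stand-in measure 3

sse = ensemble([m1, m2, m3], "river", "stream")   # mean of 0.6, 0.4, 0.8
```

Averaging damps the idiosyncratic bias of any single member, which is why the ensemble beats the average of its parts even when one member happens to be best.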

  11. An improved model for the Earth's gravity field

    Science.gov (United States)

    Tapley, B. D.; Shum, C. K.; Yuan, D. N.; Ries, J. C.; Schutz, B. E.

    1989-01-01

    An improved model for the Earth's gravity field, TEG-1, was determined using data sets from fourteen satellites, spanning the inclination ranges from 15 to 115 deg, and global surface gravity anomaly data. The satellite measurements include laser ranging data, Doppler range-rate data, and satellite-to-ocean radar altimeter data measurements, which include the direct height measurement and the differenced measurements at ground track crossings (crossover measurements). Also determined was another gravity field model, TEG-1S, which included all the data sets in TEG-1 with the exception of direct altimeter data. The effort has included an intense scrutiny of the gravity field solution methodology. The estimated parameters included geopotential coefficients complete to degree and order 50 with selected higher order coefficients, ocean and solid Earth tide parameters, Doppler tracking station coordinates and the quasi-stationary sea surface topography. Extensive error analysis and calibration of the formal covariance matrix indicate that the gravity field model is a significant improvement over previous models and can be used for general applications in geodesy.
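Evaluation of such a spherical-harmonic gravity field can be sketched with the expansion truncated to the dominant zonal term J2 (TEG-1 itself carries coefficients to degree and order 50; constants below are standard published values, not TEG-1 coefficients):

```python
import math

# Sketch of evaluating a zonal-harmonic geopotential, truncated to degree 2:
#   U(r, phi) = (GM / r) * [1 - J2 * (a/r)**2 * P2(sin phi)],
# where P2 is the degree-2 Legendre polynomial and phi is latitude.

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
A_E = 6378137.0          # equatorial radius, m
J2 = 1.08263e-3          # degree-2 zonal coefficient

def potential(r, lat_rad):
    s = math.sin(lat_rad)
    p2 = 0.5 * (3.0 * s * s - 1.0)        # Legendre polynomial P2(sin phi)
    return GM / r * (1.0 - J2 * (A_E / r) ** 2 * p2)

u_eq = potential(7000e3, 0.0)             # equator, 7000 km radius
u_pole = potential(7000e3, math.pi / 2)   # pole, same radius
```

Even this single extra term shows the equator-pole asymmetry that full models refine with hundreds of coefficients plus tide and station parameters.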

  12. Managing health care decisions and improvement through simulation modeling.

    Science.gov (United States)

    Forsberg, Helena Hvitfeldt; Aronsson, Håkan; Keller, Christina; Lindblad, Staffan

    2011-01-01

    Simulation modeling is a way to test changes in a computerized environment to give ideas for improvements before implementation. This article reviews research literature on simulation modeling as support for health care decision making. The aim is to investigate the experience and potential value of such decision support and quality of articles retrieved. A literature search was conducted, and the selection criteria yielded 59 articles derived from diverse applications and methods. Most met the stated research-quality criteria. This review identified how simulation can facilitate decision making and that it may induce learning. Furthermore, simulation offers immediate feedback about proposed changes, allows analysis of scenarios, and promotes communication on building a shared system view and understanding of how a complex system works. However, only 14 of the 59 articles reported on implementation experiences, including how decision making was supported. On the basis of these articles, we proposed steps essential for the success of simulation projects, not just in the computer, but also in clinical reality. We also presented a novel concept combining simulation modeling with the established plan-do-study-act cycle for improvement. Future scientific inquiries concerning implementation, impact, and the value for health care management are needed to realize the full potential of simulation modeling.

  13. From undefined red smear cheese consortia to minimal model communities both exhibiting similar anti-listerial activity on a cheese-like matrix.

    Science.gov (United States)

    Imran, M; Desmasures, N; Vernoux, J-P

    2010-12-01

Starting from one undefined cheese smear consortium exhibiting anti-listerial activity (signal) at 15 °C, 50 yeasts and 39 bacteria were identified by partial rDNA sequencing. Microbial communities were constructed either by an addition or by an erosion approach, with the aim of obtaining minimal communities having a signal similar to that of the initial smear. The signal of these microbial communities was monitored in a cheese microcosm for 14 days under ripening conditions. In the addition scheme, strains having significant signals were mixed step by step. Five-member communities, obtained by addition of a Gram negative bacterium to two yeasts and two Gram positive bacteria, enhanced the signal dramatically, in contrast to six-member communities including two Gram negative bacteria. In the erosion approach, a progressive reduction of the 89 initial strains was performed. While intermediate communities (89, 44, and 22 members) exhibited a lower signal than the initial smear consortium, eleven- and six-member communities gave an almost equally efficient signal. It was noteworthy that the final minimal model communities obtained by the erosion and addition approaches both had anti-listerial activity while consisting of different strains. In conclusion, some minimal model communities can have higher anti-listerial effectiveness than individual strains or the initial 89 micro-organisms from the smear. Thus, microbial interactions are involved in the production and modulation of anti-listerial signals in cheese surface communities. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. Improved stoves in India: A study of sustainable business models

    International Nuclear Information System (INIS)

    Shrimali, Gireesh; Slaski, Xander; Thurber, Mark C.; Zerriffi, Hisham

    2011-01-01

    Burning of biomass for cooking is associated with health problems and climate change impacts. Many previous efforts to disseminate improved stoves – primarily by governments and NGOs – have not been successful. Based on interviews with 12 organizations selling improved biomass stoves, we assess the results to date and future prospects of commercial stove operations in India. Specifically, we consider how the ability of these businesses to achieve scale and become self-sustaining has been influenced by six elements of their respective business models: design, customers targeted, financing, marketing, channel strategy, and organizational characteristics. The two companies with the most stoves in the field shared in common generous enterprise financing, a sophisticated approach to developing a sales channel, and many person-years of management experience in marketing and operations. And yet the financial sustainability of improved stove sales to households remains far from assured. The only company in our sample with demonstrated profitability is a family-owned business selling to commercial rather than household customers. The stove sales leader is itself now turning to the commercial segment to maintain flagging cash flow, casting doubt on the likelihood of large positive impacts on health from sales to households in the near term. - Highlights: ► Business models to sell improved stoves can be viable in India. ► Commercial stove efforts may not be able to deliver all the benefits hoped for. ► The government could play a useful role if policies are targeted and well thought-out. ► Develops models for that hard-to-define entity mixing business and charity.

  15. Improved Hydrology over Peatlands in a Global Land Modeling System

    Science.gov (United States)

    Bechtold, M.; Delannoy, G.; Reichle, R.; Koster, R.; Mahanama, S.; Roose, Dirk

    2018-01-01

    Peatlands of the Northern Hemisphere represent an important carbon pool that mainly accumulated since the last ice age under permanently wet conditions in specific geological and climatic settings. The carbon balance of peatlands is closely coupled to water table dynamics. Consequently, the future carbon balance over peatlands is strongly dependent on how hydrology in peatlands will react to changing boundary conditions, e.g. due to climate change or regional water level drawdown of connected aquifers or streams. Global land surface modeling over organic-rich regions can provide valuable global-scale insights on where and how peatlands are in transition due to changing boundary conditions. However, the current global land surface models are not able to reproduce typical hydrological dynamics in peatlands well. We implemented specific structural and parametric changes to account for key hydrological characteristics of peatlands into NASA's GEOS-5 Catchment Land Surface Model (CLSM, Koster et al. 2000). The main modifications pertain to the modeling of partial inundation, and the definition of peatland-specific runoff and evapotranspiration schemes. We ran a set of simulations on a high performance cluster using different CLSM configurations and validated the results with a newly compiled global in-situ dataset of water table depths in peatlands. The results demonstrate that an update of soil hydraulic properties for peat soils alone does not improve the performance of CLSM over peatlands. However, structural model changes for peatlands are able to improve the skill metrics for water table depth. The validation results for the water table depth indicate a reduction of the bias from 2.5 to 0.2 m, and an improvement of the temporal correlation coefficient from 0.5 to 0.65, and from 0.4 to 0.55 for the anomalies. Our validation data set includes both bogs (rain-fed) and fens (ground and/or surface water influence) and reveals that the metrics improved less for fens. 
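The validation metrics quoted above (bias and temporal correlation of water table depth) can be sketched as follows; the modeled and observed series are invented for illustration:

```python
# Sketch of the skill metrics used in such validations: mean bias and
# temporal Pearson correlation between modeled and in-situ water table
# depth. (Anomaly correlation applies the same Pearson formula to series
# with their seasonal climatology removed.) Series below are illustrative.

def bias(model, obs):
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

obs = [0.10, 0.25, 0.40, 0.30, 0.15]   # observed water table depth, m
mod = [0.35, 0.55, 0.60, 0.50, 0.40]   # a biased but well-correlated model

b = bias(mod, obs)      # positive: modeled water table too deep on average
r = pearson(mod, obs)   # high: temporal dynamics captured despite the bias
```

Separating bias from correlation is what lets a study report, as here, that structural changes shrink the bias while also raising the correlation.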

  16. Life course models: improving interpretation by consideration of total effects.

    Science.gov (United States)

    Green, Michael J; Popham, Frank

    2017-06-01

    Life course epidemiology has used models of accumulation and critical or sensitive periods to examine the importance of exposure timing in disease aetiology. These models are usually used to describe the direct effects of exposures over the life course. In comparison with consideration of direct effects only, we show how consideration of total effects improves interpretation of these models, giving clearer notions of when it will be most effective to intervene. We show how life course variation in the total effects depends on the magnitude of the direct effects and the stability of the exposure. We discuss interpretation in terms of total, direct and indirect effects and highlight the causal assumptions required for conclusions as to the most effective timing of interventions. © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association.
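    The decomposition into total, direct and indirect effects can be illustrated with a minimal simulated example, assuming linear effects and no unmeasured confounding (both assumptions for illustration, not claims about the authors' method): an early exposure x1 tracks into a later exposure x2 with stability s, and the total effect of x1, recovered by regressing the outcome on x1 alone, splits into a direct part and a part transmitted through x2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
s, d1, d2 = 0.7, 0.3, 0.5           # exposure stability and direct effects

x1 = rng.normal(size=n)              # early-life exposure
x2 = s * x1 + rng.normal(size=n)     # later exposure, tracking x1 with stability s
y = d1 * x1 + d2 * x2 + rng.normal(size=n)

# Total effect of x1: regress y on x1 alone.
X = np.column_stack([np.ones(n), x1])
total = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Direct effect of x1: regress y on x1 and x2 jointly.
X = np.column_stack([np.ones(n), x1, x2])
direct = np.linalg.lstsq(X, y, rcond=None)[0][1]

indirect = total - direct            # effect transmitted through x2
# recovers total ~ d1 + d2*s = 0.65, direct ~ 0.30, indirect ~ 0.35
print(round(total, 2), round(direct, 2), round(indirect, 2))
```

    The example makes the paper's point concrete: the more stable the exposure (larger s), the larger the total effect of the early measurement relative to its direct effect, which changes where intervention is most effective.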

  17. Improving Saliency Models by Predicting Human Fixation Patches

    KAUST Repository

    Dubey, Rachit

    2015-04-16

    There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e. a high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84% and eliminates non-fixation patches with an accuracy of 84%, demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.

  18. Improving Saliency Models by Predicting Human Fixation Patches

    KAUST Repository

    Dubey, Rachit; Dave, Akshat; Ghanem, Bernard

    2015-01-01

    There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e. a high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84% and eliminates non-fixation patches with an accuracy of 84%, demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.

  19. Effect of quantum learning model in improving creativity and memory

    Science.gov (United States)

    Sujatmika, S.; Hasanah, D.; Hakim, L. L.

    2018-04-01

    Quantum learning is a combination of the many interactions that occur during learning. The model can be applied through current and interesting topics, contextual material, repetition, and opportunities for students to demonstrate their abilities. The bases of the quantum learning model are left-brain and right-brain theory, the triune brain model, visual, auditory, and kinesthetic learning styles, and game, symbol, holistic, and experiential learning theory. Creativity plays an important role in success in the working world: it opens alternative ways to solve problems or to create something new. Good memory likewise plays a role in the success of learning. Through quantum learning, students use all of their abilities, take an interest in learning, and create their own ways of memorizing the concepts of the material being studied. From this idea, the researchers hypothesize that quantum learning models can improve students' creativity and memory.

  20. Dialect topic modeling for improved consumer medical search.

    Science.gov (United States)

    Crain, Steven P; Yang, Shuang-Hong; Zha, Hongyuan; Jiao, Yu

    2010-11-13

    Access to health information by consumers is hampered by a fundamental language gap. Current attempts to close the gap leverage consumer oriented health information, which does not, however, have good coverage of slang medical terminology. In this paper, we present a Bayesian model to automatically align documents with different dialects (slang, common and technical) while extracting their semantic topics. The proposed diaTM model enables effective information retrieval, even when the query contains slang words, by explicitly modeling the mixtures of dialects in documents and the joint influence of dialects and topics on word selection. Simulations using consumer questions to retrieve medical information from a corpus of medical documents show that diaTM achieves a 25% improvement in information retrieval relevance by nDCG@5 over an LDA baseline.
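    nDCG@5, the retrieval metric reported here, divides the discounted cumulative gain of the top five results by that of the ideal ordering of the same relevance judgments. The sketch below uses the standard graded-relevance form; the paper may use a different gain or discount variant, so treat the exact formula as an assumption.

```python
import math

def dcg_at_k(rels, k=5):
    """Discounted cumulative gain over the top-k retrieved documents."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))

def ndcg_at_k(rels, k=5):
    """nDCG@k: DCG of the ranking divided by the DCG of the ideal ranking."""
    ideal = sorted(rels, reverse=True)
    denom = dcg_at_k(ideal, k)
    return dcg_at_k(rels, k) / denom if denom > 0 else 0.0

# A ranking that buries the most relevant document in third position
# scores below the ideal ordering of the same judgments:
print(round(ndcg_at_k([1, 0, 2, 0, 1]), 3))  # ~0.762 vs. 1.0 when ideal
```

    A "25% improvement by nDCG@5" then means the mean of this ratio over test queries rose by a quarter relative to the LDA baseline.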

  1. Dialect Topic Modeling for Improved Consumer Medical Search

    Energy Technology Data Exchange (ETDEWEB)

    Crain, Steven P. [Georgia Institute of Technology; Yang, Shuang-Hong [Georgia Institute of Technology; Zha, Hongyuan [Georgia Institute of Technology; Jiao, Yu [ORNL

    2010-01-01

    Access to health information by consumers is hampered by a fundamental language gap. Current attempts to close the gap leverage consumer oriented health information, which does not, however, have good coverage of slang medical terminology. In this paper, we present a Bayesian model to automatically align documents with different dialects (slang, common and technical) while extracting their semantic topics. The proposed diaTM model enables effective information retrieval, even when the query contains slang words, by explicitly modeling the mixtures of dialects in documents and the joint influence of dialects and topics on word selection. Simulations using consumer questions to retrieve medical information from a corpus of medical documents show that diaTM achieves a 25% improvement in information retrieval relevance by nDCG@5 over an LDA baseline.

  2. Improvements and new features in the IRI-2000 model

    International Nuclear Information System (INIS)

    Bilitza, D.

    2002-01-01

    This paper describes the changes that were implemented in the new version of the COSPAR/URSI International Reference Ionosphere (IRI-2000). These changes are: (1) two new options for the electron density in the D-region, (2) a better functional description of the electron density in the E-F merging region, (3) inclusion of the F1 layer occurrence probability as a new parameter, (4) a new model for the bottomside parameters B{sub 0} and B{sub 1} that greatly improves the representation at low and equatorial latitudes during high solar activities, (5) inclusion of a model for foF2 storm-time updating, (6) a new option for the electron temperature in the topside ionosphere, and (7) inclusion of a model for the equatorial F region ion drift. The main purpose of this paper is to provide the IRI users with examples of the effects of these changes. (author)

  3. Thermal Modeling Method Improvements for SAGE III on ISS

    Science.gov (United States)

    Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; McLeod, Shawn

    2015-01-01

    The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle. A detailed thermal model of the SAGE III payload, which consists of multiple subsystems, has been developed in Thermal Desktop (TD). Many innovative analysis methods have been used in developing this model; these are described in this paper, which builds on a paper presented at TFAWS 2013 describing some of the initial development of efficient methods for SAGE III. The current paper describes additional improvements made since that time. To expedite the correlation of the model to thermal vacuum (TVAC) testing, the chamber and ground support equipment (GSE) models for both TVAC chambers at Langley used to test the payload were incorporated within the thermal model. This allowed TVAC predictions and correlations to be run within the flight model, eliminating the need for separate TVAC models. In one TVAC test, radiant lamps were used, which necessitated shooting rays from the lamps and running in both solar and IR wavebands. A new Dragon model was incorporated, which entailed a change in orientation; that change was made using an assembly, so that any potential new Dragon orbits could be added in the future without modification of the model. The Earth orbit parameters such as albedo and Earth infrared flux were incorporated as time-varying values that change over the course of the orbit; despite being required in one of the ISS documents, this had not been done before by any previous payload. All parameters such as initial temperature, heater voltage, and location of the payload are defined based on the case definition.
For one component, testing was performed in both air and vacuum; incorporating the air convection in a submodel that was

  4. Improving the quality of clinical coding: a comprehensive audit model

    Directory of Open Access Journals (Sweden)

    Hamid Moghaddasi

    2014-04-01

    Full Text Available Introduction: The review of medical records with the aim of assessing the quality of codes has long been conducted in different countries. Auditing medical coding, as an instructive approach, could help to review the quality of codes objectively using defined attributes, and this in turn would lead to improvement of the quality of codes. Method: The current study aimed to present a model for auditing the quality of clinical codes. The audit model was formed after reviewing other audit models, considering their strengths and weaknesses. A clear definition was presented for each quality attribute and more detailed criteria were then set for assessing the quality of codes. Results: The audit tool (based on the quality attributes) included legibility, relevancy, completeness, accuracy, definition and timeliness; this led to the development of an audit model for assessing the quality of medical coding. The Delphi technique was then used to ensure the validity of the model. Conclusion: The inclusive audit model designed could provide a reliable and valid basis for assessing the quality of codes, considering more quality attributes and their clear definitions. The inter-observer check suggested in the method of auditing is of particular importance to ensure the reliability of coding.

  5. Gender similarities and differences.

    Science.gov (United States)

    Hyde, Janet Shibley

    2014-01-01

    Whether men and women are fundamentally different or similar has been debated for more than a century. This review summarizes major theories designed to explain gender differences: evolutionary theories, cognitive social learning theory, sociocultural theory, and expectancy-value theory. The gender similarities hypothesis raises the possibility of theorizing gender similarities. Statistical methods for the analysis of gender differences and similarities are reviewed, including effect sizes, meta-analysis, taxometric analysis, and equivalence testing. Then, relying mainly on evidence from meta-analyses, gender differences are reviewed in cognitive performance (e.g., math performance), personality and social behaviors (e.g., temperament, emotions, aggression, and leadership), and psychological well-being. The evidence on gender differences in variance is summarized. The final sections explore applications of intersectionality and directions for future research.

  6. Crop Model Improvement Reduces the Uncertainty of the Response to Temperature of Multi-Model Ensembles

    Science.gov (United States)

    Maiorano, Andrea; Martre, Pierre; Asseng, Senthold; Ewert, Frank; Mueller, Christoph; Roetter, Reimund P.; Ruane, Alex C.; Semenov, Mikhail A.; Wallach, Daniel; Wang, Enli

    2016-01-01

    To improve climate change impact estimates and to quantify their uncertainty, multi-model ensembles (MMEs) have been suggested. Model improvements can improve the accuracy of simulations and reduce the uncertainty of climate change impact assessments. Furthermore, they can reduce the number of models needed in an MME. Herein, 15 wheat growth models of a larger MME were improved through re-parameterization and/or incorporating or modifying heat stress effects on phenology, leaf growth and senescence, biomass growth, and grain number and size using detailed field experimental data from the USDA Hot Serial Cereal experiment (calibration data set). Simulation results from before and after model improvement were then evaluated with independent field experiments from a CIMMYT worldwide field trial network (evaluation data set). Model improvements decreased the variation (10th to 90th model ensemble percentile range) of grain yields simulated by the MME on average by 39% in the calibration data set and by 26% in the independent evaluation data set for crops grown in mean seasonal temperatures greater than 24 °C. MME mean squared error in simulating grain yield decreased by 37%. A reduction in MME uncertainty range by 27% increased MME prediction skills by 47%. Results suggest that the mean level of variation observed in field experiments and used as a benchmark can be reached with half the number of models in the MME. Improving crop models is therefore important to increase the certainty of model-based impact assessments and to allow more practical, i.e. smaller, MMEs to be used effectively.
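    The uncertainty measure used in this study, the 10th-to-90th model ensemble percentile range of simulated grain yield, can be computed as below. The ensemble sizes match the 15-model MME, but the yield distributions are synthetic stand-ins, not the study's data.

```python
import numpy as np

def ensemble_range(yields):
    """10th-to-90th percentile range of grain yields simulated by an MME.

    `yields` is (n_models, n_sites); the range is computed across models
    at each site. Shapes and distributions here are illustrative.
    """
    p10, p90 = np.percentile(yields, [10, 90], axis=0)
    return p90 - p10

rng = np.random.default_rng(1)
before = rng.normal(6.0, 1.5, size=(15, 30))  # 15 models, 30 trials, t/ha
after = rng.normal(6.0, 0.9, size=(15, 30))   # narrower spread after improvement
reduction = 1 - ensemble_range(after).mean() / ensemble_range(before).mean()
print(f"uncertainty range reduced by {reduction:.0%}")
```

    Averaging this per-site range over trials, before and after model improvement, yields the kind of percentage reductions (39% and 26%) the abstract reports.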

  7. Improving patient handover between teams using a business improvement model: PDSA cycle.

    Science.gov (United States)

    Luther, Vishal; Hammersley, Daniel; Chekairi, Ahmed

    2014-01-01

    Medical admission units are continuously under pressure to move patients off the unit to outlying medical wards and allow for new admissions. In a typical district general hospital, doctors working in these medical wards reported that, on average, three patients each week arrived from the medical admission unit before any handover was received, and a further two patients arrived without any handover at all. A quality improvement project was therefore conducted using a 'Plan, Do, Study, Act' cycle model for improvement to address this issue. P - Plan: as there was no framework to support doctors with handover, a series of standard handover procedures were designed. D - Do: the procedures were disseminated to all staff, and championed by key stakeholders, including the clinical director and matron of the medical admission unit. S - Study: measurements were repeated 3 months later and showed no change in the primary end points. A - Act: the post-take ward round sheet was redesigned, creating a checkbox for a medical admission unit doctor to document that handover had occurred. Nursing staff were prohibited from moving the patient off the ward until this had been completed. This later evolved into a separate handover sheet. Six months later, a repeat study revealed that only one patient each week was arriving before or without a verbal handover. Using a 'Plan, Do, Study, Act' business improvement tool helped to improve patient care.

  8. Personality Similarity and Work-Related Outcomes among African-American Nursing Personnel: A Test of the Supplementary Model of Person-Environment Congruence.

    Science.gov (United States)

    Day, David V.; Bedeian, Arthur G.

    1995-01-01

    Data from 206 nursing service employees (171 African American) and a 5-factor taxonomy of personality were used to test effects of personality similarity on job satisfaction, performance, and tenure. Tenure was significantly predicted by satisfaction and similarity in conscientiousness. No association was found between personality similarity and…

  9. 3D mapping, hydrodynamics and modelling of the freshwater-brine mixing zone in salt flats similar to the Salar de Atacama (Chile)

    Science.gov (United States)

    Marazuela, M. A.; Vázquez-Suñé, E.; Custodio, E.; Palma, T.; García-Gil, A.; Ayora, C.

    2018-06-01

    Salt flat brines are a major source of minerals and especially lithium. Moreover, valuable wetlands with delicate ecologies are also commonly present at the margins of salt flats. Therefore, the efficient and sustainable exploitation of the brines they contain requires detailed knowledge about the hydrogeology of the system. A critical issue is the freshwater-brine mixing zone, which develops as a result of the mass balance between the recharged freshwater and the evaporating brine. The complex processes occurring in salt flats require a three-dimensional (3D) approach to assess the mixing zone geometry. In this study, a 3D map of the mixing zone in a salt flat is presented, using the Salar de Atacama as an example. This mapping procedure is proposed as the basis of computationally efficient three-dimensional numerical models, provided that the hydraulic heads of freshwater and mixed waters are corrected based on their density variations to convert them into brine heads. After this correction, the locations of lagoons and wetlands that are characteristic of the marginal zones of the salt flats coincide with the regional minimum water (brine) heads. The different morphologies of the mixing zone resulting from this 3D mapping have been interpreted using a two-dimensional (2D) flow and transport numerical model of an idealized cross-section of the mixing zone. The result of the model shows a slope of the mixing zone that is similar to that obtained by 3D mapping and lower than in previous models. To explain this geometry, the 2D model was used to evaluate the effects of heterogeneity in the mixing zone geometry. The higher the permeability of the upper aquifer is, the lower the slope and the shallower the mixing zone become. This occurs because most of the freshwater lateral recharge flows through the upper aquifer due to its much higher transmissivity, thus reducing the freshwater head. 
The presence of a few meters of highly permeable materials in the upper part of

  10. Further Improvements to Linear Mixed Models for Genome-Wide Association Studies

    Science.gov (United States)

    Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David

    2014-11-01

    We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science.
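    A GSM of the kind described can be sketched as the average cross-product of standardized genotypes, one common construction (sometimes called the realized relationship matrix). LMM software may use variants such as allele-frequency weighting, so treat the scaling details here as assumptions.

```python
import numpy as np

def genetic_similarity_matrix(G):
    """Realized genetic similarity matrix from a genotype matrix.

    G is (n_individuals, n_snps) with genotypes coded 0/1/2. Each SNP is
    centered and scaled to unit variance, then the GSM is the average
    cross-product over SNPs. This is one common construction; tools may
    use variants (e.g. allele-frequency scaling).
    """
    G = np.asarray(G, float)
    G = G[:, G.std(axis=0) > 0]               # drop monomorphic SNPs
    Z = (G - G.mean(axis=0)) / G.std(axis=0)  # standardize each SNP
    return Z @ Z.T / Z.shape[1]               # pairwise similarity

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(50, 200))        # 50 individuals, 200 SNPs
K = genetic_similarity_matrix(G)
print(np.allclose(np.diag(K).mean(), 1.0))    # average self-similarity is 1
```

    The paper's modifications change which SNPs enter `G` (e.g. only SNPs that predict the phenotype) or mix two such matrices, rather than changing this basic construction.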

  11. Improved Dynamic Modeling of the Cascade Distillation Subsystem and Integration with Models of Other Water Recovery Subsystems

    Science.gov (United States)

    Perry, Bruce; Anderson, Molly

    2015-01-01

    The Cascade Distillation Subsystem (CDS) is a rotary multistage distiller being developed to serve as the primary processor for wastewater recovery during long-duration space missions. The CDS could be integrated with a system similar to the International Space Station (ISS) Water Processor Assembly (WPA) to form a complete Water Recovery System (WRS) for future missions. Independent chemical process simulations with varying levels of detail have previously been developed using Aspen Custom Modeler (ACM) to aid in the analysis of the CDS and several WPA components. The existing CDS simulation could not model behavior during thermal startup and lacked detailed analysis of several key internal processes, including heat transfer between stages. The first part of this paper describes modifications to the ACM model of the CDS that improve its capabilities and the accuracy of its predictions. Notably, the modified version of the model can accurately predict behavior during thermal startup for both NaCl solution and pretreated urine feeds. The model is used to predict how changing operating parameters and design features of the CDS affects its performance, and conclusions from these predictions are discussed. The second part of this paper describes the integration of the modified CDS model and the existing WPA component models into a single WRS model. The integrated model is used to demonstrate the effects that changes to one component can have on the dynamic behavior of the system as a whole.

  12. Implications of improved Higgs mass calculations for supersymmetric models

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, O. [Imperial College, London (United Kingdom). High Energy Physics Group; Dolan, M.J. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Theory Group; Ellis, J. [King' s College, London (United Kingdom). Theoretical Particle Physics and Cosmology Group; and others

    2014-03-15

    We discuss the allowed parameter spaces of supersymmetric scenarios in light of improved Higgs mass predictions provided by FeynHiggs 2.10.0. The Higgs mass predictions combine Feynman-diagrammatic results with a resummation of leading and subleading logarithmic corrections from the stop/top sector, which yield a significant improvement in the region of large stop masses. Scans in the pMSSM parameter space show that, for given values of the soft supersymmetry-breaking parameters, the new logarithmic contributions beyond the two-loop order implemented in FeynHiggs tend to give larger values of the light CP-even Higgs mass, M{sub h}, in the region of large stop masses than previous predictions that were based on a fixed-order Feynman-diagrammatic result, though the differences are generally consistent with the previous estimates of theoretical uncertainties. We re-analyze the parameter spaces of the CMSSM, NUHM1 and NUHM2, taking into account also the constraints from CMS and LHCb measurements of BR(B{sub s}→μ{sup +}μ{sup -}) and ATLAS searches for missing-E{sub T} events using 20/fb of LHC data at 8 TeV. Within the CMSSM, the Higgs mass constraint disfavours tan β ∼ 10, though not in the NUHM1 or NUHM2.

  13. Improving Safe Sleep Modeling in the Hospital through Policy Implementation.

    Science.gov (United States)

    Heitmann, Rachel; Nilles, Ester K; Jeans, Ashley; Moreland, Jackie; Clarke, Chris; McDonald, Morgan F; Warren, Michael D

    2017-11-01

    Introduction: Sleep-related infant deaths are major contributors to Tennessee's high infant mortality rate. The purpose of this initiative was to evaluate the impact of policy-based efforts to improve modeling of safe sleep practices by health care providers in hospital settings across Tennessee. Methods: Safe sleep policies were developed and implemented at 71 hospitals in Tennessee. Policies, at minimum, were required to address staff training on the American Academy of Pediatrics' safe sleep recommendations, correct modeling of infant safe sleep practices, and parent education. Hospital data on process measures related to training and results of crib audits were compiled for analysis. Results: The overall proportion of infants found with any risk factor for unsafe sleep decreased by 45.6% (p ≤ 0.001) from the first crib audit to the last crib audit. Significant decreases were noted for specific risk factors, including infants found asleep not on their back, with a toy or object in the crib, and not sleeping in a crib. Significant improvements were observed at hospitals where printed materials or video were utilized for training staff compared to face-to-face training. Discussion: Statewide implementation of the hospital policy intervention resulted in significant reductions in infants found in unsafe sleep situations. The most common risk factors for sleep-related infant deaths can be modeled in hospitals. This effort has the potential to reduce sleep-related infant deaths and ultimately infant mortality.

  14. Policy modeling for energy efficiency improvement in US industry

    International Nuclear Information System (INIS)

    Worrell, Ernst; Price, Lynn; Ruth, Michael

    2001-01-01

    We are at the beginning of a process of evaluating and modeling the contribution of policies to improve energy efficiency. Three recent policy studies trying to assess the impact of energy efficiency policies in the United States are reviewed. The studies represent an important step in the analysis of climate change mitigation strategies. All studies model the estimated policy impact, rather than the policy itself. Often the policy impacts are based on assumptions, as the effects of a policy are not certain. Most models only incorporate economic (or price) tools, which recent studies have proven to be insufficient to estimate the impacts, costs and benefits of mitigation strategies. The reviewed studies are a first effort to capture the effects of non-price policies. The studies contribute to a better understanding of the role of policies in improving energy efficiency and mitigating climate change. All policy scenarios result in substantial energy savings compared to the baseline scenario used, as well as substantial net benefits to the U.S. economy.

  15. Peer Assessment with Online Tools to Improve Student Modeling

    Science.gov (United States)

    Atkins, Leslie J.

    2012-11-01

    Introductory physics courses often require students to develop precise models of phenomena and represent these with diagrams, including free-body diagrams, light-ray diagrams, and maps of field lines. Instructors expect that students will adopt a certain rigor and precision when constructing these diagrams, but we want that rigor and precision to be an aid to sense-making rather than meeting seemingly arbitrary requirements set by the instructor. By giving students the authority to develop their own models and establish requirements for their diagrams, the sense that these are arbitrary requirements diminishes and students are more likely to see modeling as a sense-making activity. The practice of peer assessment can help students take ownership; however, it can be difficult for instructors to manage. Furthermore, it is not without risk: students can be reluctant to critique their peers, they may view this as the job of the instructor, and there is no guarantee that students will employ greater rigor and precision as a result of peer assessment. In this article, we describe one approach for peer assessment that can establish norms for diagrams in a way that is student driven, where students retain agency and authority in assessing and improving their work. We show that such an approach does indeed improve students' diagrams and abilities to assess their own work, without sacrificing students' authority and agency.

  16. Making Benefit Transfers Work: Deriving and Testing Principles for Value Transfers for Similar and Dissimilar Sites Using a Case Study of the Non-Market Benefits of Water Quality Improvements Across Europe

    DEFF Research Database (Denmark)

    Bateman, Ian; Brouwer, Roy; Ferreri, Silvia

    2011-01-01

    We implement a controlled, multi-site experiment to develop and test guidance principles for benefits transfers. These argue that when transferring across relatively similar sites, simple mean value transfers are to be preferred, but that when sites are relatively dissimilar then value function transfers will yield lower errors. The paper also provides guidance on the appropriate specification of transferable value functions, arguing that these should be developed from theoretical rather than ad-hoc statistical approaches. These principles are tested via a common format valuation study of water quality improvements across five countries. While this provides an idealised testbed, results support the above principles and suggest directions for future transfer studies.

  17. Improvement of airfoil trailing edge bluntness noise model

    Directory of Open Access Journals (Sweden)

    Wei Jun Zhu

    2016-02-01

    Full Text Available In this article, airfoil trailing edge bluntness noise is investigated using both a computational aero-acoustic and a semi-empirical approach. For engineering purposes, one of the most commonly used prediction tools for trailing edge noise is based on semi-empirical approaches, for example, the Brooks, Pope, and Marcolini airfoil noise prediction model developed by Brooks, Pope, and Marcolini (NASA Reference Publication 1218, 1989). It was found in a previous study that the Brooks, Pope, and Marcolini model tends to over-predict noise at high frequencies. Furthermore, it was observed that this was caused by a lack of ability in the model to accurately predict noise from blunt trailing edges. For a more physical understanding of bluntness noise generation, in this study, we also use an advanced in-house developed high-order computational aero-acoustic technique to investigate the details associated with trailing edge bluntness noise. The results from the numerical model form the basis for an improved Brooks, Pope, and Marcolini trailing edge bluntness noise model.

  18. Improved water density feedback model for pressurized water reactors

    International Nuclear Information System (INIS)

    Casadei, A.L.

    1976-01-01

    An improved water density feedback model has been developed for neutron diffusion calculations of PWR cores. This work addresses spectral effects on few-group cross sections due to water density changes, and water density predictions considering open channel and subcooled boiling effects. A homogenized spectral model was also derived using the unit assembly diffusion method for employment in a coarse mesh 3D diffusion computer program. The spectral and water density evaluation models described were incorporated in a 3D diffusion code, and neutronic calculations for a typical PWR were completed for both nominal and accident conditions. Comparison of neutronic calculations employing the open versus the closed channel model for accident conditions indicates that significant safety margin increases can be obtained if subcooled boiling and open channel effects are considered in accident calculations. This is attributed to effects on both core reactivity and power distribution, which result in increased margin to fuel degradation limits. For nominal operating conditions, negligible differences in core reactivity and power distribution exist since flow redistribution and subcooled voids are not significant at such conditions. The results serve to confirm the conservatism of currently employed closed channel feedback methods in accident analysis, and indicate that the model developed in this work can contribute to show increased safety margins for certain accidents

  19. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement

    Science.gov (United States)

    Wu, Alex; Song, Youhong; van Oosterom, Erik J.; Hammer, Graeme L.

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation. PMID:27790232
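
    The mechanistic leaf-level models referred to above are typically of the Farquhar-von Caemmerer-Berry form, in which net assimilation is the minimum of a Rubisco-limited and an electron-transport-limited rate. A minimal sketch (standard textbook parameter values at 25 degC, not values from this review):

```python
# Illustrative Farquhar-type leaf photosynthesis sketch; all parameter
# values are generic 25 degC defaults, assumed for demonstration only.

def leaf_assimilation(ci, par, vcmax=60.0, jmax=120.0, rd=1.0,
                      kc=405.0, ko=278.0, o=210.0, gamma_star=42.75):
    """Net CO2 assimilation (umol m-2 s-1) at intercellular CO2 `ci` (ubar)
    and absorbed light `par` (umol photons m-2 s-1)."""
    # Rubisco (carboxylation) limited rate
    ac = vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o / ko))
    # Electron-transport (RuBP regeneration) limited rate, with a simple
    # saturating approximation for J(PAR)
    j = jmax * par / (par + 2.1 * jmax)
    aj = j * (ci - gamma_star) / (4.0 * ci + 8.0 * gamma_star)
    return min(ac, aj) - rd  # net assimilation

# At low light the leaf is light-limited; at high light Rubisco-limited
low = leaf_assimilation(ci=250.0, par=100.0)
high = leaf_assimilation(ci=250.0, par=1500.0)
```

    Cross-scale frameworks of the kind reviewed here upscale such a leaf model to the canopy and couple the result to a crop growth simulator.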

  1. Improving Permafrost Hydrology Prediction Through Data-Model Integration

    Science.gov (United States)

    Wilson, C. J.; Andresen, C. G.; Atchley, A. L.; Bolton, W. R.; Busey, R.; Coon, E.; Charsley-Groffman, L.

    2017-12-01

    The CMIP5 Earth System Models were unable to adequately predict the fate of the 16 GT of permafrost carbon in a warming climate due to poor representation of Arctic ecosystem processes. The DOE Office of Science Next Generation Ecosystem Experiment (NGEE-Arctic) project aims to reduce uncertainty in the Arctic carbon cycle and its impact on the Earth's climate system through improved representation of the coupled physical, chemical, and biological processes that drive how much buried carbon will be converted to CO2 and CH4, how fast this will happen, which form will dominate, and the degree to which increased plant productivity will offset increased soil carbon emissions. These processes fundamentally depend on the permafrost thaw rate and its influence on surface and subsurface hydrology through thermal erosion, land subsidence, and changes to groundwater flow pathways as soil, bedrock, and alluvial pore ice and massive ground ice melt. LANL and its NGEE colleagues are co-developing data and models to better understand controls on permafrost degradation and improve prediction of the evolution of permafrost and its impact on Arctic hydrology. The LANL Advanced Terrestrial Simulator (ATS) was built on a state-of-the-art HPC software framework to enable the first fully coupled 3-dimensional surface-subsurface thermal-hydrology and land surface deformation simulations of the evolution of the physical Arctic environment. Here we show how field data, including hydrology, snow, vegetation, geochemistry, and soil properties, are informing the development and application of the ATS to improve understanding of controls on permafrost stability and permafrost hydrology. The ATS is being used to inform parameterizations of complex coupled physical, ecological, and biogeochemical processes for implementation in the DOE ACME land model, to better predict the role of changing Arctic hydrology in the global climate system. LA-UR-17-26566.

  2. Similarities and Differences between Individuals Seeking Treatment for Gambling Problems vs. Alcohol and Substance Use Problems in Relation to the Progressive Model of Self-stigma

    Directory of Open Access Journals (Sweden)

    Belle Gavriel-Fried

    2017-06-01

    Full Text Available Aims: People with gambling or substance use problems who are exposed to public stigmatization may internalize and apply it to themselves through a mechanism known as self-stigma. This study applied the Progressive Model of Self-Stigma, which consists of four sequential, interrelated stages (awareness, agreement, application, and harm), to three groups of individuals with gambling, alcohol, and other substance use problems. It explored whether the two guiding assumptions of this model (each stage is a precondition for the following stage, i.e., the stages are trickle-down in nature; and correlations between proximal stages should be larger than correlations between more distant stages) would differentiate people with gambling problems from those with alcohol and other substance use problems in terms of their patterns of self-stigma and the stages of the model. Method: 37 individuals with gambling problems, 60 with alcohol problems, and 51 with drug problems who applied for treatment in rehabilitation centers in Israel in 2015–2016 were recruited. They completed the Self-Stigma of Mental Illness Scale-Short Form, adapted by changing the term "mental health" to gambling, alcohol, or drugs, and the DSM-5 diagnostic criteria for gambling, alcohol, or drug disorder. Results: The assumptions of the model were broadly confirmed: a repeated-measures ANCOVA revealed that in all three groups there was a difference between the first two stages (aware and agree) and the latter stages (apply and harm). In addition, the gambling group differed from the drug use and alcohol groups at the awareness stage: individuals with gambling problems were less likely to be aware of stigma than people with substance use or alcohol problems. Conclusion: The internalization of stigma among individuals with gambling problems tends to work in a similar way as for those with alcohol or drug problems. The differences between the gambling group and the alcohol and other

  3. Improving statistical reasoning theoretical models and practical implications

    CERN Document Server

    Sedlmeier, Peter

    1999-01-01

    This book focuses on how statistical reasoning works and on training programs that can exploit people's natural cognitive capabilities to improve their statistical reasoning. Training programs that take into account findings from evolutionary psychology and instructional theory are shown to have substantially larger effects, more stable over time, than previous training regimens. The theoretical implications are traced in a neural network model of human performance on statistical reasoning problems. This book appeals to judgment and decision making researchers and other cognitive scientists, as well as to teachers of statistics and probabilistic reasoning.

  4. Geomechanical Modeling for Improved CO2 Storage Security

    Science.gov (United States)

    Rutqvist, J.; Rinaldi, A. P.; Cappa, F.; Jeanne, P.; Mazzoldi, A.; Urpi, L.; Vilarrasa, V.; Guglielmi, Y.

    2017-12-01

    This presentation summarizes recent modeling studies on geomechanical aspects of Geologic Carbon Sequestration (GCS), including modeling of potential fault reactivation, seismicity, and CO2 leakage. The model simulations demonstrate that the potential for fault reactivation, the resulting seismic magnitude, and the potential for creating a leakage path through overburden sealing layers (caprock) depend on a number of parameters such as fault orientation, stress field, and rock properties. The model simulations further demonstrate that seismic events large enough to be felt by humans require brittle fault properties as well as continuous fault permeability allowing the pressure to be distributed over a large fault patch that is ruptured at once. Heterogeneous fault properties, which are commonly encountered in faults intersecting multilayered shale/sandstone sequences, effectively reduce the likelihood of inducing felt seismicity and also effectively impede upward CO2 leakage. Site-specific model simulations of the In Salah CO2 storage site showed that deep fractured zone responses and associated seismicity occurred in the brittle fractured sandstone reservoir, but at a very substantial reservoir overpressure close to the magnitude of the least principal stress. It is suggested that coupled geomechanical modeling be used to guide site selection, assist in identifying the locations most prone to unwanted and damaging geomechanical changes, and evaluate the potential consequences of such changes. Geomechanical modeling can be used to better estimate the maximum sustainable injection rate or reservoir pressure and thereby provide improved CO2 storage security. Whether damaging geomechanical changes could actually occur depends very much on the local stress field and local reservoir properties, such as the presence of ductile rock and faults (which can aseismically accommodate the stress and strain induced by

  5. Cerebellar oxidative DNA damage and altered DNA methylation in the BTBR T+tf/J mouse model of autism and similarities with human post mortem cerebellum.

    Directory of Open Access Journals (Sweden)

    Svitlana Shpyleva

    Full Text Available The molecular pathogenesis of autism is complex and involves numerous genomic, epigenomic, proteomic, metabolic, and physiological alterations. Elucidating and understanding the molecular processes underlying the pathogenesis of autism is critical for effective clinical management and prevention of this disorder. The goal of this study is to investigate key molecular alterations postulated to play a role in autism and their role in its pathophysiology. In this study we demonstrate that DNA isolated from the cerebellum of BTBR T+tf/J mice, a relevant mouse model of autism, and from human post-mortem cerebellum of individuals with autism, are both characterized by increased levels of 8-oxo-7-hydrodeoxyguanosine (8-oxodG), 5-methylcytosine (5mC), and 5-hydroxymethylcytosine (5hmC). The increase in 8-oxodG and 5mC content was associated with markedly reduced expression of 8-oxoguanine DNA-glycosylase 1 (Ogg1) and increased expression of de novo DNA methyltransferases 3a and 3b (Dnmt3a and Dnmt3b). Interestingly, the rise in the level of 5hmC occurred without changes in the expression of the ten-eleven translocation (Tet1 and Tet2) genes, but significantly correlated with the presence of 8-oxodG in DNA. This finding, and the similar elevation of 8-oxodG in the cerebellum of individuals with autism and in the BTBR T+tf/J mouse model, warrant future large-scale studies to specifically address the role of OGG1 alterations in the pathogenesis of autism.

  6. Similarity or difference?

    DEFF Research Database (Denmark)

    Villadsen, Anders Ryom

    2013-01-01

    While the organizational structures and strategies of public organizations have attracted substantial research attention among public management scholars, little research has explored how these organizational core dimensions are interconnected and influenced by pressures for similarity....... In this paper I address this topic by exploring the relation between expenditure strategy isomorphism and structure isomorphism in Danish municipalities. Different literatures suggest that organizations exist in concurrent pressures for being similar to and different from other organizations in their field......-shaped relation exists between expenditure strategy isomorphism and structure isomorphism in a longitudinal quantitative study of Danish municipalities....

  7. Improvements to the RADIOM non-LTE model

    Science.gov (United States)

    Busquet, M.; Colombant, D.; Klapisch, M.; Fyfe, D.; Gardner, J.

    2009-12-01

    In 1993, we proposed the RADIOM model [M. Busquet, Phys. Fluids 85 (1993) 4191], in which an ionization temperature Tz is used to derive non-LTE properties from LTE data. Tz is obtained from an "extended Saha equation" in which unbalanced transitions, such as radiative decay, give the non-LTE behavior. Since then, major improvements have been made. Tz has been shown to be more than a heuristic value: it describes the actual distribution of excited and ionized states and can be understood as an "effective temperature". We therefore complement the extended Saha equation by explicitly introducing auto-ionization/dielectronic capture. We also use the SCROLL model to benchmark the computed values of Tz.

  8. Modelling the Role of Human Resource Management in Continuous Improvement

    DEFF Research Database (Denmark)

    Jørgensen, Frances; Hyland, Paul; Kofoed, Lise B.

    2006-01-01

    Although it is widely acknowledged that both Human Resource Management (HRM) and Continuous Improvement (CI) have the potential to positively influence organizational performance, very little attention has been given to how certain HRM practices may support CI, and consequently, a company...... developed by de Leede and Looise (2005) serve as the framework for examining how specific bundles of HRM practices utilized during different phases of the CI implementation process may contribute to sustained organizational performance and enhanced operational performance. The primary contribution...... of the paper is theoretical in nature, as the model developed provides a greater understanding of how HRM can contribute to CI; however, the model also has practical value in that it suggests important relationships between various HRM practices and the behaviors necessary for successful CI. The paper...

  9. Improved Noninterferometric Test of Collapse Models Using Ultracold Cantilevers

    Science.gov (United States)

    Vinante, A.; Mezzena, R.; Falferi, P.; Carlesso, M.; Bassi, A.

    2017-09-01

    Spontaneous collapse models predict that a weak force noise acts on any mechanical system, as a consequence of the collapse of the wave function. Significant upper limits on the collapse rate have been recently inferred from precision mechanical experiments, such as ultracold cantilevers and the space mission LISA Pathfinder. Here, we report new results from an experiment based on a high-Q cantilever cooled to millikelvin temperatures, which is potentially able to improve the current bounds on the continuous spontaneous localization (CSL) model by 1 order of magnitude. High accuracy measurements of the cantilever thermal fluctuations reveal a nonthermal force noise of unknown origin. This excess noise is compatible with the CSL heating predicted by Adler. Several physical mechanisms able to explain the observed noise have been ruled out.

  10. Re-engineering pre-employment check-up systems: a model for improving health services.

    Science.gov (United States)

    Rateb, Said Abdel Hakim; El Nouman, Azza Abdel Razek; Rateb, Moshira Abdel Hakim; Asar, Mohamed Naguib; El Amin, Ayman Mohammed; Gad, Saad abdel Aziz; Mohamed, Mohamed Salah Eldin

    2011-01-01

    The purpose of this paper is to develop a model for improving health services provided by the pre-employment medical fitness check-up system affiliated to Egypt's Health Insurance Organization (HIO). Operations research, notably system re-engineering, is used in six randomly selected centers and findings before and after re-engineering are compared. The re-engineering model follows a systems approach, focusing on three areas: structure, process and outcome. The model is based on six main components: electronic booking, standardized check-up processes, protected medical documents, advanced archiving through an electronic content management (ECM) system, infrastructure development, and capacity building. The model originates mainly from customer needs and expectations. The centers' monthly customer flow increased significantly after re-engineering. The mean time spent per customer cycle improved after re-engineering--18.3 +/- 5.5 minutes as compared to 48.8 +/- 14.5 minutes before. Appointment delay was also significantly decreased from an average 18 to 6.2 days. Both beneficiaries and service providers were significantly more satisfied with the services after re-engineering. The model proves that re-engineering program costs are exceeded by increased revenue. Re-engineering in this study involved multiple structure and process elements. The literature review did not reveal similar re-engineering healthcare packages. Therefore, each element was compared separately. This model is highly recommended for improving service effectiveness and efficiency. This research is the first in Egypt to apply the re-engineering approach to public health systems. Developing user-friendly models for service improvement is an added value.

  11. Molecular structure based property modeling: Development/ improvement of property models through a systematic property-data-model analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol Shivajirao; Sarup, Bent; Sin, Gürkan

    2013-01-01

    models. To make the property-data-model analysis fast and efficient, an approach based on the “molecular structure similarity criteria” to identify molecules (mono-functional, bi-functional, etc.) containing specified set of structural parameters (that is, groups) is employed. The method has been applied...

  12. Flooding Experiments and Modeling for Improved Reactor Safety

    International Nuclear Information System (INIS)

    Solmos, M.; Hogan, K.J.; VIerow, K.

    2008-01-01

    Countercurrent two-phase flow and 'flooding' phenomena in light water reactor systems are being investigated experimentally and analytically to improve the safety of current and future reactors. The aspects to be clarified are the effects of condensation and tube inclination on flooding in large-diameter tubes. The current project aims to improve the level of understanding of flooding mechanisms and to develop an analysis model for more accurate evaluations of flooding in the pressurizer surge line of a Pressurized Water Reactor (PWR). Interest in flooding has recently increased because Countercurrent Flow Limitation (CCFL) in the AP600 pressurizer surge line can affect the vessel refill rate following a small break LOCA, and because analysis of hypothetical severe accidents with the current flooding models in reactor safety codes shows that these models represent the largest uncertainty in the analysis of steam generator tube creep rupture. During a hypothetical station blackout without auxiliary feedwater recovery, should the hot leg become voided, the pressurizer liquid will drain to the hot leg and flooding may occur in the surge line. The flooding model heavily influences the pressurizer emptying rate and the potential for surge line structural failure due to overheating and creep rupture. The air-water test results in vertical tubes are presented in this paper along with a semi-empirical correlation for the onset of flooding. The unique aspects of the study include careful experimentation on large-diameter tubes and an integrated program in which air-water testing provides benchmark knowledge and visualization data from which to conduct steam-water testing.
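
    Semi-empirical onset-of-flooding correlations for vertical tubes are commonly written in Wallis form, sqrt(jg*) + m*sqrt(jf*) = C, with dimensionless superficial velocities. The sketch below uses common textbook constants for sharp-edged vertical tubes, not the correlation developed in this program:

```python
import math

# Illustrative Wallis-type flooding correlation; C and m are generic
# textbook values, assumed here for demonstration only.

G = 9.81  # gravitational acceleration, m/s^2

def dimensionless_flux(j, rho, rho_f, rho_g, d):
    """Wallis dimensionless superficial velocity j* for a phase of density rho."""
    return j * math.sqrt(rho / (G * d * (rho_f - rho_g)))

def gas_velocity_at_flooding(j_f, d, rho_f=998.0, rho_g=1.2, c=0.725, m=1.0):
    """Superficial gas velocity (m/s) at the onset of flooding for a given
    downward liquid superficial velocity j_f (m/s) in a tube of diameter d (m),
    from sqrt(jg*) + m*sqrt(jf*) = c."""
    jf_star = dimensionless_flux(j_f, rho_f, rho_f, rho_g, d)
    root_jg = c - m * math.sqrt(jf_star)
    if root_jg <= 0.0:
        return 0.0  # the liquid flow alone already exceeds the flooding limit
    jg_star = root_jg ** 2
    return jg_star / math.sqrt(rho_g / (G * d * (rho_f - rho_g)))

# Larger tube diameters permit higher gas velocities before flooding
v1 = gas_velocity_at_flooding(j_f=0.05, d=0.05)
v2 = gas_velocity_at_flooding(j_f=0.05, d=0.10)
```

    Experiments such as those described above serve to fit the constants c and m (and their dependence on condensation and inclination) for a specific geometry.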

  13. Improving default risk prediction using Bayesian model uncertainty techniques.

    Science.gov (United States)

    Kazemi, Reza; Mosleh, Ali

    2012-11-01

    Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. The potential of exposure is measured in terms of probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessment of probability of default and transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses the techniques developed in physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis. © 2012 Society for Risk Analysis.
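
    One simple way to combine several agencies' default-probability estimates while accounting for their historical accuracy is a weighted pooling of log-odds shrunk toward a market-wide prior. The sketch below is an illustrative stand-in for the article's fuller Bayesian treatment; all weights and estimates are made-up numbers:

```python
import math

# Accuracy-weighted logarithmic opinion pool for default probabilities.
# This is a simplified illustration of Bayesian aggregation of expert
# estimates, not the model of the article.

def pooled_default_probability(estimates, accuracies, prior=0.02):
    """Weighted average of the experts' log-odds, shrunk toward a prior
    market-wide default rate; weights reflect historical accuracy."""
    def logit(p):
        return math.log(p / (1.0 - p))
    total_w = sum(accuracies)
    # accuracy-weighted mean of the agencies' log-odds
    pooled = sum(w * logit(p) for p, w in zip(estimates, accuracies)) / total_w
    # shrinkage toward the prior grows as total expert weight falls
    lam = total_w / (total_w + 1.0)
    z = lam * pooled + (1.0 - lam) * logit(prior)
    return 1.0 / (1.0 + math.exp(-z))

# Two historically accurate agencies and one historically poor one:
# the poor agency's outlier estimate is heavily down-weighted
p = pooled_default_probability(estimates=[0.01, 0.015, 0.20],
                               accuracies=[0.9, 0.8, 0.2])
```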

  14. Formation of algae growth constitutive relations for improved algae modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Gharagozloo, Patricia E.; Drewry, Jessica Louise.

    2013-01-01

    This SAND report summarizes research conducted as a part of a two year Laboratory Directed Research and Development (LDRD) project to improve our abilities to model algal cultivation. Algae-based biofuels have generated much excitement due to their potentially large oil yield from relatively small land use and without interfering with the food or water supply. Algae mitigate atmospheric CO2 through metabolism. Efficient production of algal biofuels could reduce dependence on foreign oil by providing a domestic renewable energy source. Important factors controlling algal productivity include temperature, nutrient concentrations, salinity, pH, and the light-to-biomass conversion rate. Computational models allow for inexpensive predictions of algae growth kinetics in these non-ideal conditions for various bioreactor sizes and geometries without the need for multiple expensive measurement setups. However, these models need to be calibrated for each algal strain. In this work, we conduct a parametric study of key marine algae strains and apply the findings to a computational model.
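
    Constitutive relations of the kind discussed above typically express the specific growth rate as a maximum rate multiplied by nutrient, light, and temperature limitation factors. A minimal sketch with generic placeholder parameters (not the strain-specific calibrations developed in the report):

```python
import math

# Illustrative algae growth model: Monod nutrient kinetics multiplied by
# light- and temperature-limitation factors, integrated with forward Euler.

def growth_rate(n, i, t, mu_max=1.5, k_n=0.1, k_i=100.0, t_opt=25.0, sigma=8.0):
    """Specific growth rate (1/day) at nutrient n (g/m3), light i (W/m2),
    and temperature t (degC)."""
    f_nutrient = n / (k_n + n)                      # Monod limitation
    f_light = i / (k_i + i)                         # light saturation
    f_temp = math.exp(-((t - t_opt) / sigma) ** 2)  # Gaussian temperature optimum
    return mu_max * f_nutrient * f_light * f_temp

def simulate(biomass, nutrient, i, t, yield_coeff=0.5, days=10.0, dt=0.1):
    """Couple biomass growth to nutrient drawdown; growth stalls as the
    nutrient is exhausted."""
    for _ in range(int(days / dt)):
        db = growth_rate(nutrient, i, t) * biomass * dt
        biomass += db
        nutrient = max(0.0, nutrient - db / yield_coeff)
    return biomass, nutrient

# Batch culture: growth is eventually capped by the available nutrient
final_biomass, final_nutrient = simulate(biomass=0.05, nutrient=2.0, i=300.0, t=25.0)
```

    Calibrating the parameters of such relations against measured growth curves, strain by strain, is the kind of parametric study the report describes.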

  15. Comparing Harmonic Similarity Measures

    NARCIS (Netherlands)

    de Haas, W.B.; Robine, M.; Hanna, P.; Veltkamp, R.C.; Wiering, F.

    2010-01-01

    We present an overview of the most recent developments in polyphonic music retrieval and an experiment in which we compare two harmonic similarity measures. In contrast to earlier work, in this paper we specifically focus on the symbolic chord description as the primary musical representation and

  16. Improving student success using predictive models and data visualisations

    Directory of Open Access Journals (Sweden)

    Hanan Ayad

    2012-08-01

    Full Text Available The need to educate a competitive workforce is a global problem. In the US, for example, despite billions of dollars spent to improve the educational system, approximately 35% of students never finish high school. The dropout rate among some demographic groups is as high as 50–60%. At the college level in the US only 30% of students graduate from 2-year colleges in 3 years or less and approximately 50% graduate from 4-year colleges in 5 years or less. A basic challenge in delivering global education, therefore, is improving student success. By student success we mean improving retention, completion and graduation rates. In this paper we describe a Student Success System (S3 that provides a holistic, analytical view of student academic progress.1 The core of S3 is a flexible predictive modelling engine that uses machine intelligence and statistical techniques to identify at-risk students pre-emptively. S3 also provides a set of advanced data visualisations for reaching diagnostic insights and a case management tool for managing interventions. S3's open modular architecture will also allow integration and plug-ins with both open and proprietary software. Powered by learning analytics, S3 is intended as an end-to-end solution for identifying at-risk students, understanding why they are at risk, designing interventions to mitigate that risk and finally closing the feedback loop by tracking the efficacy of the applied intervention.
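
    The core predictive-modelling idea, scoring each student's risk of non-completion from activity features, can be sketched with a plain logistic regression. The features, synthetic cohort, and thresholds below are illustrative assumptions, not S3's actual model:

```python
import math

# Minimal at-risk prediction sketch: logistic regression trained by
# per-sample gradient descent on a tiny synthetic cohort.

def train_logistic(rows, labels, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression; rows are feature vectors."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def risk(w, b, x):
    """Predicted probability that the student is at risk."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic cohort: features = (logins per week / 10, mean grade / 100)
students = [(0.1, 0.45), (0.2, 0.50), (0.9, 0.85),
            (1.0, 0.90), (0.3, 0.40), (0.8, 0.80)]
at_risk = [1, 1, 0, 0, 1, 0]  # 1 = did not complete
w, b = train_logistic(students, at_risk)
p_low_activity = risk(w, b, (0.15, 0.42))  # disengaged student
p_engaged = risk(w, b, (0.95, 0.88))       # highly engaged student
```

    In a production system the risk scores would feed the visualisation and case-management layers, and intervention outcomes would flow back as new training labels.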

  17. An Effective Model for Improving Global Health Nursing Competence.

    Science.gov (United States)

    Kang, Sun-Joo

    2016-01-01

    This paper proposed an effective model for improving global health nursing competence among undergraduate students. A descriptive case study was conducted by evaluating four programs implemented by the author. All programs were conducted with students majoring in nursing and healthcare, where the researcher was a program director, professor, or facilitator. These programs were analyzed in terms of students' needs assessment, program design, and implementation and evaluation factors. The concept and composition of global nursing competence, identified within previous studies, were deemed appropriate in all of our programs. Program composition varied from curricular to extracurricular domains. During the implementation phase, some of the programs included non-Korean students to improve cultural diversity and overcome language barriers. Qualitative and quantitative surveys were conducted to assess program efficacy. Data triangulation from students' reflective journals was examined. Additionally, students' awareness regarding changes within global health nursing, improved critical thinking, cultural understanding, and global leadership skills were investigated pre- and post-program implementation. The importance of identifying students' needs regarding global nursing competence when developing appropriate curricula is discussed.

  18. An Effective Model for Improving Global Health Nursing Competence

    Directory of Open Access Journals (Sweden)

    Sunjoo Kang

    2016-09-01

    Full Text Available This paper developed an effective model for improving global health nursing competence among undergraduate students. A descriptive case study was conducted by implementing four programs. All programs were conducted with students majoring in nursing and healthcare, where the researcher was a program director, professor, or facilitator. These programs were analyzed in terms of students' needs assessment, program design, and implementation and evaluation factors. The concept and composition of global nursing competence, identified within previous studies, were deemed appropriate in all of our programs. Program composition varied from curricular to extracurricular domains. During the implementation phase, most of the programs included non-Korean students to improve cultural diversity and overcome language barriers. Qualitative and quantitative surveys were conducted to assess program efficacy. Data triangulation from students' reflective journals was examined. Additionally, students' awareness regarding changes within global health nursing, improved critical thinking, cultural understanding, and global leadership skills were investigated pre- and post-program implementation. We discuss the importance of identifying students' needs regarding global nursing competence when developing appropriate curricula.

  19. Improved transcranial magnetic stimulation coil design with realistic head modeling

    Science.gov (United States)

    Crowther, Lawrence; Hadimani, Ravi; Jiles, David

    2013-03-01

    We are investigating transcranial magnetic stimulation (TMS), a noninvasive technique based on electromagnetic induction that stimulates neurons in the brain. TMS can be used as a pain-free alternative to conventional electroconvulsive therapy (ECT), which is still widely implemented for treatment of major depression. Development of improved TMS coils capable of stimulating subcortical regions could also allow TMS to replace invasive deep brain stimulation (DBS), which requires surgical implantation of electrodes in the brain. Our new designs allow new applications of the technique to be established for a variety of diagnostic and therapeutic applications in psychiatric disorders and neurological diseases. Calculation of the fields generated inside the head is vital for the use of this method in treatment. In prior work we implemented a realistic head model, incorporating inhomogeneous tissue structures and electrical conductivities, allowing the site of neuronal activation to be accurately calculated. We will show how we utilize this model in the development of novel TMS coil designs to improve the depth of penetration and localization of the stimulation produced by stimulator coils.

  20. Policy improvement by a model-free Dyna architecture.

    Science.gov (United States)

    Hwang, Kao-Shing; Lo, Chia-Yue

    2013-05-01

    The objective of this paper is to accelerate the process of policy improvement in reinforcement learning. The proposed Dyna-style system combines two learning schemes: one utilizes a temporal difference method for direct learning; the other uses relative values for indirect learning in planning between two successive direct learning cycles. Instead of establishing a complicated world model, the approach introduces a simple predictor of average rewards into the actor-critic architecture in the simulation (planning) mode. The relative value of a state, defined as the accumulated difference between immediate reward and average reward, is used to steer the improvement process in the right direction. The proposed learning scheme is applied to control a pendulum system tracking a desired trajectory, to demonstrate its adaptability and robustness. Through reinforcement signals from the environment, the system takes the appropriate action to drive an unknown dynamic system to track desired outputs in a few learning cycles. Comparisons are made between the proposed model-free method, a connectionist adaptive heuristic critic, and an advanced method of Dyna-Q learning in experiments on labyrinth exploration. The proposed method outperforms its counterparts in terms of elapsed time and convergence rate.
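
    The "relative value" idea, accumulating the differences between immediate reward and a running average-reward estimate, is the core of average-reward TD learning. A toy sketch on an assumed 5-state chain task (all constants and the task itself are illustrative, not the paper's pendulum or labyrinth experiments):

```python
import random

# Model-free relative-value learning on a chain: acting from the goal state
# pays reward 1 and restarts the agent. State values are learned as
# accumulated (reward - average reward) differences.

random.seed(0)
N = 5                      # chain states 0..4; state 4 is the goal
V = [0.0] * N              # relative (differential) state values
avg_r = 0.0                # running estimate of the average reward
alpha, beta, eps = 0.1, 0.01, 0.2

def step(s, a):
    """Collect reward 1 when acting from the goal state (then restart at 0);
    otherwise move one state left (a=-1) or right (a=+1)."""
    if s == N - 1:
        return 0, 1.0
    return max(0, min(N - 1, s + a)), 0.0

s = 0
for _ in range(20000):
    # epsilon-greedy one-step lookahead on the learned relative values
    if random.random() < eps:
        a = random.choice((-1, 1))
    else:
        a = 1 if V[min(N - 1, s + 1)] >= V[max(0, s - 1)] else -1
    s2, r = step(s, a)
    # TD error measured against the average reward, not a discounted return
    delta = r - avg_r + V[s2] - V[s]
    V[s] += alpha * delta
    avg_r += beta * delta
    s = s2
```

    After learning, states closer to the goal carry higher relative value, which is exactly the gradient the planning phase uses to steer policy improvement.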

  1. Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling

    Science.gov (United States)

    Saksena, S.; Dey, S.; Merwade, V.

    2016-12-01

Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs are complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and a 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Moreover, the impact of incorporating river bathymetry is more significant in the 2D model than in the 1D model.

  2. Brief Communication: Upper Air Relaxation in RACMO2 Significantly Improves Modelled Interannual Surface Mass Balance Variability in Antarctica

    Science.gov (United States)

    van de Berg, W. J.; Medley, B.

    2016-01-01

    The Regional Atmospheric Climate Model (RACMO2) has been a powerful tool for improving surface mass balance (SMB) estimates from GCMs or reanalyses. However, new yearly SMB observations for West Antarctica show that the modelled interannual variability in SMB is poorly simulated by RACMO2, in contrast to ERA-Interim, which resolves this variability well. In an attempt to remedy RACMO2 performance, we included additional upper-air relaxation (UAR) in RACMO2. With UAR, the correlation to observations is similar for RACMO2 and ERA-Interim. The spatial SMB patterns and ice-sheet-integrated SMB modelled using UAR remain very similar to the estimates of RACMO2 without UAR. We only observe an upstream smoothing of precipitation in regions with very steep topography like the Antarctic Peninsula. We conclude that UAR is a useful improvement for regional climate model simulations, although results in regions with steep topography should be treated with care.

  3. The Urgent Need for Improved Climate Models and Predictions

    Science.gov (United States)

    Goddard, Lisa; Baethgen, Walter; Kirtman, Ben; Meehl, Gerald

    2009-09-01

    An investment over the next 10 years of the order of US$2 billion for developing improved climate models was recommended in a report (http://wcrp.wmo.int/documents/WCRP_WorldModellingSummit_Jan2009.pdf) from the May 2008 World Modelling Summit for Climate Prediction, held in Reading, United Kingdom, and presented by the World Climate Research Programme. The report indicated that “climate models will, as in the past, play an important, and perhaps central, role in guiding the trillion dollar decisions that the peoples, governments and industries of the world will be making to cope with the consequences of changing climate.” If trillions of dollars are going to be invested in making decisions related to climate impacts, an investment of $2 billion, which is less than 0.1% of that amount, to provide better climate information seems prudent. One example of investment in adaptation is the World Bank's Climate Investment Fund, which has drawn contributions of more than $6 billion for work on clean technologies and adaptation efforts in nine pilot countries and two pilot regions. This is just the beginning of expenditures on adaptation efforts by the World Bank and other mechanisms, focusing on only a small fraction of the nations of the world and primarily aimed at anticipated anthropogenic climate change. Moreover, decisions are being made now, all around the world—by individuals, companies, and governments—that affect people and their livelihoods today, not just 50 or more years in the future. Climate risk management, whether related to projects of the scope of the World Bank's or to the planning and decisions of municipalities, will be best guided by meaningful climate information derived from observations of the past and model predictions of the future.

  4. A workflow learning model to improve geovisual analytics utility.

    Science.gov (United States)

    Roth, Robert E; Maceachren, Alan M; McCabe, Craig A

    2009-01-01

    INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and its utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. 
To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on

  5. Automated geographic atrophy segmentation for SD-OCT images using region-based C-V model via local similarity factor.

    Science.gov (United States)

    Niu, Sijie; de Sisternes, Luis; Chen, Qiang; Leng, Theodore; Rubin, Daniel L

    2016-02-01

Age-related macular degeneration (AMD) is the leading cause of blindness among elderly individuals. Geographic atrophy (GA) is a phenotypic manifestation of the advanced stages of non-exudative AMD. Determination of GA extent in SD-OCT scans allows the quantification of GA-related features, such as radius or area, which could be of important value to monitor AMD progression and possibly identify regions of future GA involvement. The purpose of this work is to develop an automated algorithm to segment GA regions in SD-OCT images. An en face GA fundus image is generated by averaging the axial intensity within an automatically detected sub-volume of the three-dimensional SD-OCT data, where an initial coarse GA region is estimated by an iterative threshold segmentation method and an intensity profile set, and subsequently refined by a region-based Chan-Vese model with a local similarity factor. Two image data sets, consisting of 55 SD-OCT scans from twelve eyes in eight patients with GA and 56 SD-OCT scans from 56 eyes in 56 patients with GA, respectively, were utilized to quantitatively evaluate the automated segmentation algorithm. We compared results obtained by the proposed algorithm, manual segmentation by graders, a previously proposed method, and experimental commercial software. When compared to a manually determined gold standard, our algorithm presented a mean overlap ratio (OR) of 81.86% and 70% for the first and second data sets, respectively, while the OR of the previously proposed method was 72.60% and 65.88% for the first and second data sets, respectively, and the OR of the experimental commercial software was 62.40% for the second data set.
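The mean overlap ratio (OR) used above to score segmentations against a manual gold standard is commonly the Jaccard index |A ∩ B| / |A ∪ B|. The paper's exact definition may differ, so treat the function below as an illustrative metric only.

```python
# Overlap ratio between two binary segmentations, expressed here as sets of
# labeled pixel coordinates. Two empty segmentations are taken as identical.

def overlap_ratio(seg_a, seg_b):
    """seg_a, seg_b: sets of (row, col) pixels labeled as GA."""
    if not seg_a and not seg_b:
        return 1.0
    return len(seg_a & seg_b) / len(seg_a | seg_b)

auto = {(0, 0), (0, 1), (1, 0)}     # toy automated segmentation
manual = {(0, 0), (0, 1), (1, 1)}   # toy manual gold standard
print(overlap_ratio(auto, manual))  # 2 shared pixels out of 4 total -> 0.5
```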

  6. Morphology transition of raft-model membrane induced by osmotic pressure: Formation of double-layered vesicle similar to an endo- and/or exocytosis

    International Nuclear Information System (INIS)

    Onai, Teruaki; Hirai, Mitsuhiro

    2010-01-01

The effect of osmotic pressure on the structure of large uni-lamellar vesicles (LUV) of lipid mixtures of monosialoganglioside (GM1), cholesterol, and dioleoyl-phosphatidylcholine (DOPC) was studied using the wide-angle X-ray scattering (WAXS) method. The molar ratios of the mixtures were 0.1/0.1/1, 0/0.1/1, and 0/0/1. The ternary lipid mixture is a model of lipid rafts. The osmotic pressure was varied from 0 to 4.16x10^5 N/m^2 by adding polyvinylpyrrolidone (PVP) in the range from 0 to 25% w/v. In the case of the mixtures without GM1, the rise in osmotic pressure simply enhances multi-lamellar stacking while decreasing the inter-lamellar spacing. On the other hand, the mixture containing GM1 shows a structural transition from a uni-lamellar vesicle to a double-layered vesicle (a liposome including a smaller one inside) as the osmotic pressure rises. In this morphology transition the total surface area of the double-layered vesicle is almost the same as that of the LUV in its initial state. The polar head region of GM1 is bulky and highly hydrophilic due to the oligosaccharide chain containing a sialic acid residue. The present results therefore suggest that the presence of GM1 in the outer leaflet of the LUV is essential for such double-layered vesicle formation. In other words, a phenomenon similar to endo- and/or exocytosis in cells can be caused simply by a variation of osmotic pressure.

  7. Improvement of snowpack simulations in a regional climate model

    Energy Technology Data Exchange (ETDEWEB)

    Jin, J.; Miller, N.L.

    2011-01-10

To improve simulations of regional-scale snow processes and related cold-season hydroclimate, the Community Land Model version 3 (CLM3), developed by the National Center for Atmospheric Research (NCAR), was coupled with the Pennsylvania State University/NCAR fifth-generation Mesoscale Model (MM5). CLM3 physically describes the mass and heat transfer within the snowpack using five snow layers that include liquid water and solid ice. The coupled MM5–CLM3 model performance was evaluated for the snowmelt season in the Columbia River Basin in the Pacific Northwestern United States using gridded temperature and precipitation observations, along with station observations. The results from MM5–CLM3 show a significant improvement in the snow water equivalent (SWE) simulation, which has been underestimated in the original version of MM5 coupled with the Noah land-surface model. One important cause of the underestimated SWE in Noah is its unrealistic land-surface structure configuration, in which vegetation, snow and the topsoil layer are blended when snow is present. This study demonstrates the importance of the sheltering effects of the forest canopy on snow surface energy budgets, which is included in CLM3. Such effects are further seen in the simulations of surface air temperature and precipitation in regional weather and climate models such as MM5. In addition, the snow-season surface albedo overestimated by MM5–Noah is now more accurately predicted by MM5–CLM3 using a more realistic albedo algorithm that intensifies the solar radiation absorption on the land surface, reducing the strong near-surface cold bias in MM5–Noah. The cold bias is further alleviated due to a slower snowmelt rate in MM5–CLM3 during the early snowmelt stage, which is closer to observations than the comparable components of MM5–Noah. In addition, the over-predicted precipitation in the Pacific Northwest as shown in MM5–Noah is significantly decreased in MM5–CLM3 due to the lower evaporation resulting from the

  8. Improvements to TRAC models of condensing stratified flow. Pt. 1

    International Nuclear Information System (INIS)

    Zhang, Q.; Leslie, D.C.

    1991-12-01

Direct contact condensation in stratified flow is an important phenomenon in LOCA analyses. In this report, the TRAC interfacial heat transfer model for stratified condensing flow has been assessed against the Bankoff experiments. A rectangular channel option has been added to the code to represent the experimental geometry. In almost all cases the TRAC heat transfer coefficient (HTC) over-predicts the condensation rates, and in some cases it is so high that the predicted steam is sucked in from the normal outlet in order to conserve mass. Based on their cocurrent and countercurrent condensing flow experiments, Bankoff and his students (Lim 1981, Kim 1985) developed HTC models for the two cases. The replacement of the TRAC HTC with either of Bankoff's models greatly improves the predictions of condensation rates in the experiment with cocurrent condensing flow. However, the Bankoff HTC for countercurrent flow is preferable because it is based only on local quantities rather than on quantities averaged from the inlet. (author)

  9. An improved gravity model for Mars: Goddard Mars Model-1 (GMM-1)

    Science.gov (United States)

    Smith, D. E.; Lerch, F. J.; Nerem, R. S.; Zuber, M. T.; Patel, G. B.; Fricke, S. K.; Lemoine, F. G.

    1993-01-01

Doppler tracking data of three orbiting spacecraft have been reanalyzed to develop a new gravitational field model for the planet Mars, GMM-1 (Goddard Mars Model-1). This model employs nearly all available data, consisting of approximately 1100 days of S-band tracking data collected by NASA's Deep Space Network from the Mariner 9, and Viking 1 and Viking 2 spacecraft, in seven different orbits, between 1971 and 1979. GMM-1 is complete to spherical harmonic degree and order 50, which corresponds to a half-wavelength spatial resolution of 200-300 km where the data permit. GMM-1 represents satellite orbits with considerably better accuracy than previous Mars gravity models and shows greater resolution of identifiable geological structures. The notable improvement in GMM-1 over previous models is a consequence of several factors: improved computational capabilities, the use of optimum weighting and least-squares collocation solution techniques which stabilized the behavior of the solution at high degree and order, and the use of longer satellite arcs than employed in previous solutions that were made possible by improved force and measurement models. The inclusion of X-band tracking data from the 379-km altitude, near-polar orbiting Mars Observer spacecraft should provide a significant improvement over GMM-1, particularly at high latitudes where current data poorly resolves the gravitational signature of the planet.
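The quoted correspondence between degree-and-order 50 and a 200-300 km resolution follows from the standard half-wavelength rule for a spherical harmonic expansion, lambda/2 ≈ pi * R / l_max. The Mars radius used below is a standard reference value, not taken from the abstract.

```python
import math

# Half-wavelength spatial resolution of a spherical harmonic field model
# truncated at degree l_max, on a body of the given mean radius.

def half_wavelength_km(radius_km, degree):
    return math.pi * radius_km / degree

res = half_wavelength_km(3389.5, 50)   # Mars mean radius ~3389.5 km (assumed)
print(round(res))                      # ~213 km, consistent with "200-300 km"
```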

  10. Improving Frozen Precipitation Density Estimation in Land Surface Modeling

    Science.gov (United States)

    Sparrow, K.; Fall, G. M.

    2017-12-01

The Office of Water Prediction (OWP) produces high-value water supply and flood risk planning information through the use of operational land surface modeling. Improvements in diagnosing frozen precipitation density will benefit the NWS's meteorological and hydrological services by refining estimates of a significant and vital input into land surface models. A current common practice for handling the density of snow accumulation in a land surface model is to use a standard 10:1 snow-to-liquid-equivalent ratio (SLR). Our research findings suggest the possibility of a more skillful approach for assessing the spatial variability of precipitation density. We developed a 30-year SLR climatology for the coterminous US from version 3.22 of the Global Historical Climatology Network - Daily (GHCN-D) dataset. Our methods followed the approach described by Baxter (2005) to estimate mean climatological SLR values at GHCN-D sites in the US, Canada, and Mexico for the years 1986-2015. In addition to the Baxter criteria, the following refinements were made: tests were performed to eliminate SLR outliers and frequent reports of SLR = 10, a linear SLR vs. elevation trend was fitted to station SLR mean values to remove the elevation trend from the data, and detrended SLR residuals were interpolated using ordinary kriging with a spherical semivariogram model. The elevation values of each station were based on the GMTED 2010 digital elevation model and the elevation trend in the data was established via linear least squares approximation. The ordinary kriging procedure was used to interpolate the data into gridded climatological SLR estimates for each calendar month at a 0.125 degree resolution. To assess the skill of this climatology, we compared estimates from our SLR climatology with observations from the GHCN-D dataset to consider the potential use of this climatology as a first guess of frozen precipitation density in an operational land surface model. The difference in
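The elevation-detrending step described above can be sketched as an ordinary least-squares fit of SLR against station elevation, with the residuals then passed to the kriging interpolation. The station values below are fabricated for illustration; the kriging step itself is omitted.

```python
# Fit SLR = a + b * elevation by linear least squares and detrend the
# station means, as in the abstract's refinement of the Baxter method.

def fit_line(zs, ys):
    n = len(zs)
    zbar = sum(zs) / n
    ybar = sum(ys) / n
    b = sum((z - zbar) * (y - ybar) for z, y in zip(zs, ys)) / \
        sum((z - zbar) ** 2 for z in zs)
    a = ybar - b * zbar
    return a, b

elev = [100.0, 500.0, 1000.0, 2000.0]   # fabricated station elevations (m)
slr = [8.0, 10.0, 12.5, 17.5]           # fabricated mean snow-to-liquid ratios
a, b = fit_line(elev, slr)
residuals = [y - (a + b * z) for z, y in zip(elev, slr)]
# residuals are ~0 here only because the toy data were made exactly linear;
# real station residuals would carry the spatial signal to be kriged
```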

  11. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    Science.gov (United States)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are used and documented in hundreds of peer-reviewed publications each year, and likely applied even more widely in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions, on model outcomes and thus, ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  12. Drought, Fire and Insects in Western US Forests: Observations to Improve Regional Land System Modeling

    Science.gov (United States)

    Law, B. E.; Yang, Z.; Berner, L. T.; Hicke, J. A.; Buotte, P.; Hudiburg, T. W.

    2015-12-01

    Drought, fire and insects are major disturbances in the western US, and conditions are expected to get warmer and drier in the future. We combine multi-scale observations and modeling with CLM4.5 to examine the effects of these disturbances on forests in the western US. We modified the Community Land Model, CLM4.5, to improve simulated drought-related mortality in forests, and prediction of insect outbreaks under future climate conditions. We examined differences in plant traits that represent species variation in sensitivity to drought, and redefined plant groupings in PFTs. Plant traits, including sapwood area: leaf area ratio and stemwood density were strongly correlated with water availability during the ecohydrologic year. Our database of co-located observations of traits for 30 tree species was used to produce parameterization of the model by species groupings according to similar traits. Burn area predicted by the new fire model in CLM4.5 compares well with recent years of GFED data, but has a positive bias compared with Landsat-based MTBS. Biomass mortality over recent decades increased, and was captured well by the model in general, but missed mortality trends of some species. Comparisons with AmeriFlux data showed that the model with dynamic tree mortality only (no species trait improvements) overestimated GPP in dry years compared with flux data at semi-arid sites, and underestimated GPP at more mesic sites that experience dry summers. Simulations with both dynamic tree mortality and species trait parameters improved estimates of GPP by 17-22%; differences between predicted and observed NEE were larger. Future projections show higher productivity from increased atmospheric CO2 and warming that somewhat offsets drought and fire effects over the next few decades. Challenges include representation of hydraulic failure in models, and availability of species trait and carbon/water process data in disturbance- and drought-impacted regions.

  13. Improved SVR Model for Multi-Layer Buildup Factor Calculation

    International Nuclear Information System (INIS)

    Trontl, K.; Pevec, D.; Smuc, T.

    2006-01-01

The accuracy of the point kernel method applied in gamma ray dose rate calculations in shielding design and radiation safety analysis is limited by the accuracy of the buildup factors used in the calculations. Although buildup factors for single-layer shields are well defined and understood, buildup factors for stratified shields represent a complex physical problem that is hard to express in mathematical terms. The traditional approach for expressing buildup factors of multi-layer shields is through semi-empirical formulas obtained by fitting the results of transport theory or Monte Carlo calculations. Such an approach requires an ad-hoc definition of the fitting function and often results in numerous, usually inadequately explained and defined, correction factors added to the final empirical formula. Moreover, the resulting formulas are generally limited to a small number of predefined combinations of materials within a relatively small range of gamma ray energies and shield thicknesses. Recently, a new approach has been suggested by the authors involving one of the machine learning techniques, Support Vector Machines, i.e., Support Vector Regression (SVR). Preliminary investigations performed for double-layer shields revealed the great potential of the method, but also pointed out some drawbacks of the developed model, mostly related to the selection of one of the parameters describing the problem (material atomic number), and to the way the model was designed to evolve during the learning process. It is the aim of this paper to introduce a new parameter (single-material buildup factor) that is to replace the existing material atomic number as an input parameter. A comparison of the two models generated by the different input parameters has been performed. The second goal is to improve the evolution process of learning, i.e., the experimental computational procedure that provides a framework for automated construction of complex regression models of predefined
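The key modeling change above is to the input parameterization: the material's atomic number is replaced by its single-material buildup factor as a feature. The sketch below illustrates that feature choice with a two-point RBF kernel ridge fit standing in for the authors' SVR machinery; every numeric value is fabricated for illustration only.

```python
import math

# Toy kernel regression over [shield thickness (mfp), single-material buildup
# factor] features. Kernel ridge regression is used here as a lightweight
# stand-in for SVR; it shares the kernel-expansion form of the prediction.

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

X = [[1.0, 2.1], [3.0, 6.5]]   # fabricated feature vectors
t = [2.3, 7.1]                 # fabricated multi-layer buildup factors
lam = 1e-9                     # tiny ridge term for numerical stability

# solve the 2x2 system (K + lam*I) alpha = t in closed form
k01 = rbf(X[0], X[1])
a11 = 1.0 + lam
det = a11 * a11 - k01 * k01
alpha = [(a11 * t[0] - k01 * t[1]) / det,
         (a11 * t[1] - k01 * t[0]) / det]

def predict(x_new):
    return sum(a * rbf(x_new, xi) for a, xi in zip(alpha, X))

# interpolates the training data; predictions between points blend smoothly
```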

  14. Gravity model improvement using GEOS-3 (GEM 9 and 10)

    Science.gov (United States)

    Lerch, F. J.; Klosko, S. M.; Laubscher, R. E.; Wagner, C. A.

    1977-01-01

The use of collocation permitted GEM 9 to be a larger field than previously derived satellite models, GEM 9 having harmonics complete to 20 x 20 with selected higher degree terms. The satellite data set has approximately 840,000 observations, of which 200,000 are laser ranges taken on 9 satellites equipped with retroreflectors. GEM 10 is complete to 22 x 22 with selected higher degree terms out to degree and order 30, amounting to a total of 592 coefficients. Comparisons with surface gravity and altimeter data indicate a substantial improvement in GEM 9 over previous satellite solutions; GEM 9 is in even closer agreement with surface data than the previously published GEM 6 solution, which contained surface gravity. In particular, the free air gravity anomalies calculated from GEM 9 and a surface gravity solution are in excellent agreement for the high degree terms.

  15. Wrong, but useful: regional species distribution models may not be improved by range-wide data under biased sampling.

    Science.gov (United States)

    El-Gabbas, Ahmed; Dormann, Carsten F

    2018-02-01

Species distribution modeling (SDM) is an essential method in ecology and conservation. SDMs are often calibrated within one country's borders, typically along a limited environmental gradient with biased and incomplete data, making the quality of these models questionable. In this study, we evaluated how adequate national presence-only data are for calibrating regional SDMs. We trained SDMs for Egyptian bat species at two different scales: only within Egypt and at a species-specific global extent. We used two modeling algorithms: Maxent and elastic net, both under the point-process modeling framework. For each modeling algorithm, we measured the congruence of the predictions of global and regional models for Egypt, assuming that the lower the congruence, the lower the appropriateness of the Egyptian dataset to describe the species' niche. We inspected the effect of incorporating predictions from global models as an additional predictor ("prior") in regional models, and quantified the improvement in terms of AUC and the congruence between regional models run with and without priors. Moreover, we analyzed predictive performance improvements after correction for sampling bias at both scales. On average, predictions from global and regional models in Egypt only weakly concur. Collectively, the use of priors did not lead to much improvement: similar AUC and high congruence between regional models calibrated with and without priors. Correction for sampling bias led to higher model performance, whatever prior was used, making the effect of priors less pronounced. Under biased and incomplete sampling, the use of global bat data did not improve regional model performance. Without enough bias-free regional data, we cannot objectively identify the actual improvement of regional models after incorporating information from the global niche. However, we still believe in great potential for global model predictions to guide future surveys and improve regional sampling in data
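AUC, the performance measure quoted above, is the probability that a randomly chosen presence site receives a higher model score than a randomly chosen background site (the Mann-Whitney interpretation). A minimal pairwise implementation, with made-up scores:

```python
# AUC by direct pairwise comparison: count the fraction of (presence,
# background) score pairs ranked correctly, with ties counted as half.

def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # 8 of 9 pairs correct -> 0.888...
```

The O(n*m) pairwise loop is fine for illustration; production code would use a rank-based formula instead.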

  16. COUNTERCURRENT FLOW LIMITATION EXPERIMENTS AND MODELING FOR IMPROVED REACTOR SAFETY

    International Nuclear Information System (INIS)

    Vierow, Karen

    2008-01-01

This project is investigating countercurrent flow and 'flooding' phenomena in light water reactor systems to improve the safety of current and future reactors. To better understand the occurrence of flooding in the surge line geometry of a PWR, two experimental programs were performed. In the first, a test facility with an acrylic test section provided visual data on flooding for air-water systems in large diameter tubes. This test section also allowed for development of techniques to form an annular liquid film along the inner surface of the 'surge line' and other techniques which would be difficult to verify in an opaque test section. Based on experiences in the air-water testing and the improved understanding of flooding phenomena, two series of tests were conducted in a large-diameter, stainless steel test section. Air-water test results and steam-water test results were directly compared to note the effect of condensation. Results indicate that, as for smaller diameter tubes, the flooding phenomenon is predominantly driven by the hydrodynamics. Tests with the test sections inclined were attempted, but the annular film was easily disrupted. A theoretical model for steam venting from inclined tubes is proposed herein and validated against air-water data. Empirical correlations were proposed for the air-water and steam-water data. Methods for developing analytical models of the air-water and steam-water systems are discussed, as is the applicability of the current data to surge line conditions. This report documents the project results from July 1, 2005 through June 30, 2008
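The report's own empirical correlations are not given in this abstract, but flooding correlations for tubes are conventionally cast in the Wallis form sqrt(jg*) + m*sqrt(jf*) = C over dimensionless superficial velocities. The sketch below uses that classic form with typical illustrative constants (m = 1, C = 0.725) and air-water properties; none of these numbers come from the report.

```python
import math

# Wallis dimensionless superficial velocity for phase k:
#   jk* = jk * sqrt(rho_k / (g * D * (rho_f - rho_g)))

def j_star(j, rho_k, rho_f, rho_g, d, g=9.81):
    return j * math.sqrt(rho_k / (g * d * (rho_f - rho_g)))

def flooding_gas_velocity(j_f, rho_f, rho_g, d, m=1.0, c=0.725, g=9.81):
    """Gas superficial velocity (m/s) at the flooding point for a given
    liquid superficial velocity j_f, from sqrt(jg*) + m*sqrt(jf*) = c."""
    jf_star = j_star(j_f, rho_f, rho_f, rho_g, d, g)
    root = max(c - m * math.sqrt(jf_star), 0.0)
    jg_star = root ** 2
    return jg_star / math.sqrt(rho_g / (g * d * (rho_f - rho_g)))

# example: air-water in a 5 cm tube; more liquid downflow lowers the
# gas velocity needed to trigger flooding
print(flooding_gas_velocity(0.0, 998.0, 1.2, 0.05))
```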

  17. Improved Algorithm of SCS-CN Model Parameters in Typical Inland River Basin in Central Asia

    Science.gov (United States)

    Wang, Jin J.; Ding, Jian L.; Zhang, Zhe; Chen, Wen Q.

    2017-02-01

The rainfall-runoff relationship is a key factor for hydrological structures and for social and economic development against the background of global warming, especially in arid regions. The aim of this paper is to find a suitable method to simulate runoff in arid areas. The Soil Conservation Service Curve Number (SCS-CN) model is the most popular and widely applied model for direct runoff estimation. In this paper, we focus on the Wen-quan Basin in the source region of the Boertala River, a typical inland valley in Central Asia. For the first time, 16 m resolution imagery from the high-definition earth observation satellite "Gaofen-1" is used to provide highly accurate data for the land use classification that determines the curve number. A two-dimensional scatter plot of surface temperature versus vegetation index (TS/VI), combined with the soil moisture absorption balance principle, is used to calculate the moisture-holding capacity of the soil. The original and the parameter-improved SCS-CN models are then used to simulate runoff. The simulation results show that the improved model performs better than the original one: Nash-Sutcliffe efficiencies in the calibration and validation periods were 0.79 and 0.71 versus 0.66 and 0.38, and relative errors were 3% and 12% versus 17% and 27%. The results show that simulation accuracy can be further improved, and that using remote sensing information technology to improve the basic geographic data for a hydrological model has the following advantages: 1) remote sensing data are spatially distributed, comprehensive, and representative; 2) it circumvents the bottleneck of data scarcity, providing a reference for runoff simulation in basins with similar conditions and in data-lacking regions.
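The SCS-CN direct-runoff relation the abstract builds on has a standard closed form (SI units, with the usual initial abstraction Ia = 0.2*S): S = 25400/CN - 254 (mm) and Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else Q = 0. The abstract's improved parameterization of CN is not reproduced here; this is the textbook formula only.

```python
# SCS Curve Number direct runoff (mm) for event precipitation p_mm and a
# curve number cn in (0, 100]. ia_ratio is the initial-abstraction ratio,
# conventionally 0.2 (some refinements, like the paper's, adjust parameters).

def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    s = 25400.0 / cn - 254.0     # potential maximum retention (mm)
    ia = ia_ratio * s            # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0               # all rainfall abstracted, no direct runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(round(scs_cn_runoff(50.0, 80), 2))  # 50 mm storm, CN=80 -> ~13.8 mm
```

Higher CN (more impervious or wetter conditions) gives more runoff for the same storm, which is what the land-use classification feeds into.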

  18. SIFT Based Vein Recognition Models: Analysis and Improvement

    Directory of Open Access Journals (Sweden)

    Guoqing Wang

    2017-01-01

Full Text Available Scale-Invariant Feature Transform (SIFT) is being investigated more and more as a way to realize less-constrained hand vein recognition systems. Contrast enhancement (CE), which compensates for a deficient dynamic range, is a must for SIFT-based frameworks to improve performance. However, our experiments provide evidence of a negative influence of CE on SIFT matching. We show that the number of keypoints extracted by gradient-based detectors increases greatly under different CE methods, while the matching of the extracted invariant descriptors is negatively influenced in terms of Precision-Recall (PR) and Equal Error Rate (EER). Rigorous experiments with state-of-the-art CE methods and with those adopted in published SIFT-based hand vein recognition systems demonstrate this influence. Moreover, an improved SIFT model that imports the RootSIFT kernel and a Mirror Match Strategy into a unified framework is proposed to exploit the beneficial increase in keypoints while compensating for the negative influence brought by CE.
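The RootSIFT kernel mentioned above is a standard descriptor transform (Arandjelovic and Zisserman): L1-normalize each SIFT descriptor, then take the element-wise square root, so that Euclidean distance between transformed descriptors corresponds to the Hellinger kernel on the originals. The Mirror Match Strategy from the paper is not reproduced here.

```python
import math

# RootSIFT transform of one descriptor. SIFT descriptors are nonnegative
# histograms, so the L1 norm is just the sum; eps guards against all-zero
# descriptors. The output is L2-normalized by construction.

def root_sift(descriptor, eps=1e-12):
    s = sum(descriptor) + eps
    return [math.sqrt(v / s) for v in descriptor]

d = root_sift([4.0, 1.0, 3.0])          # toy 3-bin "descriptor"
print(sum(v * v for v in d))            # squared L2 norm -> ~1.0
```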

  19. A model to improve efficiency and effectiveness of safeguards measures

    International Nuclear Information System (INIS)

    D'Amato, Eduardo; Llacer, Carlos; Vicens, Hugo

    2001-01-01

    Full text: The main purpose of our current studies is to analyse the measures to be adopted in integrating the traditional safeguards measures with the ones stated in the Additional Protocol (AP). A simplified nuclear fuel cycle model is considered to draw some conclusions on the application of integrated safeguards measures. This paper includes a briefing describing the historical review that gave birth to the A.P. and proposes a model to help the control bodies in the decision-making process. In May 1997, the Board of Governors approved the Model Additional Protocol (MAP), which aimed at strengthening the effectiveness and improving the efficiency of safeguards measures. For States under a comprehensive safeguards agreement, the measures adopted provide credible assurance of the absence of undeclared nuclear material and activities. In September 1999, the governments of Argentina and Brazil formally announced in the Board of Governors that both countries would start preliminary consultations on an adapted MAP applied to the Agreement between the Republic of Argentina, the Federative Republic of Brazil, the Brazilian-Argentine Agency for Accounting and Control of Nuclear Materials and the International Atomic Energy Agency for the Application of Safeguards (Quatripartite Agreement/INFCIRC 435). In December 1999, a first draft of the above-mentioned document was provided as a starting point for discussion. During the year 2000 some modifications to the original draft took place. These were the initial steps in the process of reaching adequate conditions for adhering to the A.P. in each country in the future. Having in mind the future AP implementation, the safeguards officers of the Regulatory Body of Argentina (ARN) began to think about the future simultaneous application of the two types of safeguards measures, the traditional and the non-traditional ones, which should converge into an integrated system. By traditional safeguards it is understood quantitative

  20. Problem solving based learning model with multiple representations to improve student's mental modelling ability on physics

    Science.gov (United States)

    Haili, Hasnawati; Maknun, Johar; Siahaan, Parsaoran

    2017-08-01

    Physics is a subject related to students' daily experience. Therefore, before studying it formally in class, students already have visualizations and prior knowledge about natural phenomena and can extend these themselves. The learning process in class should aim to detect, process, construct, and use students' mental models, so that these mental models agree with and are built on the right concepts. A previous study held in MAN 1 Muna found that in the learning process the teacher did not pay attention to students' mental models. As a consequence, the learning process had not tried to build students' mental modelling ability (MMA). The purpose of this study is to describe the improvement of students' MMA as an effect of a problem-solving based learning model with a multiple representations approach. This study is a pre-experimental design with one group pretest-posttest, conducted in class XI IPA of MAN 1 Muna in 2016/2017. Data collection used a problem-solving test on the concept of the kinetic theory of gases and interviews to assess students' MMA. The result of this study is a classification of students' MMA into three categories: High Mental Modelling Ability (H-MMA) for scores x > 7, Medium Mental Modelling Ability (M-MMA) for 3 < x ≤ 7, and Low Mental Modelling Ability (L-MMA) for 0 ≤ x ≤ 3. The result shows that the problem-solving based learning model with a multiple representations approach can be an alternative to be applied in improving students' MMA.

  1. Improving the Ni I atomic model for solar and stellar atmospheric models

    International Nuclear Information System (INIS)

    Vieytes, M. C.; Fontenla, J. M.

    2013-01-01

    Neutral nickel (Ni I) is abundant in the solar atmosphere and is one of the important elements that contribute to the emission and absorption of radiation in the spectral range between 1900 and 3900 Å. Previously, the Solar Radiation Physical Modeling (SRPM) models of the solar atmosphere only considered a few levels of this species. Here, we improve the Ni I atomic model by taking into account 61 levels and 490 spectral lines. We compute the populations of these levels in full NLTE using the SRPM code and compare the resulting emerging spectrum with observations. The present atomic model significantly improves the calculation of the solar spectral irradiance at near-UV wavelengths, which is important for Earth atmospheric studies, and particularly for ozone chemistry.

  2. Improving the Ni I atomic model for solar and stellar atmospheric models

    Energy Technology Data Exchange (ETDEWEB)

    Vieytes, M. C. [Instituto de de Astronomía y Física del Espacio, CONICET and UNTREF, Buenos Aires (Argentina); Fontenla, J. M., E-mail: mariela@iafe.uba.ar, E-mail: johnf@digidyna.com [North West Research Associates, 3380 Mitchell Lane, Boulder, CO 80301 (United States)

    2013-06-01

    Neutral nickel (Ni I) is abundant in the solar atmosphere and is one of the important elements that contribute to the emission and absorption of radiation in the spectral range between 1900 and 3900 Å. Previously, the Solar Radiation Physical Modeling (SRPM) models of the solar atmosphere only considered a few levels of this species. Here, we improve the Ni I atomic model by taking into account 61 levels and 490 spectral lines. We compute the populations of these levels in full NLTE using the SRPM code and compare the resulting emerging spectrum with observations. The present atomic model significantly improves the calculation of the solar spectral irradiance at near-UV wavelengths, which is important for Earth atmospheric studies, and particularly for ozone chemistry.

  3. Improvement on The Ellis and Roberts Viability Model

    Directory of Open Access Journals (Sweden)

    Guoyan Zhou

    2016-05-01

    Full Text Available Using data sets of germination percentage and storage time for seed lots of wheat and sorghum stored at three different storage temperatures (t, °C) with three different seed water contents (m, %), together with data sets for buckwheat and lettuce reported in the literature, we analysed the possibility of transforming seed survival curves into lines by survival proportion, and the relationships of the logarithm of the average viability period (log p50) and the standard deviation of seed deaths distributed in time (δ) with t, m, and the interaction between t and m. The results indicated that survival proportion transformed the seed survival curve into a line much more easily than the probability adopted by Ellis and Roberts, and that the most important factor affecting log p50 and δ of a seed lot was the interaction between t and m. Thus, it is suggested that the Ellis and Roberts viability model be improved to Ki = Vi − p/10^(K−CWT(t×m)) to predict the longevity of a seed lot whose initial germination percentage is unknown, and a new model, Gi/G0 = A − P/10^(K−CWT(t×m)), was constructed to predict the longevity of a seed lot whose initial germination percentage is already known.

  4. Improvement of PSA Models Using Monitoring and Prognostics

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Gyun Young; Chang, Yoon Suk; Kim, Hyun Dae [Kyung Hee University, Yongin (Korea, Republic of)

    2014-08-15

    Probabilistic Safety Assessment (PSA) has played a significant role in quantitative decision-making by finding design and operational vulnerabilities and evaluating the cost-benefit of improving such weak points. In particular, it has been widely used as the core methodology for Risk-Informed Applications (RIAs). Even though PSA by nature seeks realistic results, there are still 'conservative' aspects. The sources of this conservatism are the assumptions of safety analysis and the estimation of failure frequency. Surveillance, Diagnosis, and Prognosis (SDP), utilizing massive databases and information technology, is worth highlighting for its capability to alleviate the conservatism in conventional PSA. This paper provides enabling techniques to concretize a method for providing time- and condition-dependent risk by integrating a conventional PSA model with condition monitoring and prognostics techniques. We discuss how to integrate the results with the frequency of initiating events (IEs) and the failure probability of basic events (BEs). Two illustrative examples are introduced: how the failure probability of a passive system can be evaluated under different plant conditions, and how the IE frequency for Steam Generator Tube Rupture (SGTR) can be updated in terms of operating time. We expect that the proposed PSA model can act as an annunciator showing the variation of Core Damage Frequency (CDF) with time and operational conditions.

  5. Improved spring model-based collaborative indoor visible light positioning

    Science.gov (United States)

    Luo, Zhijie; Zhang, WeiNan; Zhou, GuoFu

    2016-06-01

    Gaining accuracy in the indoor positioning of individuals is important, as many location-based services rely on the user's current position to provide useful services. Many researchers have studied indoor positioning techniques based on WiFi and Bluetooth; however, these have disadvantages such as low accuracy or high cost. In this paper, we propose an indoor positioning system in which visible light radiated from light-emitting diodes is used to locate the position of receivers. Compared with existing methods using light-emitting diode light, we present a high-precision, simple-to-implement collaborative indoor visible light positioning system based on an improved spring model. We first estimate the coordinate position using the visible light positioning system, and then use the spring model to correct positioning errors. The system can be deployed easily because it does not require additional sensors, and the occlusion problem of visible light is alleviated. We also describe simulation experiments, which confirm the feasibility of the proposed method.
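    The abstract does not specify the spring model itself; the following is a hypothetical sketch of the general idea, in which each LED anchor acts as a spring whose rest length is the measured distance, and the initial position estimate relaxes under the net spring force (all names, the gain k, and the iteration count are illustrative assumptions, not the paper's algorithm):

    ```python
    import math

    def spring_refine(anchors, dists, x0, y0, k=0.1, iters=500):
        """Relax an initial (x0, y0) estimate toward consistency with measured
        anchor distances. Equivalent to gradient descent on the total spring
        energy sum_i 0.5 * (|p - a_i| - d_i)^2."""
        x, y = x0, y0
        for _ in range(iters):
            fx = fy = 0.0
            for (ax, ay), d in zip(anchors, dists):
                cur = math.hypot(x - ax, y - ay) or 1e-9  # avoid divide-by-zero
                stretch = cur - d        # > 0: spring pulls estimate toward anchor
                fx -= k * stretch * (x - ax) / cur
                fy -= k * stretch * (y - ay) / cur
            x, y = x + fx, y + fy
        return x, y
    ```

    With three or more non-collinear anchors the energy has a unique minimum at the true position, so the relaxation corrects an initial estimate that is off by a bounded error.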

  6. Improved outgassing models for the Landsat-5 thematic mapper

    Science.gov (United States)

    Micijevic, E.; Chander, G.; Hayes, R.W.

    2008-01-01

    The Landsat-5 (L5) Thematic Mapper (TM) detectors of the short wave infrared (SWIR) bands 5 and 7 are maintained at cryogenic temperatures to minimize thermal noise and allow adequate detection of scene energy. Over the instrument's lifetime, gain oscillations are observed in these bands that are caused by an ice-like contaminant that gradually builds up on the window of the dewar that houses these bands' detectors. This icing process, an effect of material outgassing in space, is detected and characterized through observations of Internal Calibrator (IC) data. Analyses of IC data indicated three to five percent uncertainty in absolute gain estimates due to this icing phenomenon. The thin-film interference lifetime models implemented in the image product generation systems at the U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS) successfully remove up to 80 percent of the icing effects for the image acquisition period from the satellite's launch in 1984 until 2001; however, their correction ability was found to be much lower for the time thereafter. This study concentrates on improving the estimates of the contaminant film growth rate and the associated change in the period of gain oscillations. The goal is to provide model parameters with the potential to correct 70 to 80 percent of gain uncertainties caused by outgassing effects in L5 TM bands 5 and 7 over the instrument's entire lifetime. © 2007 IEEE.

  7. Improved data for integrated modeling of global environmental change

    Science.gov (United States)

    Lotze-Campen, Hermann

    2011-12-01

    The assessment of global environmental changes, their impact on human societies, and possible management options requires large-scale, integrated modeling efforts. These models have to link biophysical with socio-economic processes, and they have to take spatial heterogeneity of environmental conditions into account. Land use change and freshwater use are two key research areas where spatial aggregation and the use of regional average numbers may lead to biased results. Useful insights can only be obtained if processes like economic globalization can be consistently linked to local environmental conditions and resource constraints (Lambin and Meyfroidt 2011). Spatially explicit modeling of environmental changes at the global scale has a long tradition in the natural sciences (Woodward et al 1995, Alcamo et al 1996, Leemans et al 1996). Socio-economic models with comparable spatial detail, e.g. on grid-based land use change, are much less common (Heistermann et al 2006), but are increasingly being developed (Popp et al 2011, Schneider et al 2011). Spatially explicit models require spatially explicit input data, which often constrains their development and application at the global scale. The amount and quality of available data on environmental conditions is growing fast—primarily due to improved earth observation methods. Moreover, systematic efforts for collecting and linking these data across sectors are on the way (www.earthobservations.org). This has, among others, also helped to provide consistent databases on different land cover and land use types (Erb et al 2007). However, spatially explicit data on specific anthropogenic driving forces of global environmental change are still scarce—also because these cannot be collected with satellites or other devices. The basic data on socio-economic driving forces, i.e. population density and wealth (measured as gross domestic product per capita), have been prepared for spatially explicit analyses (CIESIN, IFPRI

  8. Improved Lighthill fish swimming model for bio-inspired robots - Modelling, computational aspects and experimental comparisons.

    OpenAIRE

    Porez , Mathieu; Boyer , Frédéric; Ijspeert , Auke

    2014-01-01

    The best known analytical model of swimming was originally developed by Lighthill and is known as the large amplitude elongated body theory (LAEBT). Recently, this theory has been improved and adapted to robotics through a series of studies [Boyer et al., 2008, 2010; Candelier et al., 2011] ranging from hydrodynamic modelling to mobile multibody system dynamics. This article marks a further step towards the Lighthill theory. The LAEBT is applied to one of the best bio-in...

  9. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to this large number of variables leads to a risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the set of factors required and improves knowledge of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research presents a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as permafrost training data. The FS algorithms employed indicate which variables appear statistically less important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
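    Of the three techniques compared, Information Gain has the simplest form: the reduction in class entropy after splitting on a feature. A minimal pure-Python sketch for discrete features (an illustration of the standard definition, not the authors' implementation):

    ```python
    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy (bits) of a label sequence."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def info_gain(feature, labels):
        """IG(feature) = H(labels) - sum_v p(v) * H(labels | feature = v)."""
        n = len(labels)
        remainder = 0.0
        for v in set(feature):
            subset = [lab for f, lab in zip(feature, labels) if f == v]
            remainder += (len(subset) / n) * entropy(subset)
        return entropy(labels) - remainder
    ```

    A perfectly predictive feature scores the full label entropy; an irrelevant one scores zero, which is the property the ranking-based filter exploits.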

  10. Wind scatterometry with improved ambiguity selection and rain modeling

    Science.gov (United States)

    Draper, David Willis

    Although generally accurate, the quality of SeaWinds on QuikSCAT scatterometer ocean vector winds is compromised by certain natural phenomena and retrieval algorithm limitations. This dissertation addresses three main contributors to scatterometer estimate error: poor ambiguity selection, estimate uncertainty at low wind speeds, and rain corruption. A quality assurance (QA) analysis performed on SeaWinds data suggests that about 5% of SeaWinds data contain ambiguity selection errors and that scatterometer estimation error is correlated with low wind speeds and rain events. Ambiguity selection errors are partly due to the "nudging" step (initialization from outside data). A sophisticated new non-nudging ambiguity selection approach produces generally more consistent wind than the nudging method in moderate wind conditions. The non-nudging method selects 93% of the same ambiguities as the nudged data, validating both techniques, and indicating that ambiguity selection can be accomplished without nudging. Variability at low wind speeds is analyzed using tower-mounted scatterometer data. According to theory, below a threshold wind speed, the wind fails to generate the surface roughness necessary for wind measurement. A simple analysis suggests the existence of the threshold in much of the tower-mounted scatterometer data. However, the backscatter does not "go to zero" beneath the threshold in an uncontrolled environment as theory suggests, but rather has a mean drop and higher variability below the threshold. Rain is the largest weather-related contributor to scatterometer error, affecting approximately 4% to 10% of SeaWinds data. A simple model formed via comparison of co-located TRMM PR and SeaWinds measurements characterizes the average effect of rain on SeaWinds backscatter. The model is generally accurate to within 3 dB over the tropics. The rain/wind backscatter model is used to simultaneously retrieve wind and rain from SeaWinds measurements. The simultaneous

  11. A conceptual model to improve performance in virtual teams

    Directory of Open Access Journals (Sweden)

    Shopee Dube

    2016-09-01

    Full Text Available Background: The vast improvement in communication technologies and sophisticated project management tools, methods and techniques has allowed geographically and culturally diverse groups to operate and function in a virtual environment. To succeed in this virtual environment where time and space are becoming increasingly irrelevant, organisations must define new ways of implementing initiatives. This virtual environment phenomenon has brought about the formation of virtual project teams that allow organisations to harness the skills and knowhow of the best resources, irrespective of their location. Objectives: The aim of this article was to investigate performance criteria and develop a conceptual model which can be applied to enhance the success of virtual project teams. There are no clear guidelines of the performance criteria in managing virtual project teams. Method: A qualitative research methodology was used in this article. The purpose of content analysis was to explore the literature to understand the concept of performance in virtual project teams and to summarise the findings of the literature reviewed. Results: The research identified a set of performance criteria for the virtual project teams as follows: leadership, trust, communication, team cooperation, reliability, motivation, comfort and social interaction. These were used to conceptualise the model. Conclusion: The conceptual model can be used in a holistic way to determine the overall performance of the virtual project team, but each factor can be analysed individually to determine the impact on the overall performance. The knowledge of performance criteria for virtual project teams could aid project managers in enhancing the success of these teams and taking a different approach to better manage and coordinate them.

  12. Explicit Modeling of Ancestry Improves Polygenic Risk Scores and BLUP Prediction.

    Science.gov (United States)

    Chen, Chia-Yen; Han, Jiali; Hunter, David J; Kraft, Peter; Price, Alkes L

    2015-09-01

    Polygenic prediction using genome-wide SNPs can provide high prediction accuracy for complex traits. Here, we investigate the question of how to account for genetic ancestry when conducting polygenic prediction. We show that the accuracy of polygenic prediction in structured populations may be partly due to genetic ancestry. However, we hypothesized that explicitly modeling ancestry could improve polygenic prediction accuracy. We analyzed three GWAS of hair color (HC), tanning ability (TA), and basal cell carcinoma (BCC) in European Americans (sample sizes from 7,440 to 9,822) and considered two widely used polygenic prediction approaches: polygenic risk scores (PRSs) and best linear unbiased prediction (BLUP). We compared polygenic prediction without correction for ancestry to polygenic prediction with ancestry as a separate component in the model. In 10-fold cross-validation using the PRS approach, the R² for HC increased by 66% (from 0.0456 to 0.0755) with explicit modeling of ancestry, which prevents ancestry effects from entering into each SNP effect and being overweighted. Surprisingly, explicitly modeling ancestry produces a similar improvement when using the BLUP approach, which fits all SNPs simultaneously in a single variance component and causes ancestry to be underweighted. We validate our findings via simulations, which show that the differences in prediction accuracy will increase in magnitude as sample sizes increase. In summary, our results show that explicitly modeling ancestry can be important in both PRS and BLUP prediction. © 2015 WILEY PERIODICALS, INC.
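    The "ancestry as a separate component" idea can be illustrated by regressing the phenotype on the PRS and the ancestry principal components jointly, so that ancestry has its own coefficient instead of leaking into the score. A hypothetical least-squares sketch (the function name and setup are illustrative, not the paper's pipeline):

    ```python
    import numpy as np

    def prs_with_ancestry(prs, pcs, y):
        """Fit y ~ 1 + PRS + ancestry PCs by ordinary least squares.
        Returns the coefficient vector and the in-sample R^2."""
        prs = np.asarray(prs, float)
        y = np.asarray(y, float)
        pcs = np.atleast_2d(np.asarray(pcs, float))
        if pcs.shape[0] != len(y):          # accept PCs as a 1-D vector too
            pcs = pcs.T
        X = np.column_stack([np.ones(len(y)), prs, pcs])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        r2 = 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
        return beta, r2
    ```

    In the paper's cross-validation setting the model would be fit on training folds only; this sketch shows the joint fit itself.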

  13. Improving Estimated Optical Constants With MSTM and DDSCAT Modeling

    Science.gov (United States)

    Pitman, K. M.; Wolff, M. J.

    2015-12-01

    We present numerical experiments to determine quantitatively the effects of mineral particle clustering on Mars spacecraft spectral signatures and to improve upon the values of refractive indices (optical constants n, k) derived from Mars dust laboratory analog spectra such as those from RELAB and MRO CRISM libraries. Whereas spectral properties for Mars analog minerals and actual Mars soil are dominated by aggregates of particles smaller than the size of martian atmospheric dust, the analytic radiative transfer (RT) solutions used to interpret planetary surfaces assume that individual, well-separated particles dominate the spectral signature. Both in RT models and in the refractive index derivation methods that include analytic RT approximations, spheres are also over-used to represent nonspherical particles. Part of the motivation is that the integrated effect over randomly oriented particles on quantities such as single scattering albedo and phase function are relatively less than for single particles. However, we have seen in previous numerical experiments that when varying the shape and size of individual grains within a cluster, the phase function changes in both magnitude and slope, thus the "relatively less" effect is more significant than one might think. Here we examine the wavelength dependence of the forward scattering parameter with multisphere T-matrix (MSTM) and discrete dipole approximation (DDSCAT) codes that compute light scattering by layers of particles on planetary surfaces to see how albedo is affected and integrate our model results into refractive index calculations to remove uncertainties in approximations and parameters that can lower the accuracy of optical constants. By correcting the single scattering albedo and phase function terms in the refractive index determinations, our data will help to improve the understanding of Mars in identifying, mapping the distributions, and quantifying abundances for these minerals and will address long

  14. Similar or different?

    DEFF Research Database (Denmark)

    Cornér, Solveig; Pyhältö, Kirsi; Peltonen, Jouni

    2018-01-01

    Previous research has identified researcher community and supervisory support as key determinants of the doctoral journey contributing to students' persistence and robustness. However, we still know little about cross-cultural variation in the researcher community and supervisory support experienced by PhD students within the same discipline. This study explores the support experiences of 381 PhD students within the humanities and social sciences from three research-intensive universities in Denmark (n=145) and Finland (n=236). A mixed methods design was utilized. The data were collected ... The results indicated that the only form of support in which the students expressed more matched than mismatched support was informational support. Further investigation showed that the Danish students reported a higher level of mismatch in emotional support than their Finnish counterparts, whereas the Finnish students perceived lower levels of instrumental support than the Danish students. The findings imply that seemingly similar contexts hold valid differences in experienced social support and educational strategies at the PhD level.

  15. Similarity solution and Runge Kutta method to a thermal boundary layer model at the entrance region of a circular tube: The Lévêque Approximation

    Directory of Open Access Journals (Sweden)

    Ali Belhocine

    2018-01-01

    Full Text Available In the thermal entrance region, a thermal boundary layer develops and eventually reaches the centre of the circular tube. The fully developed region is the zone in which the flow is both hydrodynamically and thermally developed. The heat flux is higher near the inlet, because the heat transfer coefficient is highest at the tube inlet, where the thickness of the thermal boundary layer is zero, and it decreases gradually to the fully developed value. In this paper, the assumptions implicit in Lévêque's approximation are re-examined, and an analytical solution of the problem with additional boundary conditions, for the temperature field and the boundary layer thickness along the tube, is presented. By defining a similarity variable, the governing equations are reduced to a dimensionless equation with an analytic solution in the entrance region. This report justifies the similarity variable via scaling analysis, details the process of converting to a similarity form, and presents the similarity solution. The analytical solutions are then checked against a numerical solution programmed in Fortran and obtained with the fourth-order Runge-Kutta (RK4) method. Finally, other important thermal results obtained from this analysis, such as the approximate Nusselt number in the thermal entrance region, are discussed in detail.
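    The RK4 check described above can be reproduced in miniature. For the classical Lévêque problem the similarity equation is θ'' + 3η²θ' = 0 with θ(0) = 1 and θ(∞) = 0, whose exact wall slope is θ'(0) = −1/Γ(4/3). The sketch below integrates this textbook form with classical RK4 in Python; it is not necessarily the exact system solved in the paper's Fortran code:

    ```python
    import math

    def leveque_theta(eta_max=3.0, h=0.01):
        """Integrate theta'' + 3*eta^2*theta' = 0 from eta = 0 to eta_max with
        classical RK4, starting from the exact wall slope -1/Gamma(4/3).
        Returns theta(eta_max), which should approach 0."""
        def f(eta, y):
            theta, dtheta = y
            return (dtheta, -3.0 * eta ** 2 * dtheta)

        y = (1.0, -1.0 / math.gamma(4.0 / 3.0))
        eta, n = 0.0, round(eta_max / h)
        for _ in range(n):
            k1 = f(eta, y)
            k2 = f(eta + h / 2, tuple(v + h / 2 * k for v, k in zip(y, k1)))
            k3 = f(eta + h / 2, tuple(v + h / 2 * k for v, k in zip(y, k2)))
            k4 = f(eta + h, tuple(v + h * k for v, k in zip(y, k3)))
            y = tuple(v + h / 6 * (a + 2 * b + 2 * c + d)
                      for v, a, b, c, d in zip(y, k1, k2, k3, k4))
            eta += h
        return y[0]
    ```

    Recovering θ ≈ 0 at the outer edge of the similarity coordinate confirms that the chosen wall slope satisfies the far-field boundary condition, which is exactly the consistency check a shooting-style RK4 solution provides.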

  16. Making change last: applying the NHS institute for innovation and improvement sustainability model to healthcare improvement.

    Science.gov (United States)

    Doyle, Cathal; Howe, Cathy; Woodcock, Thomas; Myron, Rowan; Phekoo, Karen; McNicholas, Chris; Saffer, Jessica; Bell, Derek

    2013-10-26

    The implementation of evidence-based treatments to deliver high-quality care is essential to meet the healthcare demands of aging populations. However, the sustainable application of recommended practice is difficult to achieve and variable outcomes well recognised. The NHS Institute for Innovation and Improvement Sustainability Model (SM) was designed to help healthcare teams recognise determinants of sustainability and take action to embed new practice in routine care. This article describes a formative evaluation of the application of the SM by the National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care for Northwest London (CLAHRC NWL). Data from project teams' responses to the SM and formal reviews was used to assess acceptability of the SM and the extent to which it prompted teams to take action. Projects were classified as 'engaged,' 'partially engaged' and 'non-engaged.' Quarterly survey feedback data was used to explore reasons for variation in engagement. Score patterns were compared against formal review data and a 'diversity of opinion' measure was derived to assess response variance over time. Of the 19 teams, six were categorized as 'engaged,' six 'partially engaged,' and seven as 'non-engaged.' Twelve teams found the model acceptable to some extent. Diversity of opinion reduced over time. A minority of teams used the SM consistently to take action to promote sustainability but for the majority SM use was sporadic. Feedback from some team members indicates difficulty in understanding and applying the model and negative views regarding its usefulness. The SM is an important attempt to enable teams to systematically consider determinants of sustainability, provide timely data to assess progress, and prompt action to create conditions for sustained practice. Tools such as these need to be tested in healthcare settings to assess strengths and weaknesses and findings disseminated to aid development. This

  17. On-Line Core Thermal-Hydraulic Model Improvement

    International Nuclear Information System (INIS)

    In, Wang Kee; Chun, Tae Hyun; Oh, Dong Seok; Shin, Chang Hwan; Hwang, Dae Hyun; Seo, Kyung Won

    2007-02-01

    The objective of this project is to implement CETOP-D, a fast-running 4-channel based code, in an advanced reactor core protection calculator system (RCOPS). The parts required for the on-line calculation of DNBR were extracted from the source of the CETOP-D code based on an analysis of the code. The CETOP-D code was revised to keep the input and output variables the same as in the CPC DNBR module. Since the DNBR module performs a complex calculation, it is divided into sub-modules per major calculation step. The functional design requirements for the DNBR module are documented and the values of the database (DB) constants were decided. This project also developed a Fortran module (BEST) of the RCOPS Fortran Simulator and a computer code, RCOPS-SDNBR, to independently calculate DNBR. A test was also conducted to verify the functional design and DB of the thermal-hydraulic model, which is necessary to calculate the DNBR on-line in RCOPS. The DNBR margin is expected to increase by 2%-3% once the CETOP-D code is used to calculate the RCOPS DNBR. It should be noted that the final DNBR margin improvement can only be determined in the future, based on an overall uncertainty analysis of the RCOPS.

  18. Applying Quality Function Deployment Model in Burn Unit Service Improvement.

    Science.gov (United States)

    Keshtkaran, Ali; Hashemi, Neda; Kharazmi, Erfan; Abbasi, Mehdi

    2016-01-01

    Quality function deployment (QFD) is one of the most effective quality design tools. This study applies the QFD technique to improve the quality of burn unit services in Ghotbedin Hospital in Shiraz, Iran. First, the patients' expectations of burn unit services and their priorities were determined through the Delphi method. Thereafter, burn unit service specifications were determined, also through the Delphi method. Further, the relationships between the patients' expectations and the service specifications, as well as the relationships among service specifications, were determined through an expert group's opinion. Finally, the final importance scores of the service specifications were calculated through the simple additive weighting method. The findings show that burn unit patients have 40 expectations in six different areas. These expectations fall into 16 priority levels. Burn units also have 45 service specifications in six different areas. There are four-level relationships between the patients' expectations and the service specifications, and four-level relationships between service specifications. The most important burn unit service specifications have been identified in this study. The QFD model developed in the study can serve as a general guideline for QFD planners and executives.
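    The simple additive weighting step is straightforward: normalize the expectation weights, then score each service specification as the weighted sum of its relationship ratings. A minimal sketch of the generic SAW computation (names and shapes are illustrative, not the study's data):

    ```python
    def saw_scores(weights, ratings):
        """Simple additive weighting.
        weights: importance of each expectation i.
        ratings: ratings[i][j] = strength of the relation between expectation i
                 and service specification j.
        Returns one importance score per specification."""
        total = float(sum(weights))
        w = [wi / total for wi in weights]          # normalize weights to sum to 1
        n_spec = len(ratings[0])
        return [sum(w[i] * ratings[i][j] for i in range(len(w)))
                for j in range(n_spec)]
    ```

    Specifications can then be ranked by score to identify the most important ones, which is how the final prioritization in a QFD house-of-quality is typically read off.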

  19. Improving the representation of soluble iron in climate models

    Energy Technology Data Exchange (ETDEWEB)

    Mahowald, Natalie [Cornell Univ., Ithaca, NY (United States)

    2016-11-29

    Funding from this grant supported Rachel Scanza, Yan Zhang and, partially, Samuel Albani. Substantial progress has been made on the inclusion of mineralogy, showing the quality of the simulations and the impact on radiation in CAM4 and CAM5 (Scanza et al., 2015). In addition, the elemental distribution has been evaluated (partially supported by this grant) (Zhang et al., 2015), showing that, using spatial distributions of mineralogy, improved representation of Fe, Ca and Al is possible, compared to the limited available data. A new intermediate-complexity soluble iron scheme was implemented in the Bulk Aerosol Model (BAM), which was completed as part of Rachel Scanza's PhD thesis. Currently Rachel is writing up at least two first-author papers describing the general methods and comparison to observations (Scanza et al., in prep.), as well as papers describing the sensitivity to preindustrial conditions and interannual variability. This work led to the lead PI being asked to write a commentary in Nature (Mahowald, 2013) and two review papers (Mahowald et al., 2014; Mahowald et al., submitted) and contributed to related papers (Albani et al., 2016; Albani et al., 2014; Albani et al., 2015).

  20. Biomechanical modelling and evaluation of construction jobs for performance improvement.

    Science.gov (United States)

    Parida, Ratri; Ray, Pradip Kumar

    2012-01-01

    Occupational risk factors related to construction MMH activities, such as awkward posture, repetition, lack of rest, insufficient illumination and heavy workload, may cause musculoskeletal disorders and poor performance of the workers. Ergonomic design of construction worksystems was therefore a critical need for improving workers' health and safety, for which dynamic biomechanical models were required to be empirically developed and tested at a construction site of Tata Steel, the largest private-sector steel-making company of India. In this study, a comprehensive framework is proposed for biomechanical evaluation of shovelling and grinding under diverse work environments. The benefit of such an analysis lies in its usefulness in setting guidelines for designing such jobs to minimize the risks of musculoskeletal disorders (MSDs) and in enhancing correct methods of carrying out the jobs, leading to reduced fatigue and physical stress. Data based on direct observations and videography were collected for the shovellers and grinders over a number of work cycles. Compressive forces and moments for a number of segments and joints were computed with respect to joint flexion and extension. The results indicate that moments and compressive forces at the L5/S1 link are significant for shovellers, while moments at the elbow and wrist are significant for grinders.

  1. Improving models to predict phenological responses to global change

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, Andrew D. [Harvard College, Cambridge, MA (United States)

    2015-11-25

    The term phenology describes both the seasonal rhythms of plants and animals, and the study of these rhythms. Plant phenological processes, including, for example, when leaves emerge in the spring and change color in the autumn, are highly responsive to variation in weather (e.g. a warm vs. cold spring) as well as longer-term changes in climate (e.g. warming trends and changes in the timing and amount of rainfall). We conducted a study to investigate the phenological response of northern peatland communities to global change. Field work was conducted at the SPRUCE experiment in northern Minnesota, where we installed 10 digital cameras. Imagery from the cameras is being used to track shifts in plant phenology driven by elevated carbon dioxide and elevated temperature in the different SPRUCE experimental treatments. Camera imagery and derived products (“greenness”) are being posted in near-real time on a publicly available web page (http://phenocam.sr.unh.edu/webcam/gallery/). The images will provide a permanent visual record of the progression of the experiment over the next 10 years. Integrated with other measurements collected as part of the SPRUCE program, this study is providing insight into the degree to which phenology may mediate future shifts in carbon uptake and storage by peatland ecosystems. In the future, these data will be used to develop improved models of vegetation phenology, which will be tested against ground observations collected by a local collaborator.

  2. On-Line Core Thermal-Hydraulic Model Improvement

    Energy Technology Data Exchange (ETDEWEB)

    In, Wang Kee; Chun, Tae Hyun; Oh, Dong Seok; Shin, Chang Hwan; Hwang, Dae Hyun; Seo, Kyung Won

    2007-02-15

    The objective of this project is to implement a fast-running 4-channel based code, CETOP-D, in an advanced reactor core protection calculator system (RCOPS). The parts required for the on-line calculation of DNBR were extracted from the source of the CETOP-D code based on an analysis of the code. The CETOP-D code was revised to maintain input and output variables that are the same as in the CPC DNBR module. Since the DNBR module performs a complex calculation, it is divided into sub-modules per major calculation step. The functional design requirements for the DNBR module are documented and the values of the database (DB) constants were decided. This project also developed a Fortran module (BEST) of the RCOPS Fortran Simulator and a computer code, RCOPS-SDNBR, to independently calculate DNBR. A test was also conducted to verify the functional design and DB of the thermal-hydraulic model necessary to calculate the DNBR on-line in RCOPS. The DNBR margin is expected to increase by 2%-3% once the CETOP-D code is used to calculate the RCOPS DNBR. It should be noted that the final DNBR margin improvement can only be determined in the future based on an overall uncertainty analysis of the RCOPS.

  3. A model for improving endangered species recovery programs

    Science.gov (United States)

    Miller, Brian; Reading, Richard; Conway, Courtney; Jackson, Jerome A.; Hutchins, Michael; Snyder, Noel; Forrest, Steve; Frazier, Jack; Derrickson, Scott

    1994-09-01

    This paper discusses common organizational problems that cause inadequate planning and implementation processes of endangered species recovery across biologically dissimilar species. If these problems occur, even proven biological conservation techniques are jeopardized. We propose a solution that requires accountability in all phases of the restoration process and is based on cooperative input among government agencies, nongovernmental conservation organizations, and the academic community. The first step is formation of a task-oriented recovery team that integrates the best expertise into the planning process. This interdisciplinary team should be composed of people whose skills directly address issues critical for recovery. Once goals and procedures are established, the responsible agency (for example, in the United States, the US Fish and Wildlife Service) could divest some or all of its obligation for implementing the plan, yet still maintain oversight by holding implementing entities contractually accountable. Regular, periodic outside review and public documentation of the recovery team, lead agency, and the accomplishments of implementing bodies would permit evaluation necessary to improve performance. Increased cooperation among agency and nongovernmental organizations provided by this model promises a more efficient use of limited resources toward the conservation of biodiversity.

  4. Developing a particle tracking surrogate model to improve inversion of ground water - Surface water models

    Science.gov (United States)

    Cousquer, Yohann; Pryet, Alexandre; Atteia, Olivier; Ferré, Ty P. A.; Delbart, Célestine; Valois, Rémi; Dupuy, Alain

    2018-03-01

    The inverse problem of groundwater models is often ill-posed and model parameters are likely to be poorly constrained. Identifiability is improved if diverse data types are used for parameter estimation. However, some models, including detailed solute transport models, are further limited by prohibitive computation times. This often precludes the use of concentration data for parameter estimation, even if those data are available. In the case of surface water-groundwater (SW-GW) models, concentration data can provide SW-GW mixing ratios, which efficiently constrain the estimate of exchange flow, but are rarely used. We propose to reduce computational limits by simulating SW-GW exchange at a sink (well or drain) based on particle tracking under steady state flow conditions. Particle tracking is used to simulate advective transport. A comparison between the particle tracking surrogate model and an advective-dispersive model shows that dispersion can often be neglected when the mixing ratio is computed for a sink, allowing for use of the particle tracking surrogate model. The surrogate model was implemented to solve the inverse problem for a real SW-GW transport problem with heads and concentrations combined in a weighted hybrid objective function. The resulting inversion showed markedly reduced uncertainty in the transmissivity field compared to calibration on head data alone.
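Under the purely advective assumption of the particle tracking surrogate described above, the SW-GW mixing ratio at a sink can be estimated as the flux-weighted fraction of backtracked particles that terminate at the surface-water boundary. The sketch below is hypothetical, not the authors' implementation; the labels and function name are assumptions.

```python
# Hedged sketch: mixing ratio at a sink from backtracked particles.
# Each particle carries a flux; particles whose backtracked path ends
# at the surface-water boundary are labeled "sw", the rest "gw".

def mixing_ratio(particle_origins, particle_fluxes):
    """Fraction of the sink's inflow originating from surface water.

    particle_origins : "sw" or "gw" label per backtracked particle
    particle_fluxes  : flux carried by each particle (same order)
    """
    sw_flux = sum(f for o, f in zip(particle_origins, particle_fluxes)
                  if o == "sw")
    return sw_flux / sum(particle_fluxes)

origins = ["sw", "sw", "gw", "gw", "gw"]
fluxes = [2.0, 1.0, 1.0, 1.0, 1.0]   # made-up particle fluxes
print(mixing_ratio(origins, fluxes))  # 3.0 / 6.0 = 0.5
```

Because dispersion is neglected, each particle contributes its whole flux to exactly one origin, which is what makes this ratio cheap to recompute inside an inversion loop.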

  5. Intelligent Models Performance Improvement Based on Wavelet Algorithm and Logarithmic Transformations in Suspended Sediment Estimation

    Directory of Open Access Journals (Sweden)

    R. Hajiabadi

    2016-10-01

    data are applied to model training and one year is estimated by each model. Accuracy of the models is evaluated by three indexes: mean absolute error (MAE), root mean squared error (RMSE) and the Nash-Sutcliffe coefficient (NS). Results and Discussion: In order to estimate suspended sediment load by intelligent models, different input combinations for model training were evaluated. Then the best input combination for each intelligent model was determined and preprocessing was done only for the best combination. Two logarithmic transforms, LN and LOG, were considered for data transformation. The Daubechies wavelet family was used for the wavelet transforms. Results indicate that denoising increases the Nash-Sutcliffe criterion in ANN and GEP by 0.15 and 0.14, respectively. Furthermore, the RMSE value is reduced from 199.24 to 141.17 (mg/lit) in ANN and from 234.84 to 193.89 (mg/lit) in GEP. The impact of the logarithmic transformation approach on the improvement of the ANN results is similar to that of the denoising approach, while the logarithmic transformation approach has an adverse impact on GEP: the Nash-Sutcliffe criterion, after Ln and Log transformations as preprocessing in the GEP model, is reduced from 0.57 to 0.31 and 0.21, respectively, and the RMSE value increases from 234.84 to 298.41 (mg/lit) and 318.72 (mg/lit), respectively. Results show that data denoising by wavelet transform is effective for improving the accuracy of both intelligent models, while data transformation by logarithmic transformation causes improvement only in the artificial neural network. Results of the ANN model reveal that data transformation by the LN transfer is better than the LOG transfer; however, both transfer functions cause improvement in the ANN results. Denoising by different wavelet transforms (Daubechies family) indicates that in ANN models the wavelet function Db2 is more effective and causes more improvement, while in GEP models the wavelet function Db1 (Haar) is better. Conclusions: In the present study, two different
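The three accuracy indexes used above (MAE, RMSE and NS) have standard definitions; a minimal sketch, with illustrative observed/simulated values rather than the study's data:

```python
import math

def mae(obs, sim):
    """Mean absolute error."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def rmse(obs, sim):
    """Root mean squared error."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 means no better
    than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [100.0, 150.0, 200.0, 250.0]   # made-up sediment loads (mg/lit)
sim = [110.0, 140.0, 210.0, 240.0]
print(mae(obs, sim), rmse(obs, sim), nash_sutcliffe(obs, sim))
```

Note that RMSE penalizes large errors more than MAE, which is why the two can rank models differently on the same series.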

  6. Improving Shade Modelling in a Regional River Temperature Model Using Fine-Scale LIDAR Data

    Science.gov (United States)

    Hannah, D. M.; Loicq, P.; Moatar, F.; Beaufort, A.; Melin, E.; Jullian, Y.

    2015-12-01

    Air temperature is often considered as a proxy for stream temperature to model the distribution areas of aquatic species, because water temperature is not available at a regional scale. To simulate water temperature at a regional scale (10^5 km²), a physically based model using the equilibrium temperature concept and including upstream-downstream propagation of the thermal signal was developed and applied to the entire Loire basin (Beaufort et al., submitted). This model, called T-NET (Temperature-NETwork), is based on a hydrographical network topology. Computations are made hourly on 52,000 reaches, averaging 1.7 km in length, in the Loire drainage basin. The model gives a median root mean square error of 1.8°C at an hourly time step on the basis of 128 water temperature stations (2008-2012). In that version of the model, tree shading is modelled by a constant factor proportional to the vegetation cover within 10 meters on each side of the river reaches. According to sensitivity analysis, improving the shade representation would enhance T-NET accuracy, especially for the maximum daily temperatures, which are currently not modelled very well. This study evaluates the most efficient way (accuracy/computing time) to improve the shade model using 1-m resolution LIDAR data available on a tributary of the Loire River (317 km long, with an area of 8280 km²). Two methods are tested and compared: the first is a spatially explicit computation of the cast shadow for every LIDAR pixel; the second is based on averaged vegetation cover characteristics of buffers and reaches of variable size. Validation of the water temperature model is made against 4 temperature sensors well spread along the stream, as well as two airborne thermal infrared imageries acquired in summer 2014 and winter 2015 over an 80 km reach. The poster will present the optimal length- and crosswise scale to characterize the vegetation from LIDAR data.

  7. Two-Dimensional Magnetotelluric Modelling of Ore Deposits: Improvements in Model Constraints by Inclusion of Borehole Measurements

    Science.gov (United States)

    Kalscheuer, Thomas; Juhojuntti, Niklas; Vaittinen, Katri

    2017-12-01

    functions is used as the initial model for the inversion of the surface impedances, skin-effect transfer functions and vertical magnetic and electric transfer functions. For both synthetic examples, the inversion models resulting from surface and borehole measurements have higher similarity to the true models than models computed exclusively from surface measurements. However, the most prominent improvements were obtained for the first example, in which a deep small-sized ore body is more easily distinguished from a shallow main ore body penetrated by a borehole, and the extent of the shadow zone (a conductive artefact) underneath the main conductor is strongly reduced. Formal model error and resolution analysis demonstrated that predominantly the skin-effect transfer functions improve model resolution at depths below the sensors and at distances of ~300-1000 m laterally off a borehole, whereas the vertical electric and magnetic transfer functions improve resolution along the borehole and in its immediate vicinity. Furthermore, we studied the signal levels at depth and provided specifications of borehole magnetic and electric field sensors to be developed in a future project. Our results suggest that three-component SQUID and fluxgate magnetometers should be developed to facilitate borehole MT measurements at signal frequencies above and below 1 Hz, respectively.

  8. A New Performance Improvement Model: Adding Benchmarking to the Analysis of Performance Indicator Data.

    Science.gov (United States)

    Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu

    2016-01-01

    A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limit, that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process: if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need for a more predictable process prompted controlling variation through an action plan. The action plan was successful, as noted by the shift in the 2014 data compared to the historical average; in addition, the variation was reduced. The model is subject to limitations: for example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
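The special-cause check in Steps 2-3 is typically done with a control chart. A minimal sketch using an individuals (I) chart, in which sigma is estimated from the average moving range (the bias constant d2 = 1.128 applies to moving ranges of size 2); the data below are made up, not the hospital's:

```python
def i_chart_limits(data):
    """Individuals-chart control limits.

    sigma is estimated as MR-bar / d2 with d2 = 1.128 (moving ranges
    of 2 consecutive points), the standard I-MR chart estimator.
    Returns (LCL, center line, UCL).
    """
    center = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return center - 3 * sigma, center, center + 3 * sigma

def special_cause(data):
    """Indices of points outside the 3-sigma limits (special-cause)."""
    lcl, _, ucl = i_chart_limits(data)
    return [i for i, x in enumerate(data) if x < lcl or x > ucl]

rates = [5, 6, 5, 7, 6, 5, 6, 20]  # e.g. monthly HAI counts (illustrative)
print(special_cause(rates))        # flags the last point
```

Only when `special_cause` returns an empty list, i.e. the process shows random variation alone, would the data move on to the benchmarking steps.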

  9. Randomization to a low-carbohydrate diet advice improves health related quality of life compared with a low-fat diet at similar weight-loss in Type 2 diabetes mellitus.

    Science.gov (United States)

    Guldbrand, H; Lindström, T; Dizdar, B; Bunjaku, B; Östgren, C J; Nystrom, F H; Bachrach-Lindström, M

    2014-11-01

    To compare the effects on health-related quality of life (HRQoL) of a 2-year intervention with a low-fat diet (LFD) or a low-carbohydrate diet (LCD) based on four group meetings to achieve compliance, and to describe different aspects of taking part in the intervention following the LFD or LCD. Prospective, randomized trial of 61 adults with Type 2 diabetes mellitus. The SF-36 questionnaire was used at baseline, 6, 12 and 24 months. Patients on the LFD aimed for 55-60 energy percent (E%) and those on the LCD for 20 E% from carbohydrates. The patients were interviewed about their experiences of the intervention. Mean body mass index was 32.7 ± 5.4 kg/m² at baseline. Weight loss did not differ between groups and was maximal at 6 months, LFD: -3.99 ± 4.1 kg, LCD: -4.31 ± 3.6 kg (p < 0.001 within groups). There was an increase in the physical component score of SF-36 from 44.1 (10.0) to 46.7 (10.5) at 12 months in the LCD group (p < 0.009), while no change occurred in the LFD group (p < 0.03 between groups). At 12 months the physical function, bodily pain and general health scores improved within the LCD group (p values 0.042-0.009), while there was no change within the LFD group. Weight changes did not differ between the diet groups, while improvements in HRQoL only occurred after one year during treatment with the LCD. No changes in HRQoL occurred in the LFD group in spite of a similar reduction in body weight. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  10. Why Is Improvement of Earth System Models So Elusive? Challenges and Strategies From Dust Aerosol Modeling

    Science.gov (United States)

    Miller, R. L.; Pérez García-Pando, C.; Perlwitz, J. P.; Ginoux, P. A.

    2015-12-01

    Past decades have seen an accelerating increase in computing efficiency, while climate models are representing a rapidly widening set of physical processes. Yet simulations of some fundamental aspects of climate like precipitation or aerosol forcing remain highly uncertain and resistant to progress. Dust aerosol modeling of soil particles lofted by wind erosion has seen a similar conflict between increasing model sophistication and remaining uncertainty. Dust aerosols perturb the energy and water cycles by scattering radiation and acting as ice nuclei, while mediating atmospheric chemistry and marine photosynthesis (and thus the carbon cycle). These effects take place across scales from the dimensions of an ice crystal to the planetary-scale circulation that disperses dust far downwind of its parent soil. Representing this range leads to several modeling challenges. Should we limit complexity in our model, which consumes computer resources and inhibits interpretation? How do we decide if a process involving dust is worthy of inclusion within our model? Can we identify a minimal representation of a complex process that is efficient yet retains the physics relevant to climate? Answering these questions about the appropriate degree of representation is guided by model evaluation, which presents several more challenges. How do we proceed if the available observations do not directly constrain our process of interest? (This could result from competing processes that influence the observed variable and obscure the signature of our process of interest.) Examples will be presented from dust modeling, with lessons that might be more broadly applicable. The end result will either be clinical depression or the reassuring promise of continued gainful employment as the community confronts these challenges.

  11. The Role of Perceptual Similarity, Context, and Situation When Selecting Attributes: Considerations Made by 5-6-Year-Olds in Data Modeling Environments

    Science.gov (United States)

    Leavy, Aisling; Hourigan, Mairead

    2018-01-01

    Classroom data modeling involves posing questions, identifying attributes of phenomena, measuring and structuring these attributes, and then composing, revising, and communicating the outcomes. Selecting attributes is a fundamental component of data modeling, and the considerations made when selecting attributes is the focus of this paper. A…

  12. Improving Predictive Modeling in Pediatric Drug Development: Pharmacokinetics, Pharmacodynamics, and Mechanistic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Slikker, William; Young, John F.; Corley, Rick A.; Dorman, David C.; Conolly, Rory B.; Knudsen, Thomas; Erstad, Brian L.; Luecke, Richard H.; Faustman, Elaine M.; Timchalk, Chuck; Mattison, Donald R.

    2005-07-26

    A workshop was conducted on November 18-19, 2004, to address the issue of improving predictive models for drug delivery to developing humans. Although considerable progress has been made for adult humans, large gaps remain for predicting pharmacokinetic/pharmacodynamic (PK/PD) outcome in children because most adult models have not been tested during development. The goals of the meeting included a description of when, during development, infants/children become adultlike in handling drugs. The issue of incorporating the most recent advances into the predictive models was also addressed: both the use of imaging approaches and genomic information were considered. Disease state, as exemplified by obesity, was addressed as a modifier of drug pharmacokinetics and pharmacodynamics during development. Issues addressed in this workshop should be considered in the development of new predictive and mechanistic models of drug kinetics and dynamics in the developing human.

  13. Improved Regional Climate Model Simulation of Precipitation by a Dynamical Coupling to a Hydrology Model

    DEFF Research Database (Denmark)

    Larsen, Morten Andreas Dahl; Drews, Martin; Hesselbjerg Christensen, Jens

    convective precipitation systems. As a result, climate model simulations, let alone future projections of precipitation, often exhibit substantial biases. Here we show that the dynamical coupling of a regional climate model to a detailed fully distributed hydrological model - including groundwater-, overland...... of local precipitation dynamics are seen for time scales of app. seasonal duration and longer. We show that these results can be attributed to a more complete treatment of land surface feedbacks. The local scale effect on the atmosphere suggests that coupled high-resolution climate-hydrology models...... including a detailed 3D redistribution of sub- and land surface water have a significant potential for improving climate projections even diminishing the need for bias correction in climate-hydrology studies....

  14. Improved Formulations for Air-Surface Exchanges Related to National Security Needs: Dry Deposition Models

    Energy Technology Data Exchange (ETDEWEB)

    Droppo, James G.

    2006-07-01

    The Department of Homeland Security and others rely on results from atmospheric dispersion models for threat evaluation, event management, and post-event analyses. The ability to simulate dry deposition rates is a crucial part of our emergency preparedness capabilities. Deposited materials pose potential hazards from radioactive shine, inhalation, and ingestion pathways. A reliable characterization of these potential exposures is critical for management and mitigation of these hazards. A review of the current status of dry deposition formulations used in these atmospheric dispersion models was conducted. The formulation for dry deposition of particulate materials from an event such as a radiological attack involving a Radiological Dispersal Device (RDD) is considered. The results of this effort are applicable to current emergency preparedness capabilities such as those deployed in the Interagency Modeling and Atmospheric Assessment Center (IMAAC), other similar national/regional emergency response systems, and standalone emergency response models. The review concludes that dry deposition formulations need to consider the full range of particle sizes, including: 1) the accumulation mode range (0.1 to 1 micron diameter) and its minimum in deposition velocity, 2) smaller particles (less than 0.01 micron diameter) deposited mainly by molecular diffusion, 3) 10 to 50 micron diameter particles deposited mainly by impaction and gravitational settling, and 4) larger particles (greater than 100 micron diameter) deposited mainly by gravitational settling. The effects of the local turbulence intensity, particle characteristics, and surface element properties must also be addressed in the formulations.
Specific areas for improvement in the dry deposition formulations are 1) capability of simulating near-field dry deposition patterns, 2) capability of addressing the full range of potential particle properties, 3) incorporation of particle surface retention/rebound processes, and
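For the gravitational-settling regimes listed above, a first-order estimate of the terminal settling velocity is given by Stokes' law, v_s = rho_p * g * d^2 / (18 * mu). The density and viscosity values below are assumptions (mineral particles in standard air), and this simple form ignores the slip and drag corrections that matter at the size extremes:

```python
# Hedged sketch: Stokes terminal settling velocity for spherical particles.
# Assumed constants, not values from the review:
RHO_P = 2650.0   # particle density, kg/m^3 (typical mineral particle)
G = 9.81         # gravitational acceleration, m/s^2
MU = 1.8e-5      # dynamic viscosity of air, Pa*s

def stokes_settling_velocity(d_m):
    """Terminal settling velocity (m/s) of a sphere of diameter d_m (m).

    Valid roughly for 1-50 micron particles; below ~1 micron the
    Cunningham slip correction matters, above ~50 microns the flow
    leaves the Stokes regime.
    """
    return RHO_P * G * d_m ** 2 / (18.0 * MU)

for d_um in (1, 10, 50):
    v = stokes_settling_velocity(d_um * 1e-6)
    print(f"{d_um:>3} um: {v:.2e} m/s")
```

The d^2 dependence is why settling dominates for the 10-50 micron class while diffusion and impaction control the smaller sizes.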

  15. Improved Nuclear Reactor and Shield Mass Model for Space Applications

    Science.gov (United States)

    Robb, Kevin

    2004-01-01

    New technologies are being developed to explore the distant reaches of the solar system. Beyond Mars, solar energy is inadequate to power advanced scientific instruments. One technology that can meet the energy requirements is the space nuclear reactor. The nuclear reactor is used as a heat source for which a heat-to-electricity conversion system is needed. Examples of such conversion systems are the Brayton, Rankine, and Stirling cycles. Since launch cost is proportional to the amount of mass to lift, mass is always a concern in designing spacecraft. Estimation of system masses is an important part of determining the feasibility of a design. I worked under Michael Barrett in the Thermal Energy Conversion Branch of the Power & Electric Propulsion Division. An in-house Closed Cycle Engine Program (CCEP) is used for the design and performance analysis of closed-Brayton-cycle energy conversion systems for space applications. This program also calculates the system mass, including the heat source. CCEP uses the subroutine RSMASS, which has been updated to RSMASS-D, to estimate the mass of the reactor. RSMASS was developed in 1986 at Sandia National Laboratories to quickly estimate the mass of multi-megawatt nuclear reactors for space applications. In response to an emphasis on lower power reactors, RSMASS-D was developed in 1997 and is based on the SP-100 liquid metal cooled reactor. The subroutine calculates the mass of reactor components such as the safety systems, instrumentation and control, radiation shield, structure, reflector, and core. The major improvements in RSMASS-D are that it uses higher-fidelity calculations, is easier to use, and automatically optimizes the system mass. RSMASS-D is accurate within 15% of actual data while RSMASS is only accurate within 50%. My goal this summer was to learn the FORTRAN 77 programming language and update the CCEP program with the RSMASS-D model.

  16. Understanding and Improving Ocean Mixing Parameterizations for modeling Climate Change

    Science.gov (United States)

    Howard, A. M.; Fells, J.; Clarke, J.; Cheng, Y.; Canuto, V.; Dubovikov, M. S.

    2017-12-01

    Climate is vital. Earth is only habitable due to the atmosphere and oceans' distribution of energy. Our greenhouse gas emissions shift the overall balance between absorbed and emitted radiation, causing global warming. How much of these emissions is stored in the ocean vs. entering the atmosphere to cause warming, and how the extra heat is distributed, depends on atmosphere and ocean dynamics, which we must understand to know the risks of both progressive climate change and climate variability, which affect us all in many ways including extreme weather, floods, droughts, sea-level rise and ecosystem disruption. Citizens must be informed to make decisions such as "business as usual" vs. mitigating emissions to avert catastrophe. Simulations of climate change provide needed knowledge but in turn need reliable parameterizations of key physical processes, including ocean mixing, which greatly impacts transport and storage of heat and dissolved CO2. The turbulence group at NASA-GISS seeks to use physical theory to improve parameterizations of ocean mixing, including small-scale convective, shear-driven, double-diffusive, internal-wave and tidally driven vertical mixing, as well as mixing by submesoscale eddies and lateral mixing along isopycnals by mesoscale eddies. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. We write our own programs in MATLAB and FORTRAN to visualize and process the output of ocean simulations, including producing statistics to help judge the impacts of different parameterizations on fidelity in reproducing realistic temperatures and salinities, diffusivities and turbulent power. The results can help upgrade the parameterizations. Students are introduced to complex system modeling and gain a deeper appreciation of climate science and programming skills, while furthering climate science. We are incorporating climate projects into the Medgar Evers College curriculum. The PI is both a member of the turbulence group at

  17. Crop model improvement reduces the uncertainty of the response to temperature of multi-model ensembles

    DEFF Research Database (Denmark)

    Maiorano, Andrea; Martre, Pierre; Asseng, Senthold

    2017-01-01

    of models needed in an MME. Herein, 15 wheat growth models of a larger MME were improved through re-parameterization and/or incorporating or modifying heat stress effects on phenology, leaf growth and senescence, biomass growth, and grain number and size, using detailed field experimental data from the USDA...... ensemble percentile range) of grain yields simulated by the MME on average by 39% in the calibration data set and by 26% in the independent evaluation data set for crops grown in mean seasonal temperatures >24 °C. MME mean squared error in simulating grain yield decreased by 37%. A reduction in MME
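The ensemble percentile range used above as an uncertainty measure is simply the spread between two percentiles of the member simulations. The exact percentiles used by the study are not stated in this fragment, so the 10th-90th range and the yield values below are illustrative assumptions:

```python
# Hedged sketch: uncertainty of a multi-model ensemble (MME) measured
# as an inter-percentile range of the members' simulated yields.

def percentile(values, p):
    """Percentile with linear interpolation, p in [0, 100]."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

def ensemble_range(yields, lo_p=10, hi_p=90):
    """Spread between two percentiles of the ensemble members."""
    return percentile(yields, hi_p) - percentile(yields, lo_p)

# Eight hypothetical model yields (t/ha) for one site-season:
yields = [4.1, 5.0, 5.2, 5.5, 5.8, 6.0, 6.3, 7.9]
print(ensemble_range(yields))
```

Recomputing this range before and after recalibrating the member models is how a reduction like the reported 39% would be quantified.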

  18. A quality improvement management model for renal care.

    Science.gov (United States)

    Vlchek, D L; Day, L M

    1991-04-01

    The purpose of this article is to explore the potential for applying the theory and tools of quality improvement (total quality management) in the renal care setting. We believe that the coupling of the statistical techniques used in the Deming method of quality improvement, with modern approaches to outcome and process analysis, will provide the renal care community with powerful tools, not only for improved quality (i.e., reduced morbidity and mortality), but also for technology evaluation and resource allocation.