WorldWideScience

Sample records for model similar improvements

  1. An Improved Modeling for Network Traffic Based on Alpha-Stable Self-similar Processes

    Institute of Scientific and Technical Information of China (English)

    GE Xiaohu; ZHU Guangxi; ZHU Yaoting

    2003-01-01

This paper presents an improved network traffic model based on alpha-stable processes. First, the basics of self-similarity are introduced, and the reasons why alpha-stable processes are used for self-similar network traffic modeling are given. Second, research in this field is reviewed, and the drawbacks of the S4 model are analyzed, supported by mathematical proof and experimental confirmation. To remedy the drawbacks of the S4 model and accurately describe the variety of heavy-tailed distributions, an improved network traffic model is proposed. Comparison of simulated data (from both the S4 model and the improved model) with actual data demonstrates the advantage of the improved model. Finally, the significance of self-similar network traffic modeling is discussed, along with future work.

  2. Improved similarity criterion for seepage erosion using mesoscopic coupled PFC-CFD model

    Institute of Scientific and Technical Information of China (English)

    倪小东; 王媛; 陈珂; 赵帅龙

    2015-01-01

Conventional model tests and centrifuge tests are frequently used to investigate seepage erosion. However, the centrifugal test method may not be efficient, according to the results of hydraulic conductivity tests and piping erosion tests. The reason why seepage deformation in model tests may deviate from similarity was first discussed in this work. Then, the similarity criterion for seepage deformation in porous media was improved based on the extended Darcy-Brinkman-Forchheimer equation. Finally, the coupled particle flow code–computational fluid dynamics (PFC−CFD) model at the mesoscopic level was proposed to verify the derived similarity criterion. The proposed model maximizes its potential to simulate seepage erosion via the discrete element method and satisfies the similarity criterion by adjusting particle size. The numerical simulations achieved results identical to those of the prototype, indicating that the PFC−CFD model satisfying the improved similarity criterion can accurately reproduce the processes of seepage erosion at the mesoscopic level.

  3. Improving Detection of Arrhythmia Drug-Drug Interactions in Pharmacovigilance Data through the Implementation of Similarity-Based Modeling.

    Directory of Open Access Journals (Sweden)

    Santiago Vilar

Identification of drug-drug interactions (DDIs) is a significant challenge during drug development and clinical practice. DDIs are responsible for many adverse drug events (ADEs), decreasing patient quality of life and increasing the cost of care. Because DDIs are not systematically evaluated in pre-clinical or clinical trials, the U.S. Food and Drug Administration (FDA) relies on post-marketing surveillance to monitor patient safety. However, existing pharmacovigilance algorithms show poor performance for detecting DDIs, exhibiting prohibitively high false-positive rates. Alternatively, methods based on chemical structure and pharmacological similarity have shown promise in adverse drug event detection. We hypothesize that the use of chemical biology data in a post hoc analysis of pharmacovigilance results will significantly improve the detection of dangerous interactions. Our model integrates a reference standard of DDIs known to cause arrhythmias with drug similarity data. To compare similarity between drugs we used chemical structure (both 2D and 3D molecular structure), adverse drug side effects, chemogenomic targets, drug indication classes, and known drug-drug interactions. We evaluated the method on external reference standards. Our results showed an enhancement of sensitivity, specificity, and precision at different top positions when similarity measures were used to rank the candidates extracted from pharmacovigilance data. For the top 100 DDI candidates, similarity-based modeling yielded close to a twofold precision enhancement compared to the proportional reporting ratio (PRR). Moreover, the method aids DDI decision making by identifying the reference-standard DDI that generated each candidate.
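The proportional reporting ratio (PRR) used as the baseline in this abstract is a standard disproportionality measure computed from a 2x2 contingency table of spontaneous reports. A minimal sketch of that baseline; the counts below are made up for illustration and are not from the study:

```python
# PRR over a 2x2 contingency table of spontaneous adverse-event reports:
#   a = reports with the drug pair AND the event (e.g. arrhythmia)
#   b = reports with the drug pair, without the event
#   c = reports with the event, without the drug pair
#   d = reports with neither
def prr(a: int, b: int, c: int, d: int) -> float:
    """PRR = [a / (a + b)] / [c / (c + d)]."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: the event is ~15x over-reported with the drug pair.
print(prr(30, 970, 200, 98800))  # -> 14.85
```

Similarity-based modeling, as described above, re-ranks the candidate pairs that such a disproportionality screen produces rather than replacing the screen itself.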

  4. Modeling of similar economies

    Directory of Open Access Journals (Sweden)

    Sergey B. Kuznetsov

    2017-06-01

Objective: to obtain dimensionless criteria – economic indices characterizing the national economy and not depending on its size. Methods: mathematical modeling, theory of dimensions, processing of statistical data. Results: based on differential equations describing the national economy with account for the resistance of the economic environment, two dimensionless criteria are obtained which allow comparing economies regardless of their sizes. Using the theory of dimensions, we show that the obtained indices are not accidental. We demonstrate the application of the obtained dimensionless criteria to the analysis of the behavior of certain countries' economies. Scientific novelty: the obtained dimensionless criteria – economic indices – allow comparing economies regardless of their sizes and analyzing dynamic changes in the economies over time. Practical significance: the obtained results can be used for dynamic and comparative analysis of different countries' economies regardless of their sizes.

  5. A COMPARISON OF SEMANTIC SIMILARITY MODELS IN EVALUATING CONCEPT SIMILARITY

    Directory of Open Access Journals (Sweden)

    Q. X. Xu

    2012-08-01

Semantic similarities are important in concept definition, recognition, categorization, interpretation, and integration. Many semantic similarity models have been established to evaluate the semantic similarities of objects and/or concepts. To examine the suitability and performance of different models in evaluating concept similarities, this paper compares four main types of models: the geometric model, the feature model, the network model, and the transformational model. Fundamental principles and main characteristics of these models are first introduced and compared. Land use and land cover concepts from NLCD92 are employed as examples in the case study. The results demonstrate that correlations between these models are very high, possibly because all of these models are designed to simulate the similarity judgment of the human mind.

  6. A Comparison of Semantic Similarity Models in Evaluating Concept Similarity

    Science.gov (United States)

    Xu, Q. X.; Shi, W. Z.

    2012-08-01

Semantic similarities are important in concept definition, recognition, categorization, interpretation, and integration. Many semantic similarity models have been established to evaluate the semantic similarities of objects and/or concepts. To examine the suitability and performance of different models in evaluating concept similarities, this paper compares four main types of models: the geometric model, the feature model, the network model, and the transformational model. Fundamental principles and main characteristics of these models are first introduced and compared. Land use and land cover concepts from NLCD92 are employed as examples in the case study. The results demonstrate that correlations between these models are very high, possibly because all of these models are designed to simulate the similarity judgment of the human mind.

  7. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  8. Continuous Improvement and Collaborative Improvement: Similarities and Differences

    DEFF Research Database (Denmark)

    Middel, Rick; Boer, Harry; Fisscher, Olaf

    2006-01-01

A substantial body of theoretical and practical knowledge has been developed on continuous improvement. However, there is still a considerable lack of empirically grounded contributions and theories on collaborative improvement, that is, continuous improvement in an inter-organizational setting. The CO-IMPROVE project investigated whether and how the concept of continuous improvement can be extended and transferred to such settings. The objective of this article is to evaluate the CO-IMPROVE research findings in view of existing theories on continuous innovation. The article investigates...

  9. Continuous Improvement and Collaborative Improvement: Similarities and Differences

    NARCIS (Netherlands)

    Middel, Rick; Boer, Harry; Fisscher, Olaf

    2006-01-01

    A substantial body of theoretical and practical knowledge has been developed on continuous improvement. However, there is still a considerable lack of empirically grounded contributions and theories on collaborative improvement, that is, continuous improvement in an inter-organizational setting. The

  10. and Models: A Self-Similar Approach

    Directory of Open Access Journals (Sweden)

    José Antonio Belinchón

    2013-01-01

equations (FEs) admit self-similar solutions. The methods employed allow us to obtain general results that are valid not only for the FRW metric, but also for all the Bianchi types as well as for the Kantowski-Sachs model (under the self-similarity hypothesis and the power-law hypothesis for the scale factors).

  11. Similarity and Modeling in Science and Engineering

    CERN Document Server

    Kuneš, Josef

    2012-01-01

The present text sets itself in relief to other titles on the subject in that it addresses the means and methodologies versus a narrow specific-task oriented approach. Concepts and their developments which evolved to meet the changing needs of applications are addressed. This approach provides the reader with a general tool-box to apply to their specific needs. Two important tools are presented: dimensional analysis and the similarity analysis methods. The fundamental point of view, enabling one to sort all models, is that of information flux between a model and an original expressed by the similarity and abstraction. Each chapter includes original examples and applications. In this respect, the models can be divided into several groups. The following models are dealt with separately by chapter: mathematical and physical models, physical analogues, deterministic, stochastic, and cybernetic computer models. The mathematical models are divided into asymptotic and phenomenological models. The phenomenological m...

  12. Creating Birds of Similar Feathers: Leveraging Similarity to Improve Teacher-Student Relationships and Academic Achievement

    Science.gov (United States)

    Gehlbach, Hunter; Brinkworth, Maureen E.; King, Aaron M.; Hsu, Laura M.; McIntyre, Joseph; Rogers, Todd

    2016-01-01

    When people perceive themselves as similar to others, greater liking and closer relationships typically result. In the first randomized field experiment that leverages actual similarities to improve real-world relationships, we examined the affiliations between 315 9th grade students and their 25 teachers. Students in the treatment condition…

  13. Improved personalized recommendation based on a similarity network

    Science.gov (United States)

    Wang, Ximeng; Liu, Yun; Xiong, Fei

    2016-08-01

A recommender system helps individual users find preferred items rapidly and has attracted extensive attention in recent years. Many successful recommendation algorithms are designed on bipartite networks, such as network-based inference and heat conduction. However, most of these algorithms use a uniform resource-allocation method. That is not reasonable, because uniform allocation reflects neither users' choice preferences nor the influence between users, which leads to non-personalized recommendation results. We propose a personalized recommendation approach that combines a similarity function with the bipartite network to generate a similarity network that improves the resource-allocation process. Our model introduces user influence into the recommender system, on the premise that user influence can make the resource-allocation process more reasonable. We use four different metrics to evaluate our algorithms on three benchmark data sets. Experimental results show that the improved recommendation on a similarity network obtains better accuracy and diversity than several competing approaches.
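The uniform resource-allocation step that this work refines is the standard network-based inference (ProbS) two-step spreading on the user-item bipartite graph. A self-contained sketch of that standard baseline, assuming the usual formulation (the paper's similarity-network weighting is not reproduced here; names are illustrative):

```python
import numpy as np

def nbi_scores(A: np.ndarray, user: int) -> np.ndarray:
    """Network-based inference: A[i, j] = 1 if user i collected item j.
    Returns recommendation scores over items for `user`."""
    k_item = A.sum(axis=0)              # item degrees
    k_user = A.sum(axis=1)              # user degrees
    f = A[user].astype(float)           # unit resource on each collected item
    # Step 1: items -> users; each item splits its resource by its degree.
    user_res = A @ (f / np.where(k_item > 0, k_item, 1))
    # Step 2: users -> items; each user splits by their degree (uniformly --
    # this is the "average allocation" the paper argues against).
    scores = A.T @ (user_res / np.where(k_user > 0, k_user, 1))
    scores[A[user] > 0] = 0             # do not re-recommend collected items
    return scores

# 3 users x 3 items toy network; recommend for user 2, who collected item 1.
A = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 0]])
print(nbi_scores(A, 2))  # item 0 gets the highest score
```

The similarity-network variant replaces the uniform splits above with weights derived from user-user similarity, so influential or similar users receive and redistribute more resource.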

  14. Integrated Semantic Similarity Model Based on Ontology

    Institute of Scientific and Technical Information of China (English)

    LIU Ya-Jun; ZHAO Yun

    2004-01-01

To solve the problem of inadequate semantic processing in intelligent question answering systems, an integrated semantic similarity model that calculates semantic similarity using geometric distance and information content is presented in this paper. With the help of the interrelationships between concepts, the information content of concepts, and the strength of the edges in the ontology network, we can calculate the semantic similarity between two concepts and provide information for the further calculation of the semantic similarity between a user's question and the answers in the knowledge base. The results of experiments on the prototype have shown that the semantic problem in natural language processing can also be solved with the help of the knowledge and the abundant semantic information in the ontology. More than 90% accuracy with less than 50 ms average search time has been reached in the intelligent question answering prototype system based on ontology. This result is very satisfactory.

  15. Design of ontology mapping framework and improvement of similarity computation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

Ontology heterogeneity is the primary obstacle to the interoperation of ontologies, and ontology mapping is the best way to solve this problem. The key to ontology mapping is similarity computation. At present, methods of similarity computation are imperfect, and their computational cost is high. To solve these problems, an ontology-mapping framework with a hybrid architecture is put forward, together with an improvement in the method of similarity computation. Different areas have different local ontologies. Two ontologies, concerning classes and teachers in a university, are taken as examples to explain the specific mapping framework and the improved method of similarity computation. The experimental results show that using this framework and the improved method can increase the accuracy of computation to a certain extent while decreasing the amount of computation.

  16. Face and body recognition show similar improvement during childhood.

    Science.gov (United States)

    Bank, Samantha; Rhodes, Gillian; Read, Ainsley; Jeffery, Linda

    2015-09-01

    Adults are proficient in extracting identity cues from faces. This proficiency develops slowly during childhood, with performance not reaching adult levels until adolescence. Bodies are similar to faces in that they convey identity cues and rely on specialized perceptual mechanisms. However, it is currently unclear whether body recognition mirrors the slow development of face recognition during childhood. Recent evidence suggests that body recognition develops faster than face recognition. Here we measured body and face recognition in 6- and 10-year-old children and adults to determine whether these two skills show different amounts of improvement during childhood. We found no evidence that they do. Face and body recognition showed similar improvement with age, and children, like adults, were better at recognizing faces than bodies. These results suggest that the mechanisms of face and body memory mature at a similar rate or that improvement of more general cognitive and perceptual skills underlies improvement of both face and body recognition.

  17. Similarity

    Science.gov (United States)

    Apostol, Tom M. (Editor)

    1990-01-01

In this 'Project Mathematics!' series, sponsored by the California Institute of Technology (Caltech), the mathematical concept of similarity is presented. The history of similarity and its real-life applications are discussed using actual film footage and computer animation. Terms used and various concepts of size, shape, ratio, area, and volume are demonstrated. The similarity of polygons, solids, congruent triangles, internal ratios, perimeters, and line segments is shown using the previously mentioned concepts.

  18. Improved similarity trees and their application to visual data classification.

    Science.gov (United States)

    Paiva, Jose Gustavo S; Florian-Cruz, Laura; Pedrini, Helio; Telles, Guilherme P; Minghim, Rosane

    2011-12-01

    An alternative form to multidimensional projections for the visual analysis of data represented in multidimensional spaces is the deployment of similarity trees, such as Neighbor Joining trees. They organize data objects on the visual plane emphasizing their levels of similarity with high capability of detecting and separating groups and subgroups of objects. Besides this similarity-based hierarchical data organization, some of their advantages include the ability to decrease point clutter; high precision; and a consistent view of the data set during focusing, offering a very intuitive way to view the general structure of the data set as well as to drill down to groups and subgroups of interest. Disadvantages of similarity trees based on neighbor joining strategies include their computational cost and the presence of virtual nodes that utilize too much of the visual space. This paper presents a highly improved version of the similarity tree technique. The improvements in the technique are given by two procedures. The first is a strategy that replaces virtual nodes by promoting real leaf nodes to their place, saving large portions of space in the display and maintaining the expressiveness and precision of the technique. The second improvement is an implementation that significantly accelerates the algorithm, impacting its use for larger data sets. We also illustrate the applicability of the technique in visual data mining, showing its advantages to support visual classification of data sets, with special attention to the case of image classification. We demonstrate the capabilities of the tree for analysis and iterative manipulation and employ those capabilities to support evolving to a satisfactory data organization and classification.

  19. Learning faces: similar comparator faces do not improve performance.

    Directory of Open Access Journals (Sweden)

    Scott P Jones

Recent evidence indicates that comparison of two similar faces can aid subsequent discrimination between them. However, the fact that discrimination between two faces is facilitated by comparing them directly does not demonstrate that comparison produces a general improvement in the processing of faces. It remains an open question whether the opportunity to compare a "target" face to similar faces can facilitate the discrimination of the exposed target face from other nonexposed faces. In Experiment 1, selection of a target face from an array of novel foils was not facilitated by intermixed exposure to the target and comparators of the same sex. Experiment 2 also found no advantage for similar comparators (morphed towards the target over unmorphed same sex comparators, or over repeated target exposure alone. But all repeated exposure conditions produced better performance than a single brief presentation of the target. Experiment 3 again demonstrated that repeated exposure produced equivalent learning in same sex and different sex comparator conditions, and also showed that increasing the number of same sex or different sex comparators failed to improve identification. In all three experiments, exposure to a target alongside similar comparators failed to support selection of the target from novel test stimuli to a greater degree than exposure alongside dissimilar comparators or repeated target exposure alone. The current results suggest that the facilitatory effects of comparison during exposure may be limited to improving discrimination between exposed stimuli, and thus our results do not support the idea that providing the opportunity for comparison is a practical means for improving face identification.

  20. Improvement of Similarity Measure: Pearson Product-Moment Correlation Coefficient

    Institute of Scientific and Technical Information of China (English)

    LIU Yong-suo; MENG Qing-hua; CHEN Rong; WANG Jian-song; JIANG Shu-min; HU Yu-zhu

    2004-01-01

Aim: To study the reason for the insensitivity of the Pearson product-moment correlation coefficient as a similarity measure, and a method to improve its sensitivity. Methods: Experimental and simulated data sets were used. Results: The distribution range of the data sets influences the sensitivity of the Pearson product-moment correlation coefficient. The weighted Pearson product-moment correlation coefficient is more sensitive when the range of the data set is large. Conclusion: The weighted Pearson product-moment correlation coefficient is necessary when the range of the data set is large.
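The abstract does not give the paper's specific weighting scheme, so the sketch below shows only the generic weighted form of the Pearson product-moment coefficient; the choice of weights w is an assumption for illustration:

```python
import math

def weighted_pearson(x, y, w):
    """Weighted Pearson correlation: each point i contributes with weight w[i].
    With uniform weights this reduces to the ordinary Pearson coefficient."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw   # weighted means
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y))
    return cov / math.sqrt(vx * vy)

# Perfectly linear data gives r = 1.0 regardless of the weights chosen.
r = weighted_pearson([1, 2, 3, 4], [2, 4, 6, 8], [1, 1, 1, 1])
```

Down-weighting the few dominant large-magnitude points is one plausible way such a weighting makes the measure more sensitive to differences in the small-magnitude region when the data range is large.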

  1. Improvement of water distribution networks analysis by topological similarity

    OpenAIRE

    Mahavir Singh; Suraj Krishan Kheer; I.K. Pandita

    2016-01-01

In this research paper, a methodology based on topological similarity is used to obtain the starting point of iteration for solving reservoir and pipe-network problems. As of now, the initial starting point for iteration is based on pure guesswork, which may be supported by experience. The topological similarity concept comes from the Principle of Quasi Work (PQW). In PQW, the solution of any one problem of a class is used to solve other complex problems of the same class. This paves the way for arriving at a u...

  2. A synthesis of similarity and eddy-viscosity models

    NARCIS (Netherlands)

    Verstappen, R.; Friedrich, R; Geurts, BJ; Metais, O

    2004-01-01

    In large-eddy simulation, a low-pass spatial filter is usually applied to the Navier-Stokes equations. The resulting commutator of the filter and the nonlinear term is usually modelled by an eddy-viscosity model, by a similarity model or by a mix thereof. Similarity models possess the proper mathema

  3. Agile rediscovering values: Similarities to continuous improvement strategies

    Science.gov (United States)

    Díaz de Mera, P.; Arenas, J. M.; González, C.

    2012-04-01

    Research in the late 80's on technological companies that develop products of high value innovation, with sufficient speed and flexibility to adapt quickly to changing market conditions, gave rise to the new set of methodologies known as Agile Management Approach. In the current changing economic scenario, we considered very interesting to study the similarities of these Agile Methodologies with other practices whose effectiveness has been amply demonstrated in both the West and Japan. Strategies such as Kaizen, Lean, World Class Manufacturing, Concurrent Engineering, etc, would be analyzed to check the values they have in common with the Agile Approach.

  4. Modeling of Hysteresis in Piezoelectric Actuator Based on Segment Similarity

    Directory of Open Access Journals (Sweden)

    Rui Xiong

    2015-11-01

To successfully exploit the full potential of piezoelectric actuators in micro/nano positioning systems, it is essential to model their hysteresis behavior accurately. A novel hysteresis model for piezoelectric actuators is proposed in this paper. First, segment similarity, which describes the similarity relationship between hysteresis curve segments with different turning points, is proposed. Time-scale similarity, which describes the similarity relationship between hysteresis curves with different rates, is used to address dynamic effects. The proposed model is formulated using these similarities. Finally, experiments are performed on a micro/nanometer movement platform system. The effectiveness of the proposed model is verified by comparison with the Preisach model. The experimental results show that the proposed model is able to precisely predict the hysteresis trajectories of piezoelectric actuators and performs better than the Preisach model.

  5. Personalized Predictive Modeling and Risk Factor Identification using Patient Similarity.

    Science.gov (United States)

    Ng, Kenney; Sun, Jimeng; Hu, Jianying; Wang, Fei

    2015-01-01

    Personalized predictive models are customized for an individual patient and trained using information from similar patients. Compared to global models trained on all patients, they have the potential to produce more accurate risk scores and capture more relevant risk factors for individual patients. This paper presents an approach for building personalized predictive models and generating personalized risk factor profiles. A locally supervised metric learning (LSML) similarity measure is trained for diabetes onset and used to find clinically similar patients. Personalized risk profiles are created by analyzing the parameters of the trained personalized logistic regression models. A 15,000 patient data set, derived from electronic health records, is used to evaluate the approach. The predictive results show that the personalized models can outperform the global model. Cluster analysis of the risk profiles show groups of patients with similar risk factors, differences in the top risk factors for different groups of patients and differences between the individual and global risk factors.

  6. Self-similar infall models for cold dark matter haloes

    Science.gov (United States)

    Le Delliou, Morgan Patrick

    2002-04-01

    How can we understand the mechanisms for relaxation and the constitution of the density profile in CDM halo formation? Can the old Self-Similar Infall Model (SSIM) be made to contain all the elements essential for this understanding? In this work, we have explored and improved the SSIM, showing it can at once explain large N-body simulations and indirect observations of real haloes alike. With the use of a carefully-crafted simple shell code, we have followed the accretion of secondary infalls in different settings, ranging from a model for mergers to a distribution of angular momentum for the shells, through the modeling of a central black hole. We did not assume self-similar accretion from initial conditions but allowed for it to develop and used coordinates that make it evident. We found self-similar accretion to appear very prominently in CDM halo formation as an intermediate stable (quasi-equilibrium) stage of Large Scale Structure formation. Dark Matter haloes density profiles are shown to be primarily influenced by non-radial motion. The merger paradigm reveals itself through the SSIM to be a secondary but non-trivial factor in those density profiles: it drives the halo profile towards a unique attractor, but the main factor for universality is still the self-similarity. The innermost density cusp flattening observed in some dwarf and Low Surface Brightness galaxies finds a natural and simple explanation in the SSIM embedding a central black hole. Relaxation in cold collisionless collapse is clarified by the SSIM. It is a continuous process involving only the newly-accreted particles for just a few dynamical times. All memory of initial energy is not lost so relaxation is only moderately violent. A sharp cut off, or population inversion, originates in initial conditions and is maintained through relaxation. It characterises moderately violent relaxation in the system's Distribution Function. Finally, the SSIM has shown this relaxation to arise from phase

  7. [Evaluation and improvement of a measure of drug name similarity, vwhtfrag, in relation to subjective similarities and experimental error rates].

    Science.gov (United States)

    Tamaki, Hirofumi; Satoh, Hiroki; Hori, Satoko; Sawada, Yasufumi

    2012-01-01

    Confusion of drug names is one of the most common causes of drug-related medical errors. A similarity measure of drug names, "vwhtfrag", was developed to discriminate whether drug name pairs are likely to cause confusion errors, and to provide information that would be helpful to avoid errors. The aim of the present study was to evaluate and improve vwhtfrag. Firstly, we evaluated the correlation of vwhtfrag with subjective similarity or error rate of drug name pairs in psychological experiments. Vwhtfrag showed a higher correlation to subjective similarity (college students: r=0.84) or error rate than did other conventional similarity measures (htco, cos1, edit). Moreover, name pairs that showed coincidences of the initial character strings had a higher subjective similarity than those which had coincidences of the end character strings and had the same vwhtfrag. Therefore, we developed a new similarity measure (vwhtfrag+), in which coincidence of initial character strings in name pairs is weighted by 1.53 times over coincidence of end character strings. Vwhtfrag+ showed a higher correlation to subjective similarity than did unmodified vwhtfrag. Further studies appear warranted to examine in detail whether vwhtfrag+ has superior ability to discriminate drug name pairs likely to cause confusion errors.

  8. Model Wind Turbines Tested at Full-Scale Similarity

    Science.gov (United States)

    Miller, M. A.; Kiefer, J.; Westergaard, C.; Hultmark, M.

    2016-09-01

The enormous length scales associated with modern wind turbines complicate any efforts to predict their mechanical loads and performance. Both experiments and numerical simulations are constrained by the large Reynolds numbers governing the full-scale aerodynamics. The limited fundamental understanding of Reynolds number effects in combination with the lack of empirical data affects our ability to predict, model, and design improved turbines and wind farms. A new experimental approach is presented, which utilizes a highly pressurized wind tunnel (up to 220 bar). It allows exact matching of the Reynolds numbers (no matter how it is defined), tip speed ratios, and Mach numbers on a geometrically similar, small-scale model. The design of a measurement and instrumentation stack to control the turbine and measure the loads in the pressurized environment is discussed. Results are then presented in the form of power coefficients as a function of Reynolds number and tip speed ratio. Due to gearbox power loss, a preliminary study has also been completed to find the gearbox efficiency and the resulting correction has been applied to the data set.
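Why pressurization achieves Reynolds matching follows directly from the definition Re = ρVL/μ: for a near-ideal gas, density scales with pressure while dynamic viscosity is nearly pressure-independent, so a 220 bar tunnel raises Re by roughly a factor of 220 at fixed speed and model size. A back-of-envelope sketch (the speeds and lengths are illustrative, not taken from the paper):

```python
def reynolds(rho: float, V: float, L: float, mu: float) -> float:
    """Re = rho * V * L / mu."""
    return rho * V * L / mu

mu_air = 1.8e-5  # Pa*s; dynamic viscosity of air, nearly constant with pressure

# Same small model (L = 0.2 m, V = 10 m/s) at ambient and at 220 bar:
re_1bar = reynolds(1.2, 10.0, 0.2, mu_air)
re_220bar = reynolds(1.2 * 220, 10.0, 0.2, mu_air)

print(re_220bar / re_1bar)  # pressure ratio carries straight through: 220
```

The ~2.9e7 value of re_220bar in this toy case is in the range of full-scale turbine blade Reynolds numbers, which a bench-scale model could never reach at atmospheric density.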

  9. Self-Similar Symmetry Model and Cosmic Microwave Background

    Directory of Open Access Journals (Sweden)

    Tomohide Sonoda

    2016-05-01

    Full Text Available In this paper, we present the self-similar symmetry (SSS model that describes the hierarchical structure of the universe. The model is based on the concept of self-similarity, which explains the symmetry of the cosmic microwave background (CMB. The approximate length and time scales of the six hierarchies of the universe---grand unification, electroweak unification, the atom, the pulsar, the solar system, and the galactic system---are derived from the SSS model. In addition, the model implies that the electron mass and gravitational constant could vary with the CMB radiation temperature.

  10. Simulation and similarity using models to understand the world

    CERN Document Server

    Weisberg, Michael

    2013-01-01

    In the 1950s, John Reber convinced many Californians that the best way to solve the state's water shortage problem was to dam up the San Francisco Bay. Against massive political pressure, Reber's opponents persuaded lawmakers that doing so would lead to disaster. They did this not by empirical measurement alone, but also through the construction of a model. Simulation and Similarity explains why this was a good strategy while simultaneously providing an account of modeling and idealization in modern scientific practice. Michael Weisberg focuses on concrete, mathematical, and computational models in his consideration of the nature of models, the practice of modeling, and nature of the relationship between models and real-world phenomena. In addition to a careful analysis of physical, computational, and mathematical models, Simulation and Similarity offers a novel account of the model/world relationship. Breaking with the dominant tradition, which favors the analysis of this relation through logical notions suc...

  11. Burridge-Knopoff model and self-similarity

    CERN Document Server

    Akishin, P G; Budnik, A D; Ivanov, V V; Antoniou, I

    1997-01-01

    Seismic processes are well known to be self-similar in both their spatial and temporal behavior. At the same time, the Burridge-Knopoff (BK) model of earthquake fault dynamics, one of the basic models of theoretical seismicity, does not possess self-similarity. In this article an extension of the BK model is presented, which directly accounts for the self-similarity of the elastic properties of the earth's crust by introducing nonlinear terms for the inter-block springs of the BK model. A phase space analysis of the model has shown that it behaves like a system of coupled, randomly kicked oscillators. The nonlinear stiffness terms cause synchronization of the collective motion and produce stronger seismic events.

  12. Similar Constructive Method for Solving a nonlinearly Spherical Percolation Model

    Directory of Open Access Journals (Sweden)

    WANG Yong

    2013-01-01

    Full Text Available In view of the nonlinear spherical percolation problem of a dual-porosity reservoir, a mathematical model considering three types of outer boundary conditions (closed, constant pressure, and infinite) was established in this paper. The mathematical model was linearized by a change of variable and, by Laplace transformation, became a boundary value problem of an ordinary differential equation in Laplace space. It was verified that the boundary value problem with one type of outer boundary has a solution of similar structure, and a new method, the Similar Constructive Method, was obtained for solving such boundary value problems. By this method, solutions with similar structure under the other two outer boundary conditions were obtained. The Similar Constructive Method raises the efficiency of solving such percolation models.

  13. THE IMPROVED XINANJIANG MODEL

    Institute of Scientific and Technical Information of China (English)

    LI Zhi-jia; YAO Cheng; KONG Xiang-guang

    2005-01-01

    To improve the Xinanjiang model, runoff generation from infiltration excess is added to the model, which introduces another six parameters. In principle, the improved Xinanjiang model can be used to simulate runoff in humid, semi-humid and also semi-arid regions. The application to the Yi River shows that the improved Xinanjiang model can forecast discharge with higher accuracy and satisfy practical requirements, which also shows that the improved model is reasonable.

  14. Bounding SAR ATR performance based on model similarity

    Science.gov (United States)

    Boshra, Michael; Bhanu, Bir

    1999-08-01

    Similarity between model targets plays a fundamental role in determining the performance of target recognition. We analyze the effect of model similarity on the performance of a vote-based approach for target recognition from SAR images. In such an approach, each model target is represented by a set of SAR views sampled at a variety of azimuth angles and a specific depression angle. Both model and data views are represented by locations of scattering centers, which are peak features. The model hypothesis (view of a specific target and associated location) corresponding to a given data view is chosen to be the one with the highest number of data-supported model features (votes). We address three issues in this paper. Firstly, we present a quantitative measure of the similarity between a pair of model views. Such a measure depends on the degree of structural overlap between the two views, and the amount of uncertainty. Secondly, we describe a similarity-based framework for predicting an upper bound on recognition performance in the presence of uncertainty, occlusion and clutter. Thirdly, we validate the proposed framework using MSTAR public data, which are obtained under different depression angles, configurations and articulations.
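The vote-based scheme described above can be sketched as counting, for each model view, the scattering centers that find a supporting data peak within an uncertainty radius, then picking the hypothesis with the most votes. The point format, the Chebyshev tolerance, and the target names below are illustrative assumptions.

```python
def votes(model_points, data_points, tol=1):
    """Count model scattering centers supported by a data peak within a
    location tolerance (Chebyshev distance here; an assumption)."""
    def supported(p):
        return any(max(abs(p[0] - q[0]), abs(p[1] - q[1])) <= tol
                   for q in data_points)
    return sum(1 for p in model_points if supported(p))

def best_hypothesis(models, data_points, tol=1):
    """Choose the model view with the highest number of votes."""
    return max(models, key=lambda name: votes(models[name], data_points, tol))
```

With a data view whose peaks all lie within the tolerance of one model's centers, that model collects the full vote count and wins the hypothesis selection.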

  15. Exploring information from the topology beneath the Gene Ontology terms to improve semantic similarity measures.

    Science.gov (United States)

    Zhang, Shu-Bo; Lai, Jian-Huang

    2016-07-15

    Measuring the similarity between pairs of biological entities is important in molecular biology. The introduction of Gene Ontology (GO) provides us with a promising approach to quantifying the semantic similarity between two genes or gene products. This kind of similarity measure is closely associated with the GO terms annotated to the biological entities under consideration and the structure of the GO graph. However, previous works in this field mainly focused on the upper part of the graph and seldom considered the lower part. In this study, we aim to explore information from the lower part of the GO graph for better semantic similarity. We proposed a framework to quantify the similarity measure beneath a term pair, which takes into account both the information two ancestral terms share and the probability that they co-occur with their common descendants. The effectiveness of our approach was evaluated against seven typical measurements on the public platform CESSM, protein-protein interaction and gene expression datasets. Experimental results consistently show that the similarity derived from the lower part contributes to better semantic similarity measure. The promising features of our approach are the following: (1) it provides a mirror model to characterize the information two ancestral terms share with respect to their common descendant; (2) it quantifies the probability that two terms co-occur with their common descendant in an efficient way; and (3) our framework can effectively capture the similarity measure beneath two terms, which can serve as an add-on to improve traditional semantic similarity measure between two GO terms. The algorithm was implemented in Matlab and is freely available from http://ejl.org.cn/bio/GOBeneath/. Copyright © 2016 Elsevier B.V. All rights reserved.
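The paper's exact measure (shared ancestral information combined with descendant co-occurrence probability) is not reproduced in the abstract. As a minimal stand-in, the sketch below scores a term pair by the Jaccard overlap of their descendant sets in a child-list DAG, just to make the "information beneath the terms" idea concrete; it is not the published algorithm.

```python
def descendants(graph, term):
    """All descendants of a term in a child-list DAG (term itself excluded)."""
    seen, stack = set(), [term]
    while stack:
        for child in graph.get(stack.pop(), ()):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def lower_similarity(graph, a, b):
    """Jaccard overlap of descendant sets: a simplified proxy for
    similarity information taken from below a GO term pair."""
    da, db = descendants(graph, a), descendants(graph, b)
    union = da | db
    return len(da & db) / len(union) if union else 0.0
```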

  16. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    Energy Technology Data Exchange (ETDEWEB)

    Ovacik, Meric A. [Chemical and Biochemical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States); Androulakis, Ioannis P., E-mail: yannis@rci.rutgers.edu [Chemical and Biochemical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States); Biomedical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States)

    2013-09-15

    Pathway-based information has become an important source of information both for establishing evolutionary relationships and for understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison to inform evolutionary relationships, and comparison to extrapolate species differences, which is used in a number of applications including drug and toxicity testing. Cross-species comparison of metabolic pathways is complex, as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for enzyme sequence similarity in addition to reaction alignment. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing the conservation of the glycolysis and citrate cycle pathways. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism, as well as a more complete study involving 36 pathways common to all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway-specific cross-species similarities and differences based on NCBI taxonomy.

  17. Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.

    Science.gov (United States)

    Ye, Jun

    2015-03-01

    In pattern recognition and medical diagnosis, similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposed improved cosine similarity measures of SNSs based on the cosine function, including single valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Then, weighted cosine similarity measures of SNSs were introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures was proposed to solve medical diagnosis problems with simplified neutrosophic information. We compared the improved cosine similarity measures of SNSs with existing cosine similarity measures of SNSs by numerical examples to demonstrate their effectiveness and rationality for overcoming some shortcomings of the existing measures in some cases. In the medical diagnosis method, we can find a proper diagnosis by the cosine similarity measures between the symptoms and the considered diseases, which are represented by SNSs. The medical diagnosis method based on the improved cosine similarity measures was then applied to two medical diagnosis problems to show the applications and effectiveness of the proposed method. Both numerical examples demonstrated that the improved cosine similarity measures of SNSs based on the cosine function can overcome the shortcomings of the existing cosine similarity measures between two vectors in some cases. In the two medical diagnosis problems, the diagnoses using various similarity measures of SNSs indicated identical diagnosis results and demonstrated the effectiveness and rationality of the diagnosis method proposed in this paper.
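Ye's improved single-valued cosine measures are defined through a cosine of scaled component differences. The sketch below implements a sum-based form written from that description, averaging cos(π(|ΔT|+|ΔI|+|ΔF|)/6) over the elements; treat the exact scaling as an assumption (the published variants use either the maximum or the sum of the truth, indeterminacy and falsity differences).

```python
import math

def improved_cosine_sns(A, B):
    """Sum-based single-valued neutrosophic cosine similarity sketch:
    (1/n) * sum over elements of cos(pi * (|dT| + |dI| + |dF|) / 6).
    A and B are equal-length lists of (T, I, F) triples in [0, 1]."""
    assert len(A) == len(B)
    total = 0.0
    for (t1, i1, f1), (t2, i2, f2) in zip(A, B):
        total += math.cos(math.pi * (abs(t1 - t2) + abs(i1 - i2) + abs(f1 - f2)) / 6)
    return total / len(A)
```

Identical sets score 1, and maximally different triples such as (1,0,0) vs (0,1,1) score 0, which is the behavior the improved measures are designed to restore.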

  18. Morphological similarities between DBM and a microeconomic model of sprawl

    Science.gov (United States)

    Caruso, Geoffrey; Vuidel, Gilles; Cavailhès, Jean; Frankhauser, Pierre; Peeters, Dominique; Thomas, Isabelle

    2011-03-01

    We present a model that simulates the growth of a metropolitan area on a 2D lattice. The model is dynamic and based on microeconomics. Households show preferences for nearby open spaces and neighbourhood density. They compete on the land market. They travel along a road network to access the CBD. A planner ensures the connectedness and maintenance of the road network. The spatial pattern of houses, green spaces and road network self-organises, emerging from the agents' individualistic decisions. We perform several simulations and vary residential preferences. Our results show morphologies and transition phases that are similar to Dielectric Breakdown Models (DBM). Such similarities were observed earlier by other authors, but we show here that it can be deduced from the functioning of the land market and thus explicitly connected to urban economic theory.

  19. Automated Student Model Improvement

    Science.gov (United States)

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  20. Similarity-based semi-local estimation of EMOS models

    CERN Document Server

    Lerch, Sebastian

    2015-01-01

    Weather forecasts are typically given in the form of forecast ensembles obtained from multiple runs of numerical weather prediction models with varying initial conditions and physics parameterizations. Such ensemble predictions tend to be biased and underdispersive and thus require statistical postprocessing. In the ensemble model output statistics (EMOS) approach, a probabilistic forecast is given by a single parametric distribution with parameters depending on the ensemble members. This article proposes two semi-local methods for estimating the EMOS coefficients where the training data for a specific observation station are augmented with corresponding forecast cases from stations with similar characteristics. Similarities between stations are determined using either distance functions or clustering based on various features of the climatology, forecast errors, ensemble predictions and locations of the observation stations. In a case study on wind speed over Europe with forecasts from the Grand Limited Area...
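A minimal sketch of the distance-based variant described above: stations are described by feature vectors (e.g. mean forecast error, altitude), and each station's EMOS training set is augmented with forecast cases from its k most similar stations. The feature choice and Euclidean distance are assumptions for illustration, not the article's exact configuration.

```python
import math

def nearest_station_pools(features, k=2):
    """For each station, list the station itself plus its k most similar
    other stations by Euclidean distance in feature space; forecast cases
    from these stations would be pooled when estimating EMOS coefficients."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    pools = {}
    for name, feat in features.items():
        others = sorted((o for o in features if o != name),
                        key=lambda o: dist(feat, features[o]))
        pools[name] = [name] + others[:k]
    return pools
```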

  1. Lie algebraic similarity transformed Hamiltonians for lattice model systems

    Science.gov (United States)

    Wahlen-Strothman, Jacob M.; Jiménez-Hoyos, Carlos A.; Henderson, Thomas M.; Scuseria, Gustavo E.

    2015-01-01

    We present a class of Lie algebraic similarity transformations generated by exponentials of two-body on-site Hermitian operators whose Hausdorff series can be summed exactly without truncation. The correlators are defined over the entire lattice and include the Gutzwiller factor n_{i↑} n_{i↓}, and two-site products of density (n_{i↑} + n_{i↓}) and spin (n_{i↑} − n_{i↓}) operators. The resulting non-Hermitian many-body Hamiltonian can be solved in a biorthogonal mean-field approach with polynomial computational cost. The proposed similarity transformation generates locally weighted orbital transformations of the reference determinant. Although the energy of the model is unbound, projective equations in the spirit of coupled cluster theory lead to well-defined solutions. The theory is tested on the one- and two-dimensional repulsive Hubbard model, where it yields accurate results for small and medium-sized interaction strengths.

  2. A Fuzzy Similarity Based Concept Mining Model for Text Classification

    CERN Document Server

    Puri, Shalini

    2012-01-01

    Text Classification is a challenging and a red hot field in the current scenario and has great importance in text categorization applications. A lot of research work has been done in this field but there is a need to categorize a collection of text documents into mutually exclusive categories by extracting the concepts or features using supervised learning paradigm and different classification algorithms. In this paper, a new Fuzzy Similarity Based Concept Mining Model (FSCMM) is proposed to classify a set of text documents into pre-defined Category Groups (CG) by providing them training and preparing on the sentence, document and integrated corpora levels along with feature reduction, ambiguity removal on each level to achieve high system performance. Fuzzy Feature Category Similarity Analyzer (FFCSA) is used to analyze each extracted feature of Integrated Corpora Feature Vector (ICFV) with the corresponding categories or classes. This model uses Support Vector Machine Classifier (SVMC) to classify correct...

  3. Face Recognition Performance Improvement using a Similarity Score of Feature Vectors based on Probabilistic Histograms

    Directory of Open Access Journals (Sweden)

    SRIKOTE, G.

    2016-08-01

    Full Text Available This paper proposes an improved performance algorithm for face recognition to identify two-face mismatch pairs in cases of incorrect decisions. The primary feature of this method is to deploy the similarity score with respect to Gaussian components between two previously unseen faces. Unlike conventional classical vector distance measurement, our algorithms also consider the plot of the summation of the similarity index versus face feature vector distance. A mixture of Gaussian models of labeled faces is also widely applicable to different biometric system parameters. By comparative evaluations, it has been shown that the efficiency of the proposed algorithm is superior to that of the conventional algorithm by an average accuracy of up to 1.15% and 16.87% when compared with 3x3 Multi-Region Histogram (MRH) direct-bag-of-features and Principal Component Analysis (PCA)-based face recognition systems, respectively. The experimental results show that similarity score consideration is more discriminative for face recognition than feature distance. Experimental results on the Labeled Faces in the Wild (LFW) data set demonstrate that our algorithms are suitable for real-application probe-to-gallery identification in face recognition systems. Moreover, this proposed method can also be applied to other recognition systems and therefore additionally improves recognition scores.

  4. APPLICABILITY OF SIMILARITY CONDITIONS TO ANALOGUE MODELLING OF TECTONIC STRUCTURES

    Directory of Open Access Journals (Sweden)

    Mikhail A. Goncharov

    2015-09-01

    Full Text Available The publication is aimed at comparing the concepts of V.V. Belousov and M.V. Gzovsky, outstanding researchers who established the fundamentals of tectonophysics in Russia, specifically similarity conditions in application to tectonophysical modeling. Quotations from their publications illustrate differences in their views. In this respect, we can reckon V.V. Belousov as a «realist» as he supported «the liberal point of view» [Methods of modelling…, 1988, p. 21–22], whereas M.V. Gzovsky can be regarded as an «idealist» as he believed that similarity conditions should be mandatorily applied to ensure correctness of physical modeling of tectonic deformations and structures [Gzovsky, 1975, pp. 88 and 94]. Objectives of the present publication are (1) to be another reminder about the desirability of compliance with similarity conditions in experimental tectonics; (2) to point out difficulties in ensuring such compliance; (3) to give examples which bring out the fact that similarity conditions are often met per se, i.e. automatically observed; and (4) to show that modeling can be simplified in some cases without compromising quantitative estimations of parameters of structure formation. (1) Physical modelling of tectonic deformations and structures should be conducted, if possible, in compliance with conditions of geometric and physical similarity between experimental models and corresponding natural objects. In any case, a researcher should have a clear vision of the conditions applicable to each particular experiment. (2) Application of similarity conditions is often challenging due to unavoidable difficulties caused by the following: (a) imperfection of experimental equipment and technologies (Fig. 1 to 3); (b) uncertainties in estimating parameters of formation of natural structures, including the main ones: structure size (Fig. 4), time of formation (Fig. 5), and deformation properties of the medium wherein such structures are formed, including, first of all, viscosity (Fig. 6)

  5. Self-similar two-particle separation model

    DEFF Research Database (Denmark)

    Lüthi, Beat; Berg, Jacob; Ott, Søren

    2007-01-01

    We present a new stochastic model for relative two-particle separation in turbulence. Inspired by material line stretching, we suggest that a similar process also occurs beyond the viscous range, with time scaling according to the longitudinal second-order structure function S2(r), e.g. in the inertial range as ε^(−1/3) r^(2/3). Particle separation is modeled as a Gaussian process without invoking information on Eulerian acceleration statistics or on the precise shapes of Eulerian velocity distribution functions. The time scale is a function of S2(r) and thus of the Lagrangian evolving separation. The model predictions agree with numerical and experimental results for various initial particle separations. We present model results for fixed-time and fixed-scale statistics. We find that for the Richardson-Obukhov law, i.e., ⟨r²⟩ = g ε t³, to hold and to also be observed in experiments, high Reynolds numbers are required.
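The Richardson-Obukhov law referred to above, ⟨r²⟩ = g ε t³, is a pure power law, so a quick numerical check recovers the exponent 3 from a log-log slope; the values of g and ε below are placeholders for illustration, not values from the study.

```python
import math

# Richardson-Obukhov law: <r^2>(t) = g * eps * t^3 in the inertial range.
# g and eps are placeholder values for illustration.
g, eps = 0.5, 1e-3
ts = [0.1 * (k + 1) for k in range(10)]
r2 = [g * eps * t ** 3 for t in ts]

# The log-log slope between the first and last samples recovers the exponent 3.
slope = (math.log(r2[-1]) - math.log(r2[0])) / (math.log(ts[-1]) - math.log(ts[0]))
```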

  6. TAR: an improved process similarity measure based on unfolding of Petri nets

    Institute of Scientific and Technical Information of China (English)

    WANG Wen-xing; WANG Jian-min

    2012-01-01

    Determining the degree of similarity between process models is very important for their management, reuse, and analysis. Current approaches either focus on a process model's structural aspect, or suffer from inefficiency or imprecision in behavioral similarity. Aiming at these problems, a novel similarity measure, which extends an existing method named Transition Adjacent Relation (TAR) with improved precision and efficiency, was proposed. The ability to measure similarity was extended by eliminating duplicate tasks without impacting the behaviors. For precision, TARs were classified into repeatable and unrepeatable ones to identify whether a TAR is involved in a loop. Two new kinds of TARs were added: one related to the invisible tasks after the source place and before the sink place, and the other representing implicit dependencies. For efficiency, all TARs were calculated based on the unfolding of a labeled Petri net instead of its reachability graph, to avoid state space explosion. Experiments on artificial and real-world process models showed the effectiveness and efficiency of the proposed method.

  7. A framework for similarity recognition of CAD models

    Directory of Open Access Journals (Sweden)

    Leila Zehtaban

    2016-07-01

    Full Text Available A designer is mainly supported by two essential factors in design decisions: intelligence and experience, which aid the designer by predicting the interconnection between the required design parameters. Through classification of product data and similarity recognition between new and existing designs, it is partially possible to substitute for the experience an inexperienced designer lacks. Given this context, the current paper addresses a framework for recognition and flexible retrieval of similar models in product design. The idea is to establish an infrastructure for transferring design as well as the required PLM (Product Lifecycle Management) know-how to the design phase of product development in order to reduce the design time. Furthermore, such a method can be applied as a brainstorming method for new and creative product development as well. The proposed framework has been tested and benchmarked, showing promising results.

  8. Similarity transformation approach to identifiability analysis of nonlinear compartmental models.

    Science.gov (United States)

    Vajda, S; Godfrey, K R; Rabitz, H

    1989-04-01

    Through use of the local state isomorphism theorem instead of the algebraic equivalence theorem of linear systems theory, the similarity transformation approach is extended to nonlinear models, resulting in finitely verifiable sufficient and necessary conditions for global and local identifiability. The approach requires testing of certain controllability and observability conditions, but in many practical examples these conditions prove very easy to verify. In principle the method also involves nonlinear state variable transformations, but in all of the examples presented in the paper the transformations turn out to be linear. The method is applied to an unidentifiable nonlinear model and a locally identifiable nonlinear model, and these are the first nonlinear models other than bilinear models where the reason for lack of global identifiability is nontrivial. The method is also applied to two models with Michaelis-Menten elimination kinetics, both of considerable importance in pharmacokinetics, and for both of which the complicated nature of the algebraic equations arising from the Taylor series approach has hitherto defeated attempts to establish identifiability results for specific input functions.

  9. A Fuzzy Similarity Based Concept Mining Model for Text Classification

    Directory of Open Access Journals (Sweden)

    Shalini Puri

    2011-11-01

    Full Text Available Text Classification is a challenging and a red hot field in the current scenario and has great importance in text categorization applications. A lot of research work has been done in this field but there is a need to categorize a collection of text documents into mutually exclusive categories by extracting the concepts or features using supervised learning paradigm and different classification algorithms. In this paper, a new Fuzzy Similarity Based Concept Mining Model (FSCMM) is proposed to classify a set of text documents into pre-defined Category Groups (CG) by providing them training and preparing on the sentence, document and integrated corpora levels along with feature reduction, ambiguity removal on each level to achieve high system performance. Fuzzy Feature Category Similarity Analyzer (FFCSA) is used to analyze each extracted feature of Integrated Corpora Feature Vector (ICFV) with the corresponding categories or classes. This model uses Support Vector Machine Classifier (SVMC) to classify correctly the training data patterns into two groups, i.e., +1 and −1, thereby producing accurate and correct results. The proposed model works efficiently and effectively with great performance and high-accuracy results.

  10. Robust hashing with local models for approximate similarity search.

    Science.gov (United States)

    Song, Jingkuan; Yang, Yi; Li, Xuelong; Huang, Zi; Yang, Yang

    2014-07-01

    Similarity search plays an important role in many applications involving high-dimensional data. Due to the known dimensionality curse, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods make use of randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely, robust hashing with local models (RHLM), which learns a set of robust hash functions to map the high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method by employing l2,1-norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into the query hash code by the hash functions and then explores the buckets, which have similar hash codes to the query hash code. Extensive experimental results conducted on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.
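At query time, hashing methods of this kind map the query to a binary code and then probe buckets with similar codes. The multi-probe lookup below is a generic sketch of that last step (RHLM's learned hash functions and l2,1-norm training are not reproduced); codes are tuples of bits and the index is a dict from code to bucket contents, both assumptions for illustration.

```python
import itertools

def hamming_ball_search(index, query_code, radius=1):
    """Probe all buckets whose binary codes lie within a Hamming radius
    of the query code, collecting their contents in probe order."""
    nbits = len(query_code)
    results = []
    for r in range(radius + 1):
        for flips in itertools.combinations(range(nbits), r):
            code = list(query_code)
            for i in flips:
                code[i] ^= 1          # flip the chosen bits
            results.extend(index.get(tuple(code), []))
    return results
```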

  11. Improvement on GM models

    Institute of Scientific and Technical Information of China (English)

    DANG Yao-guo; LIU Si-feng; LIU Bin

    2004-01-01

    Since grey system theory was established by Prof. Deng, GM models and their improvements have all taken the first vector of the original sequence as the initialization, which results in a deficiency in making use of the latest information. Based on the principle that new information should be used fully, we think it is scientific to pay more attention to the new information or to endow it with more weight. So, this paper deals with improving GM models by taking the n-th vector as the initialization, and obtains a great improvement in forecasting precision. Last, we validate the practicability and reliability of the models with examples.
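The initialization change described above can be sketched on the standard GM(1,1) model: the classic form anchors the whitened exponential solution at the first point, while the improvement anchors it at the n-th (newest) accumulated value. Parameter estimation below follows the usual least-squares step; treat the code as an illustrative sketch, not the paper's exact formulation.

```python
from math import exp

def gm11(x0, init_at_end=True):
    """GM(1,1) grey model sketch. init_at_end=False anchors the solution
    at the first point (classic form); init_at_end=True anchors it at the
    n-th accumulated value, weighting the newest information."""
    n = len(x0)
    x1, s = [], 0.0
    for v in x0:                       # accumulated generating operation (AGO)
        s += v
        x1.append(s)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]   # background values
    y = x0[1:]
    m = n - 1
    # Least squares for x0(k) = -a*z(k) + b via 2x2 normal equations.
    s_zz = sum(v * v for v in z)
    s_z, s_y = sum(z), sum(y)
    s_zy = sum(v * w for v, w in zip(z, y))
    det = m * s_zz - s_z * s_z
    a = -(m * s_zy - s_z * s_y) / det  # development coefficient
    b = (s_zz * s_y - s_z * s_zy) / det
    c = b / a
    anchor, k0 = (x1[-1], n) if init_at_end else (x0[0], 1)
    def x1_hat(k):                     # fitted accumulated value, k = 1..n
        return (anchor - c) * exp(-a * (k - k0)) + c
    return a, b, x1_hat
```

By construction the improved variant reproduces the newest accumulated value exactly, which is the sense in which it weights fresh information.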

  12. Improved structural similarity metric for the visible quality measurement of images

    Science.gov (United States)

    Lee, Daeho; Lim, Sungsoo

    2016-11-01

    The visible quality assessment of images is important for evaluating the performance of image processing methods such as image correction, compression, and enhancement. The structural similarity is widely used to determine visible quality; however, existing structural similarity metrics cannot correctly assess the perceived human visibility of images that have been slightly geometrically transformed or images that have undergone significant regional distortion. We propose an improved structural similarity metric that is closer to human visual evaluation. Compared with the existing metrics, the proposed method can more correctly evaluate the similarity between an original image and various distorted images.
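For reference, the standard structural similarity index that such metrics build on combines luminance, contrast and structure comparisons. The single-window version below uses whole-signal statistics rather than the usual sliding Gaussian window, with the conventional constants for an 8-bit intensity range; it illustrates the baseline quantity, not the improved metric proposed in the paper.

```python
def ssim_global(x, y, c1=6.5025, c2=58.5225):
    """Single-window SSIM over two equal-length intensity lists.
    c1 = (0.01*255)**2 and c2 = (0.03*255)**2 are the usual constants
    for 8-bit images; whole-signal statistics, not a sliding window."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((v - mx) ** 2 for v in x) / n
    vy = sum((v - my) ** 2 for v in y) / n
    cov = sum((u - mx) * (v - my) for u, v in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical signals score 1, and a uniform brightness shift lowers only the luminance term, so the score drops below 1.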

  13. Sea Ice Detection Based on an Improved Similarity Measurement Method Using Hyperspectral Data

    Science.gov (United States)

    Han, Yanling; Li, Jue; Zhang, Yun; Hong, Zhonghua; Wang, Jing

    2017-01-01

    Hyperspectral remote sensing technology can acquire nearly continuous spectrum information and rich sea ice image information, thus providing an important means of sea ice detection. However, the correlation and redundancy among hyperspectral bands reduce the accuracy of traditional sea ice detection methods. Based on the spectral characteristics of sea ice, this study presents an improved similarity measurement method based on linear prediction (ISMLP) to detect sea ice. First, the first original band with a large amount of information is determined based on mutual information theory. Subsequently, a second original band with the least similarity is chosen by the spectral correlation measuring method. Finally, subsequent bands are selected through the linear prediction method, and a support vector machine classifier model is applied to classify sea ice. In experiments performed on images of Baffin Bay and Bohai Bay, comparative analyses were conducted to compare the proposed method and traditional sea ice detection methods. Our proposed ISMLP method achieved the highest classification accuracies (91.18% and 94.22%) in both experiments. These results show that the ISMLP method exhibits better overall performance than the other methods and can be effectively applied to hyperspectral sea ice detection. PMID:28505135

  14. Model-observer similarity, error modeling and social learning in rhesus macaques.

    Science.gov (United States)

    Monfardini, Elisabetta; Hadj-Bouziane, Fadila; Meunier, Martine

    2014-01-01

    Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended, on the contrary, to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, like preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.

  15. Towards Modelling Variation in Music as Foundation for Similarity

    NARCIS (Netherlands)

    Volk, A.; de Haas, W.B.; van Kranenburg, P.; Cambouropoulos, E.; Tsougras, C.; Mavromatis, P.; Pastiadis, K.

    2012-01-01

    This paper investigates the concept of variation in music from the perspective of music similarity. Music similarity is a central concept in Music Information Retrieval (MIR); however, no comprehensive approach to music similarity exists yet. As a consequence, MIR faces the challenge of how to

  16. Representational similarity encoding for fMRI: Pattern-based synthesis to predict brain activity using stimulus-model-similarities.

    Science.gov (United States)

    Anderson, Andrew James; Zinszer, Benjamin D; Raizada, Rajeev D S

    2016-03-01

    Patterns of neural activity are systematically elicited as the brain experiences categorical stimuli and a major challenge is to understand what these patterns represent. Two influential approaches, hitherto treated as separate analyses, have targeted this problem by using model-representations of stimuli to interpret the corresponding neural activity patterns. Stimulus-model-based-encoding synthesizes neural activity patterns by first training weights to map between stimulus-model features and voxels. This allows novel model-stimuli to be mapped into voxel space, and hence the strength of the model to be assessed by comparing predicted against observed neural activity. Representational Similarity Analysis (RSA) assesses models by testing how well the grand structure of pattern-similarities measured between all pairs of model-stimuli aligns with the same structure computed from neural activity patterns. RSA does not require model fitting, but also does not allow synthesis of neural activity patterns, thereby limiting its applicability. We introduce a new approach, representational similarity-encoding, that builds on the strengths of RSA and robustly enables stimulus-model-based neural encoding without model fitting. The approach therefore sidesteps problems associated with overfitting that notoriously confront any approach requiring parameter estimation (and is consequently low cost computationally), and importantly enables encoding analyses to be incorporated within the wider Representational Similarity Analysis framework. We illustrate this new approach by using it to synthesize and decode fMRI patterns representing the meanings of words, and discuss its potential biological relevance to encoding in semantic memory. Our new similarity-based encoding approach unites the two previously disparate methods of encoding models and RSA, capturing the strengths of both, and enabling similarity-based synthesis of predicted fMRI patterns.
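
    The core of similarity-based encoding, synthesizing a voxel pattern for a new stimulus from its model-space similarities to training stimuli without fitting any weights, can be sketched as below. The correlation similarity and positive-weight normalization are illustrative choices, not necessarily the authors' exact formulation:

```python
import numpy as np

def similarity_encode(model_train, brain_train, model_test):
    """Synthesize voxel patterns for test stimuli as similarity-weighted
    combinations of training patterns (no parameter fitting)."""
    def z(a):
        return (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)
    # test-by-train Pearson correlation in model feature space
    sims = z(model_test) @ z(model_train).T / model_train.shape[1]
    w = np.clip(sims, 0.0, None)          # keep positive similarities only
    w /= w.sum(axis=1, keepdims=True)
    return w @ brain_train

rng = np.random.default_rng(0)
model_train = rng.random((20, 10))        # 20 stimuli x 10 model features
voxel_map = rng.random((10, 50))          # hypothetical feature-to-voxel map
brain_train = model_train @ voxel_map     # noiseless synthetic fMRI patterns
pred = similarity_encode(model_train, brain_train, model_train[:1])
```

    Because no weights are estimated, there is nothing to overfit, which is the computational advantage the abstract highlights over stimulus-model-based encoding.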

  17. QSAR models based on quantum topological molecular similarity.

    Science.gov (United States)

    Popelier, P L A; Smith, P J

    2006-07-01

    A new method called quantum topological molecular similarity (QTMS) was fairly recently proposed [J. Chem. Inf. Comp. Sc., 41, 2001, 764] to construct a variety of medicinal, ecological and physical organic QSAR/QSPRs. The QTMS method uses quantum chemical topology (QCT) to define electronic descriptors drawn from modern ab initio wave functions of geometry-optimised molecules. It was shown that the current abundance of computing power can be utilised to inject realistic descriptors into QSAR/QSPRs. In this article we study seven datasets of medicinal interest: the dissociation constants (pK(a)) for a set of substituted imidazolines, the pK(a) of imidazoles, the ability of a set of indole derivatives to displace [(3)H]flunitrazepam from binding to bovine cortical membranes, the influenza inhibition constants for a set of benzimidazoles, the interaction constants for a set of amides and the enzyme liver alcohol dehydrogenase, the natriuretic activity of sulphonamide carbonic anhydrase inhibitors, and the toxicity of a series of benzyl alcohols. A partial least squares analysis in conjunction with a genetic algorithm delivered excellent models. They are also able to highlight the active site of the ligand or molecule, that is, the part whose structure determines the activity. The advantages and limitations of QTMS are discussed.

  18. Similarity Reduction and Integrability for the Nonlinear Wave Equations from EPM Model

    Institute of Scientific and Technical Information of China (English)

    YAN ZhenYa

    2001-01-01

    Four types of similarity reductions are obtained for the nonlinear wave equation arising in the elasto-plastic microstructure model by using both the direct method due to Clarkson and Kruskal and the improved direct method due to Lou. As a result, the nonlinear wave equation is not integrable.

  19. An improved method for scoring protein-protein interactions using semantic similarity within the gene ontology

    Directory of Open Access Journals (Sweden)

    Jain Shobhit

    2010-11-01

    Full Text Available Abstract Background Semantic similarity measures are useful to assess the physiological relevance of protein-protein interactions (PPIs). They quantify similarity between proteins based on their function using annotation systems like the Gene Ontology (GO). Proteins that interact in the cell are likely to be in similar locations or involved in similar biological processes compared to proteins that do not interact. Thus, the more semantically similar the gene function annotations are among the interacting proteins, the more likely the interaction is physiologically relevant. However, most semantic similarity measures used for PPI confidence assessment do not consider the unequal depth of term hierarchies in the cellular component, molecular function, and biological process ontologies of GO and may thus over- or under-estimate similarity. Results We describe an improved algorithm, Topological Clustering Semantic Similarity (TCSS), to compute semantic similarity between GO terms annotated to proteins in interaction datasets. Our algorithm considers the unequal depth of biological knowledge representation in different branches of the GO graph. The central idea is to divide the GO graph into sub-graphs and score PPIs higher if the participating proteins belong to the same sub-graph than if they belong to different sub-graphs. Conclusions The TCSS algorithm performs better than the other semantic similarity measures we evaluated in terms of distinguishing true from false protein interactions, and correlation with gene expression and protein families. We show an average improvement of 4.6 times the F1 score over Resnik, the next best method, on our Saccharomyces cerevisiae PPI dataset and 2 times on our Homo sapiens PPI dataset using cellular component, biological process and molecular function GO annotations.
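
    The sub-graph intuition can be illustrated with a toy ontology. The sketch below is far simpler than TCSS, which computes graded topological clustering scores; it only shows the central rule of scoring an interaction by whether the two proteins' annotations fall in the same branch of the ontology:

```python
# Toy ontology: child term -> list of parent terms ("molecular_function" is root)
ONTOLOGY = {
    "binding": ["molecular_function"],
    "dna_binding": ["binding"],
    "rna_binding": ["binding"],
    "catalysis": ["molecular_function"],
    "hydrolase": ["catalysis"],
}

def subgraph_root(term):
    """Climb to the child of the root that contains `term` (its sub-graph)."""
    while ONTOLOGY.get(term, ["molecular_function"])[0] != "molecular_function":
        term = ONTOLOGY[term][0]
    return term

def ppi_score(terms_a, terms_b):
    """Score 1.0 if any annotation pair shares a sub-graph, else 0.0."""
    roots_a = {subgraph_root(t) for t in terms_a}
    roots_b = {subgraph_root(t) for t in terms_b}
    return 1.0 if roots_a & roots_b else 0.0

print(ppi_score(["dna_binding"], ["rna_binding"]))  # same sub-graph -> 1.0
print(ppi_score(["dna_binding"], ["hydrolase"]))    # different sub-graphs -> 0.0
```

    The term names and the binary scoring are hypothetical; TCSS additionally clusters the graph by information content and returns continuous scores.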

  20. Importance of perceived similarity in improving children's attitudes toward mentally retarded peers.

    Science.gov (United States)

    Siperstein, G N; Chatillon, A C

    1982-03-01

    Effects of perceived similarity on fifth- and sixth-grade children's attitudes toward mentally retarded peers were examined. Children were selected from schools that contained segregated classes of retarded pupils (exposed setting) and schools that had no retarded pupils enrolled (nonexposed). Attitudes were defined in terms of children's affective feelings and behavioral intentions. Results showed that children responded more positively toward a retarded target who was depicted as similar to them than toward one who was not. Unexpectedly, the positive effects of perceived similarity were observed only among children in the exposed schools. Also, girls were more positive toward a female target than boys were to a male target, regardless of whether the target was perceived as similar. The importance of developing strategies based on theories of interpersonal attraction to improve children's attitudes toward their retarded peers was discussed.

  1. A Model-Based Approach to Constructing Music Similarity Functions

    Directory of Open Access Journals (Sweden)

    Lamere Paul

    2007-01-01

    Full Text Available Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.
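
    A minimal version of the feature-plus-distance pipeline such systems start from might look as follows; the two features here (a normalized spectral centroid and the zero-crossing rate) are illustrative stand-ins for the richer timbral and rhythmic feature sets mentioned above:

```python
import numpy as np

def zero_crossing_rate(signal):
    """Fraction of consecutive samples whose sign differs."""
    s = np.sign(signal)
    return float(np.mean(s[1:] != s[:-1]))

def track_features(signal, sr=22050):
    """Two-element feature vector: normalized spectral centroid and ZCR."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    centroid = float((freqs * spectrum).sum() / spectrum.sum())
    return np.array([centroid / (sr / 2), zero_crossing_rate(signal)])

def timbral_distance(sig_a, sig_b):
    """Euclidean distance between the two tracks' feature vectors."""
    return float(np.linalg.norm(track_features(sig_a) - track_features(sig_b)))

t = np.linspace(0, 1, 22050, endpoint=False)
low = np.sin(2 * np.pi * 220 * t)    # one second of a 220 Hz tone
high = np.sin(2 * np.pi * 3520 * t)  # one second of a 3520 Hz tone
```

    The paper's point is that projecting such features through learned cultural labels (genre, style, emotion) yields a better-behaved space than computing the distance on the raw features directly.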

  2. Minimal axiom group of similarity-based rough set model

    Institute of Scientific and Technical Information of China (English)

    DAI Jian-hua; PAN Yun-he

    2006-01-01

    Rough set axiomatization is one aspect of rough set study aimed at characterizing rough set theory using dependable and minimal axiom groups. Thus, rough set theory can be studied by logic and axiom system methods. The classical rough set theory is based on an equivalence relation, but rough set theory based on a similarity relation has wide applications in the real world. To characterize similarity-based rough set theory, an axiom group named S, consisting of 3 axioms, is proposed. The reliability of the axiom group, which shows that characterizing rough set theory based on a similarity relation is rational, is proved. Simultaneously, the minimality of the axiom group, which requires that each axiom is an equation and independent, is proved. The axiom group is helpful for studying rough set theory by logic and axiom system methods.
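
    The model being axiomatized can be made concrete with a small example. Under a similarity (tolerance) relation, which is reflexive and symmetric but not necessarily transitive, the lower and upper approximations of a target set are defined as in the classical model, but over similarity classes rather than equivalence classes:

```python
def approximations(universe, similarity_class, target):
    """Lower/upper approximations of `target` under a similarity relation."""
    lower = {x for x in universe if similarity_class(x) <= target}   # class inside target
    upper = {x for x in universe if similarity_class(x) & target}    # class meets target
    return lower, upper

universe = set(range(10))
tol = lambda x: {y for y in universe if abs(x - y) <= 1}  # tolerance relation
lower, upper = approximations(universe, tol, {3, 4, 5, 6})
print(sorted(lower), sorted(upper))  # -> [4, 5] [2, 3, 4, 5, 6, 7]
```

    Because the similarity classes overlap instead of partitioning the universe, several classical identities fail, which is why a separate axiom group is needed for this model.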

  3. MAC/FAC: A Model of Similarity-Based Retrieval

    Science.gov (United States)

    1994-10-01

    giraffe, donkey] (b) HEAVIER [camel, cow] --- BITE [dromedary, calf] (c) HEAVIER [camel, cow] --- TALLER [giraffe, donkey] (d) GREATER [WEIGHT(camel...stand a good chance of being matched, depending on the stored similarities between TALLER, HEAVIER, and BITE, camel, dromedary and giraffe, and so on

  4. Similarities between obesity in pets and children : the addiction model

    NARCIS (Netherlands)

    Pretlow, Robert A; Corbee, Ronald J

    2016-01-01

    Obesity in pets is a frustrating, major health problem. Obesity in human children is similar. Prevailing theories accounting for the rising obesity rates - for example, poor nutrition and sedentary activity - are being challenged. Obesity interventions in both pets and children have produced modest

  6. Paper Similarity Detection Method Based on Distance Matrix Model with Row-Column Order Penalty Factor

    Directory of Open Access Journals (Sweden)

    Jun Li

    2014-08-01

    Full Text Available Paper similarity detection depends on grammatical and semantic analysis, word segmentation, similarity detection, document summarization and other technologies, involving multiple disciplines. However, the existing detection models have several problems, such as incomplete segmentation preprocessing specifications, the impact of semantic order on detection, near-synonym evaluation, and difficulties in paper backtracking. Therefore, this paper presents a two-step segmentation model based on special identifiers and the Shapley value to address the above problems, which can improve segmentation accuracy. For similarity comparison, a distance matrix model with a row-column order penalty factor is proposed, which recognizes new words through search engine indexes. This model integrates the characteristics of vector detection, Hamming distance and the longest common substring, and handles near-synonyms, word deletion and changes in word order by redefining the distance matrix and adding ordinal measures, making sentence similarity detection in terms of semantics and backbone word segmentation more effective. Compared with traditional paper similarity retrieval, the present method has advantages in word segmentation accuracy, low computational cost, reliability and high efficiency, which is of great academic significance for word segmentation, similarity detection and document summarization.
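
    The idea of discounting similarity by word-order displacement can be sketched independently of the full distance-matrix model. The formula below (Jaccard word overlap discounted by normalized positional displacement) is an illustrative stand-in for the paper's row-column order penalty factor, not its actual definition:

```python
def order_penalized_similarity(sent_a, sent_b, alpha=0.5):
    """Jaccard word overlap discounted by word-order displacement."""
    a, b = sent_a.split(), sent_b.split()
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    jaccard = len(shared) / len(set(a) | set(b))
    # total displacement of shared words (first occurrence), normalized
    disp = sum(abs(a.index(w) - b.index(w)) for w in shared)
    penalty = disp / (len(shared) * max(len(a), len(b)))
    return jaccard * (1 - alpha * penalty)

s1 = "the cat sat on the mat"
s2 = "on the mat sat the cat"
print(order_penalized_similarity(s1, s1))            # identical -> 1.0
print(round(order_penalized_similarity(s1, s2), 2))  # same words, reordered -> 0.8
```

    A plain bag-of-words Jaccard score would rate the two sentences identical; the order term is what lets the detector distinguish paraphrase-by-reordering from verbatim copying.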

  7. Modelling clinical systemic lupus erythematosus: similarities, differences and success stories.

    Science.gov (United States)

    Celhar, Teja; Fairhurst, Anna-Marie

    2016-12-24

    Mouse models of SLE have been indispensable tools to study disease pathogenesis, to identify genetic susceptibility loci and targets for drug development, and for preclinical testing of novel therapeutics. Recent insights into immunological mechanisms of disease progression have boosted a revival in SLE drug development. Despite promising results in mouse studies, many novel drugs have failed to meet clinical end points. This is probably because of the complexity of the disease, which is driven by polygenic predisposition and diverse environmental factors, resulting in a heterogeneous clinical presentation. Each mouse model recapitulates limited aspects of lupus, especially in terms of the mechanism underlying disease progression. The main mouse models have been fairly successful for the evaluation of broad-acting immunosuppressants. However, the advent of targeted therapeutics calls for a selection of the most appropriate model(s) for testing and, ultimately, identification of patients who will be most likely to respond.

  8. Modelling clinical systemic lupus erythematosus: similarities, differences and success stories

    Science.gov (United States)

    Celhar, Teja

    2017-01-01

    Abstract Mouse models of SLE have been indispensable tools to study disease pathogenesis, to identify genetic susceptibility loci and targets for drug development, and for preclinical testing of novel therapeutics. Recent insights into immunological mechanisms of disease progression have boosted a revival in SLE drug development. Despite promising results in mouse studies, many novel drugs have failed to meet clinical end points. This is probably because of the complexity of the disease, which is driven by polygenic predisposition and diverse environmental factors, resulting in a heterogeneous clinical presentation. Each mouse model recapitulates limited aspects of lupus, especially in terms of the mechanism underlying disease progression. The main mouse models have been fairly successful for the evaluation of broad-acting immunosuppressants. However, the advent of targeted therapeutics calls for a selection of the most appropriate model(s) for testing and, ultimately, identification of patients who will be most likely to respond. PMID:28013204

  9. Similarities between obesity in pets and children: the addiction model

    OpenAIRE

    Pretlow, Robert A.; Corbee, Ronald J.

    2016-01-01

    Obesity in pets is a frustrating, major health problem. Obesity in human children is similar. Prevailing theories accounting for the rising obesity rates – for example, poor nutrition and sedentary activity – are being challenged. Obesity interventions in both pets and children have produced modest short-term but poor long-term results. New strategies are needed. A novel theory posits that obesity in pets and children is due to ‘treats’ and excessive meal amounts given by the ‘pet–parent’ and...

  10. RNA and protein 3D structure modeling: similarities and differences.

    Science.gov (United States)

    Rother, Kristian; Rother, Magdalena; Boniecki, Michał; Puton, Tomasz; Bujnicki, Janusz M

    2011-09-01

    In analogy to proteins, the function of RNA depends on its structure and dynamics, which are encoded in the linear sequence. While there are numerous methods for computational prediction of protein 3D structure from sequence, there have been very few such methods for RNA. This review discusses template-based and template-free approaches for macromolecular structure prediction, with special emphasis on comparison between the already tried-and-tested methods for protein structure modeling and the very recently developed "protein-like" modeling methods for RNA. We highlight analogies between many successful methods for modeling of these two types of biological macromolecules and argue that RNA 3D structure can be modeled using "protein-like" methodology. We also highlight the areas where the differences between RNA and proteins require the development of RNA-specific solutions.

  11. Similarity solutions for systems arising from an Aedes aegypti model

    Science.gov (United States)

    Freire, Igor Leite; Torrisi, Mariano

    2014-04-01

    In a recent paper, a new model for Aedes aegypti mosquito dispersal dynamics was proposed and its Lie point symmetries were investigated. According to the group classification carried out there, the maximal symmetry Lie algebra of the nonlinear cases is reached whenever the advection term vanishes. In this work we analyze the family of systems obtained when the wind effects on the proposed model are neglected. Wide new classes of solutions to the systems under consideration are obtained.

  12. A little similarity goes a long way: the effects of peripheral but self-revealing similarities on improving and sustaining interracial relationships.

    Science.gov (United States)

    West, Tessa V; Magee, Joe C; Gordon, Sarah H; Gullett, Lindy

    2014-07-01

    Integrating theory on close relationships and intergroup relations, we construct a manipulation of similarity that we demonstrate can improve interracial interactions across different settings. We find that manipulating perceptions of similarity on self-revealing attributes that are peripheral to the interaction improves interactions in cross-race dyads and racially diverse task groups. In a getting-acquainted context, we demonstrate that the belief that one's different-race partner is similar to oneself on self-revealing, peripheral attributes leads to less anticipatory anxiety than the belief that one's partner is similar on peripheral, non-self-revealing attributes. In another dyadic context, we explore the range of benefits that perceptions of peripheral, self-revealing similarity can bring to different-race interaction partners and find (a) less anxiety during interaction, (b) greater interest in sustained contact with one's partner, and (c) greater accuracy in perceptions of one's partner's relationship intentions. By contrast, participants in same-race interactions were largely unaffected by these manipulations of perceived similarity. Our final experiment shows that among small task groups composed of racially diverse individuals, those whose members perceive peripheral, self-revealing similarity perform better than those whose members perceive dissimilarity. Implications for using this approach to improve interracial interactions across different goal-driven contexts are discussed.

  13. Predictive analytics technology review: Similarity-based modeling and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, James; Doan, Don; Gandhi, Devang; Nieman, Bill

    2010-09-15

    Over 11 years ago, SmartSignal introduced Predictive Analytics for eliminating equipment failures, using its patented SBM technology. SmartSignal continues to lead and dominate the market and, in 2010, went one step further and introduced Predictive Diagnostics. Now, SmartSignal is combining Predictive Diagnostics with RCM methodology and industry expertise. FMEA logic reengineers maintenance work management, eliminates unneeded inspections, and focuses efforts on the real issues. This integrated solution significantly lowers maintenance costs, protects against critical asset failures, improves commercial availability, and reduces work orders by 20-40%. Learn how.

  14. Lower- Versus Higher-Income Populations In The Alternative Quality Contract: Improved Quality And Similar Spending.

    Science.gov (United States)

    Song, Zirui; Rose, Sherri; Chernew, Michael E; Safran, Dana Gelb

    2017-01-01

    As population-based payment models become increasingly common, it is crucial to understand how such payment models affect health disparities. We evaluated health care quality and spending among enrollees in areas with lower versus higher socioeconomic status in Massachusetts before and after providers entered into the Alternative Quality Contract, a two-sided population-based payment model with substantial incentives tied to quality. We compared changes in process measures, outcome measures, and spending between enrollees in areas with lower and higher socioeconomic status from 2006 to 2012 (outcome measures were measured after the intervention only). Quality improved for all enrollees in the Alternative Quality Contract after their provider organizations entered the contract. Process measures improved 1.2 percentage points per year more among enrollees in areas with lower socioeconomic status than among those in areas with higher socioeconomic status. Outcome measure improvement was no different between the subgroups; neither were changes in spending. Larger or comparable improvements in quality among enrollees in areas with lower socioeconomic status suggest a potential narrowing of disparities. Strong pay-for-performance incentives within a population-based payment model could encourage providers to focus on improving quality for more disadvantaged populations.

  15. Improving reflectance reconstruction from tristimulus values by adaptively combining colorimetric and reflectance similarities

    Science.gov (United States)

    Cao, Bin; Liao, Ningfang; Li, Yasheng; Cheng, Haobo

    2017-05-01

    The use of spectral reflectance as fundamental color information finds application in diverse fields related to imaging. Many approaches use training sets to train the algorithm used for color classification. In this context, we note that the modification of training sets obviously impacts the accuracy of reflectance reconstruction based on classical reflectance reconstruction methods. Different modifying criteria are not always consistent with each other, since they have different emphases; spectral reflectance similarity focuses on the deviation of reconstructed reflectance, whereas colorimetric similarity emphasizes human perception. We present a method to improve the accuracy of the reconstructed spectral reflectance by adaptively combining colorimetric and spectral reflectance similarities. The different exponential factors of the weighting coefficients were investigated. The spectral reflectance reconstructed by the proposed method exhibits considerable improvements in terms of the root-mean-square error and goodness-of-fit coefficient of the spectral reflectance errors as well as color differences under different illuminants. Our method is applicable to diverse areas such as textiles, printing, art, and other industries.
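
    One common way such adaptive schemes are realized is a two-stage weighted reconstruction: training spectra are first weighted by colorimetric closeness to the target tristimulus values, and the weights are then refined by spectral closeness to the first-pass estimate. The sketch below, including the exponents p and q on the two distance terms, is illustrative and not the paper's exact formulation:

```python
import numpy as np

def reconstruct(train_refl, train_xyz, test_xyz, p=2.0, q=2.0, eps=1e-6):
    """Two-stage similarity-weighted reflectance reconstruction."""
    # stage 1: weights from colorimetric similarity to the target
    d_col = np.linalg.norm(train_xyz - test_xyz, axis=1)
    w1 = 1.0 / (d_col ** p + eps)
    rough = (w1 @ train_refl) / w1.sum()
    # stage 2: refine with spectral-reflectance similarity to the rough estimate
    d_ref = np.linalg.norm(train_refl - rough, axis=1)
    w2 = w1 / (d_ref ** q + eps)
    return (w2 @ train_refl) / w2.sum()

rng = np.random.default_rng(2)
train_refl = rng.random((12, 31))   # 12 training spectra, 31 wavelengths
sensor = rng.random((31, 3))        # hypothetical reflectance-to-tristimulus matrix
train_xyz = train_refl @ sensor
```

    Varying the exponents shifts the balance between the colorimetric and spectral criteria, which is the adaptive combination the abstract investigates.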

  16. Similarities between obesity in pets and children: the addiction model.

    Science.gov (United States)

    Pretlow, Robert A; Corbee, Ronald J

    2016-09-01

    Obesity in pets is a frustrating, major health problem. Obesity in human children is similar. Prevailing theories accounting for the rising obesity rates - for example, poor nutrition and sedentary activity - are being challenged. Obesity interventions in both pets and children have produced modest short-term but poor long-term results. New strategies are needed. A novel theory posits that obesity in pets and children is due to 'treats' and excessive meal amounts given by the 'pet-parent' and child-parent to obtain affection from the pet/child, which enables 'eating addiction' in the pet/child and results in parental 'co-dependence'. Pet-parents and child-parents may even become hostage to the treats/food to avoid the ire of the pet/child. Eating addiction in the pet/child also may be brought about by emotional factors such as stress, independent of parental co-dependence. An applicable treatment for child obesity has been trialled using classic addiction withdrawal/abstinence techniques, as well as behavioural addiction methods, with significant results. Both the child and the parent progress through withdrawal from specific 'problem foods', next from snacking (non-specific foods) and finally from excessive portions at meals (gradual reductions). This approach should adapt well for pets and pet-parents. Pet obesity is more 'pure' than child obesity, in that contributing factors and treatment points are essentially under the control of the pet-parent. Pet obesity might thus serve as an ideal test bed for the treatment and prevention of child obesity, with focus primarily on parental behaviours. Sharing information between the fields of pet and child obesity would be mutually beneficial.

  17. Content-based similarity for 3D model retrieval and classification

    Institute of Scientific and Technical Information of China (English)

    Ke Lü; Ning He; Jian Xue

    2009-01-01

    With the rapid development of 3D digital shape information, content-based 3D model retrieval and classification has become an important research area. This paper presents a novel 3D model retrieval and classification algorithm. For feature representation, a method combining a distance histogram and moment invariants is proposed to improve the retrieval performance. The major advantage of using a distance histogram is its invariance to the transforms of scaling, translation and rotation. Based on the premise that two similar objects should have high mutual information, the querying of 3D data should convey a great deal of information on the shape of the two objects, and so we propose a mutual information distance measurement to perform the similarity comparison of 3D objects. The proposed algorithm is tested with a 3D model retrieval and classification prototype, and the experimental evaluation demonstrates satisfactory retrieval results and classification accuracy.
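
    The scale-, translation- and rotation-invariance of a distance histogram follows from it being built on pairwise point distances only. A D2-style signature along those lines (an illustration of the general technique, not this paper's exact feature) can be computed as:

```python
import numpy as np

def distance_histogram(points, bins=16, samples=2000, seed=0):
    """D2-style shape signature: histogram of normalized random pair distances."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), samples)
    j = rng.integers(0, len(points), samples)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    d = d / d.max()                       # normalization gives scale invariance
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 1.0), density=True)
    return hist / hist.sum()

rng = np.random.default_rng(3)
pts = rng.random((500, 3))
# the signature is unchanged by uniform scaling and translation of the model
same = np.allclose(distance_histogram(pts), distance_histogram(3.0 * pts + 5.0))
print(same)  # -> True
```

    Comparing two such signatures with a mutual-information-based distance, rather than a bin-wise difference, is the similarity measure the paper proposes.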

  18. Hang cleans and hang snatches produce similar improvements in female collegiate athletes.

    Science.gov (United States)

    Ayers, J L; DeBeliso, M; Sevene, T G; Adams, K J

    2016-09-01

    Olympic weightlifting movements and their variations are believed to be among the most effective ways to improve power, strength, and speed in athletes. This study investigated the effects of two Olympic weightlifting variations (hang cleans and hang snatches), on power (vertical jump height), strength (1RM back squat), and speed (40-yard sprint) in female collegiate athletes. 23 NCAA Division I female athletes were randomly assigned to either a hang clean group or hang snatch group. Athletes participated in two workout sessions a week for six weeks, performing either hang cleans or hang snatches for five sets of three repetitions with a load of 80-85% 1RM, concurrent with their existing, season-specific, resistance training program. Vertical jump height, 1RM back squat, and 40-yard sprint all had a significant, positive improvement from pre-training to post-training in both groups (p≤0.01). However, when comparing the gain scores between groups, there was no significant difference between the hang clean and hang snatch groups for any of the three dependent variables (i.e., vertical jump height, p=0.46; 1RM back squat, p=0.20; and 40-yard sprint, p=0.46). Short-term training emphasizing hang cleans or hang snatches produced similar improvements in power, strength, and speed in female collegiate athletes. This provides strength and conditioning professionals with two viable programmatic options in athletic-based exercises to improve power, strength, and speed.

  19. Feasibility of similarity coefficient map for improving morphological evaluation of weighted MRI for renal cancer

    Institute of Scientific and Technical Information of China (English)

    Wang Hao-Yu; Hu Jiani; Xie Yao-Qin; Chen Jie; Yu Amy; Wei Xin-Hua; Dai Yong-Ming

    2013-01-01

    The purpose of this paper is to investigate the feasibility of using a similarity coefficient map (SCM) to improve the morphological evaluation of T2* weighted (T2*W) magnetic resonance imaging (MRI) for renal cancer. Simulation studies and in vivo 12-echo T2*W experiments on renal cancers were performed for this purpose. The results of the first simulation study suggest that an SCM can reveal small structures that are hard to distinguish from the background tissue in T2*W images and the corresponding T2* map. The capability of improving the morphological evaluation is likely due to the improvement in the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR) achieved by using the SCM technique. Compared with T2*W images, an SCM can improve the SNR by a factor ranging from 1.87 to 2.47. Compared with T2* maps, an SCM can improve the SNR by a factor ranging from 3.85 to 33.31. Compared with T2*W images, an SCM can improve the CNR by a factor ranging from 2.09 to 2.43. Compared with T2* maps, an SCM can improve the CNR by a factor ranging from 1.94 to 8.14. For a given noise level, the improvements in SNR and CNR depend mainly on the original SNRs and CNRs of the T2*W images, respectively. In vivo experiments confirmed the results of the first simulation study. The results of the second simulation study suggest that the more echoes are used to generate the SCM, the higher the SNRs and CNRs that can be achieved. In conclusion, an SCM can improve the morphological evaluation of T2*W MR images for renal cancer by unveiling fine structures that are ambiguous or invisible in the corresponding T2*W MR images and T2* maps. Furthermore, in practical applications, for a fixed total sampling time, one should use as many echoes as possible to achieve SCMs with better SNRs and CNRs.
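
    A common way to compute an SCM, sketched here as an assumption rather than this paper's exact recipe, is to assign each pixel the correlation coefficient between its multi-echo decay curve and a reference curve taken from a region of interest:

```python
import numpy as np

def similarity_coefficient_map(echoes, ref_signal):
    """Correlation of each pixel's echo-decay curve with a reference curve."""
    n_echo = echoes.shape[0]
    series = echoes.reshape(n_echo, -1)            # (echoes, pixels)
    s = series - series.mean(axis=0)
    r = ref_signal - ref_signal.mean()
    num = (s * r[:, None]).sum(axis=0)
    den = np.sqrt((s ** 2).sum(axis=0) * (r ** 2).sum())
    return (num / den).reshape(echoes.shape[1:])

# toy two-tissue phantom: left column decays with T2* = 30 ms, right with 80 ms
t = np.arange(12) * 5.0                            # 12 echo times (ms)
curve_a, curve_b = np.exp(-t / 30.0), np.exp(-t / 80.0)
echoes = np.stack([np.array([[a, b], [a, b]]) for a, b in zip(curve_a, curve_b)])
scm = similarity_coefficient_map(echoes, curve_a)
```

    Pixels matching the reference tissue score close to 1 while other tissues score lower, which is why pooling all echoes into one correlation value raises the SNR and CNR relative to any single T2*W image.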

  20. Patient Similarity in Prediction Models Based on Health Data: A Scoping Review

    Science.gov (United States)

    Sharafoddini, Anis; Dubin, Joel A

    2017-01-01

    data, wavelet transform and term frequency-inverse document frequency methods were employed to extract predictors. Selecting predictors with potential to highlight special cases and defining new patient similarity metrics were among the gaps identified in the existing literature that provide starting points for future work. Patient status prediction models based on patient similarity and health data offer exciting potential for personalizing and ultimately improving health care, leading to better patient outcomes. PMID:28258046

  1. Improving the In-Medium Similarity Renormalization Group via approximate inclusion of three-body effects

    Science.gov (United States)

    Morris, Titus; Bogner, Scott

    2016-09-01

    The In-Medium Similarity Renormalization Group (IM-SRG) has been applied successfully to the ground state of closed shell finite nuclei. Recent work has extended its ability to target excited states of these closed shell systems via equation of motion methods, and also complete spectra of the whole SD shell via effective shell model interactions. A recent alternative method for solving the IM-SRG equations, based on the Magnus expansion, not only provides a computationally feasible route to producing observables, but also allows for approximate handling of induced three-body forces. Promising results for several systems, including finite nuclei, will be presented and discussed.

  2. Clinical Interdisciplinary Collaboration Models and Frameworks From Similarities to Differences: A Systematic Review

    Science.gov (United States)

    Mahdizadeh, Mousa; Heydari, Abbas; Moonaghi, Hossien Karimi

    2015-01-01

    Introduction: Various models of interdisciplinary collaboration in clinical nursing have been presented so far; however, a comprehensive model is not yet available. The purpose of this study is to review the evidence presenting a model or framework, developed with a qualitative approach, for interdisciplinary collaboration in clinical nursing. Methods: All articles and theses published from 1990 to 10 June 2014, in either English or Persian, that presented a model or framework of clinical collaboration were searched using the databases ProQuest, Scopus, PubMed, and Science Direct, and the Iranian databases SID, Magiran, and Iranmedex. In this review, keywords consistent with MeSH, such as nurse-physician relations, care team, collaboration, and interdisciplinary relations, and their Persian equivalents were used. Results: The contexts, processes, and outcomes of interdisciplinary collaboration were extracted as findings. One of the major components affecting collaboration that most of the models emphasized was the background of collaboration. Most studies suggested that the outcomes of collaboration were improved care, physician and nurse satisfaction, cost control, reduced clinical errors, and patient safety. Conclusion: The models and frameworks had different structures, backgrounds, and conditions, but their outcomes were similar. Organizational structure, culture, and social factors are important aspects of clinical collaboration, so these factors should be considered in order to improve the quality and effectiveness of clinical collaboration. PMID:26153158

  3. Image Denoising via Bandwise Adaptive Modeling and Regularization Exploiting Nonlocal Similarity.

    Science.gov (United States)

    Xiong, Ruiqin; Liu, Hangfan; Zhang, Xinfeng; Zhang, Jian; Ma, Siwei; Wu, Feng; Gao, Wen

    2016-09-27

    This paper proposes a new image denoising algorithm based on adaptive signal modeling and regularization. It improves the quality of images by regularizing each image patch using bandwise distribution modeling in transform domain. Instead of using a global model for all the patches in an image, it employs content-dependent adaptive models to address the non-stationarity of image signals and also the diversity among different transform bands. The distribution model is adaptively estimated for each patch individually. It varies from one patch location to another and also varies for different bands. In particular, we consider the estimated distribution to have non-zero expectation. To estimate the expectation and variance parameters for every band of a particular patch, we exploit the nonlocal correlation in image to collect a set of highly similar patches as the data samples to form the distribution. Irrelevant patches are excluded so that such adaptively-learned model is more accurate than a global one. The image is ultimately restored via bandwise adaptive soft-thresholding, based on a Laplacian approximation of the distribution of similar-patch group transform coefficients. Experimental results demonstrate that the proposed scheme outperforms several state-of-the-art denoising methods in both the objective and the perceptual qualities.
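    The bandwise adaptive soft-thresholding step can be sketched as MAP shrinkage toward a non-zero band mean under a Laplacian prior with Gaussian noise. The threshold formula and all names below are illustrative assumptions in the spirit of the abstract, not the authors' exact implementation.

```python
import numpy as np

def bandwise_soft_threshold(coeffs, band_means, band_scales, noise_var):
    """Shrink each transform coefficient toward its band mean with a
    band-dependent threshold sqrt(2) * noise_var / band_scale, the MAP
    estimate under a non-zero-mean Laplacian prior and Gaussian noise."""
    coeffs = np.asarray(coeffs, dtype=float)
    thresh = np.sqrt(2.0) * noise_var / np.maximum(band_scales, 1e-12)
    d = coeffs - band_means                      # deviation from band mean
    return band_means + np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
```

    Because the mean and scale are estimated separately for each band of each patch (from a group of nonlocally similar patches), the shrinkage adapts to local content rather than applying one global rule.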

  4. STUDY ON SIMILARITY LAWS OF A DISTORTED RIVER MODEL WITH A MOVABLE BED

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In this study, by considering the scale ratio related to the specific gravity of the submerged bed material, and by introducing a degree of distortion, n, similarity laws for a distorted river model with a movable bed were derived under the conditions that the values of the dual dimensionless parameters in a regime-criterion diagram for the bars are the same in the model as in the prototype, and that a resistance law such as the Manning-Strickler-type formula is valid for both the model and the prototype. The usefulness of the similarity laws derived in this study was verified by comparing the bed forms from the distorted model experiments with the bed forms from the 1/50-scale undistorted model experiments performed by the Hokkaido Development Bureau (H.D.B.), Japan, to examine the tentative plan for the improvement of a low-flow channel in the Chubetsu River, a tributary of the Ishikari River. The distorted model experiments are considered to be valid with either sand or lightweight bed material.

  5. Self-similarity of phase-space networks of frustrated spin models and lattice gas models

    Science.gov (United States)

    Peng, Yi; Wang, Feng; Han, Yilong

    2013-03-01

    We studied the self-similar properties of the phase spaces of two frustrated spin models and two lattice gas models. The frustrated spin models included (1) the anti-ferromagnetic Ising model on a two-dimensional triangular lattice (1a) at the ground states and (1b) above the ground states and (2) the six-vertex model. The two lattice gas models were (3) the one-dimensional lattice gas model and (4) the two-dimensional lattice gas model. The phase spaces were mapped to networks so that the fractal analysis of complex networks could be applied, i.e. the box-covering method and the cluster-growth method. These phase spaces, in turn, establish new classes of networks with unique self-similar properties. Models 1a, 2, and 3, with long-range power-law correlations in real space, exhibit fractal phase spaces, while models 1b and 4, with short-range exponential correlations in real space, exhibit nonfractal phase spaces. This behavior agrees with one of the untested assumptions in Tsallis nonextensive statistics. Hong Kong GRC grants 601208 and 601911

  6. Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure.

    Science.gov (United States)

    Zhang, Wen; Xiao, Fan; Li, Bin; Zhang, Siguang

    2016-01-01

    Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) was proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is usually criticized as having low discriminative power for representing documents, although it has been validated as having good representative quality. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. Firstly, we survey existing linear algebra methods for LSI, including both SVD based and non-SVD based methods. Secondly, we propose SVD on clusters for LSI and theoretically explain that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved in SVD on clusters. Moreover, we develop updating processes to fold new documents and terms into a matrix decomposed by SVD on clusters. Thirdly, two corpora, a Chinese corpus and an English corpus, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of the interdocument similarity measure in comparison with other SVD based LSI methods.
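    For context, here is a minimal sketch of the plain LSI baseline that "SVD on clusters" refines: project documents onto the top-k singular directions of the term-document matrix and compare them by cosine similarity. The cluster-wise variant applies the same decomposition per document cluster; the function name and shapes below are illustrative.

```python
import numpy as np

def lsi_similarities(term_doc: np.ndarray, k: int) -> np.ndarray:
    """Plain LSI: reduce the term-document matrix with a truncated SVD,
    then return pairwise cosine similarities between documents in the
    reduced space."""
    u, s, vt = np.linalg.svd(term_doc, full_matrices=False)
    docs = (np.diag(s[:k]) @ vt[:k]).T           # document vectors, (n_docs, k)
    norms = np.linalg.norm(docs, axis=1, keepdims=True)
    docs = docs / np.maximum(norms, 1e-12)       # unit-normalise
    return docs @ docs.T                         # pairwise cosine similarities
```

    "SVD on clusters" would first partition the columns (documents) into clusters and run this decomposition within each cluster, which is what restores discriminative power between clusters.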

  7. Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure

    Directory of Open Access Journals (Sweden)

    Wen Zhang

    2016-01-01

    Full Text Available Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) was proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is usually criticized as having low discriminative power for representing documents, although it has been validated as having good representative quality. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. Firstly, we survey existing linear algebra methods for LSI, including both SVD based and non-SVD based methods. Secondly, we propose SVD on clusters for LSI and theoretically explain that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved in SVD on clusters. Moreover, we develop updating processes to fold new documents and terms into a matrix decomposed by SVD on clusters. Thirdly, two corpora, a Chinese corpus and an English corpus, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of the interdocument similarity measure in comparison with other SVD based LSI methods.

  8. Short-Term Power Forecasting Model for Photovoltaic Plants Based on Historical Similarity

    Directory of Open Access Journals (Sweden)

    M. Sonia Terreros-Olarte

    2013-05-01

    Full Text Available This paper proposes a new model for short-term forecasting of electric energy production in a photovoltaic (PV) plant. The model is called the HIstorical SImilar MIning (HISIMI) model; its final structure is optimized by using a genetic algorithm, based on data mining techniques applied to historical cases composed of past forecast values of weather variables, obtained from numerical weather prediction tools, and of past production of electric power in the PV plant. The HISIMI model is able to supply spot values of power forecasts, and also the uncertainty, or probabilities, associated with those spot values, providing new useful information to users with respect to traditional forecasting models for PV plants. Such probabilities enable analysis and evaluation of the risk associated with those spot forecasts, for example, in offers of energy sale for electricity markets. The spot forecasting results of an illustrative example obtained with the HISIMI model for a real-life grid-connected PV plant, which shows high intra-hour variability of its actual power output, with forecasting horizons covering the following day, improved on those obtained with two other spot power forecasting models, a persistence model and an artificial neural network model.

  9. An Improved Cosmological Model

    CERN Document Server

    Tsamis, N C

    2016-01-01

    We study a class of non-local, action-based, and purely gravitational models. These models seek to describe a cosmology in which inflation is driven by a large, bare cosmological constant that is screened by the self-gravitation between the soft gravitons that inflation rips from the vacuum. Inflation ends with the universe poised on the verge of gravitational collapse, in an oscillating phase of expansion and contraction that should lead to rapid reheating when matter is included. After the attainment of a hot, dense universe the nonlocal screening terms become constant as the universe evolves through a conventional phase of radiation domination. The onset of matter domination triggers a much smaller anti-screening effect that could explain the current phase of acceleration.

  10. Improved cosmological model

    Science.gov (United States)

    Tsamis, N. C.; Woodard, R. P.

    2016-08-01

    We study a class of nonlocal, action-based, and purely gravitational models. These models seek to describe a cosmology in which inflation is driven by a large, bare cosmological constant that is screened by the self-gravitation between the soft gravitons that inflation rips from the vacuum. Inflation ends with the Universe poised on the verge of gravitational collapse, in an oscillating phase of expansion and contraction that should lead to rapid reheating when matter is included. After the attainment of a hot, dense Universe the nonlocal screening terms become constant as the Universe evolves through a conventional phase of radiation domination. The onset of matter domination triggers a much smaller antiscreening effect that could explain the current phase of acceleration.

  11. TRAINING AT THE OPTIMUM POWER ZONE PRODUCES SIMILAR PERFORMANCE IMPROVEMENTS TO TRADITIONAL STRENGTH TRAINING

    Directory of Open Access Journals (Sweden)

    Irineu Loturco

    2013-03-01

    Full Text Available The purpose of this study was to test whether substituting a regular maximum-strength-oriented training regimen with a power-oriented one at the optimal power load in the first phase of a traditional periodization produces similar performance improvements later in the training period. Forty-five soldiers of the Brazilian brigade of special operations with at least one year of army training experience were divided into a control group (CG: n = 15, 20.18 ± 0.72 yrs, 1.74 ± 0.06 m, 66.7 ± 9.8 kg, 1RM/weight ratio = 1.14 ± 0.12), a traditional periodization group (TG: n = 15, 20.11 ± 0.7 yrs, 1.72 ± 0.045 m, 63.1 ± 3.6 kg, 1RM/weight ratio = 1.21 ± 0.16), and a maximum-power group (MPG: n = 15, 20.5 ± 0.6 yrs, 1.73 ± 0.049 m, 67.3 ± 9.8 kg, 1RM/weight ratio = 1.20 ± 0.14). Maximum strength (26.2% and 24.6%), CMJ height (30.8% and 39.1%), and sprint speed (11.6% and 14.5%) increased significantly (p < 0.05) and similarly for the MPG and TG, respectively, from pre- to post-assessments. Our data suggest that a power training regimen may be used in the initial phase of the training cycle without impairing performance later in the training period.

  12. Structural similarity based kriging for quantitative structure activity and property relationship modeling.

    Science.gov (United States)

    Teixeira, Ana L; Falcao, Andre O

    2014-07-28

    Structurally similar molecules tend to have similar properties, i.e. closer molecules in the molecular space are more likely to yield similar property values while distant molecules are more likely to yield different values. Based on this principle, we propose the use of a new method that takes into account the high dimensionality of the molecular space, predicting chemical, physical, or biological properties based on the most similar compounds with measured properties. This methodology uses ordinary kriging coupled with three different molecular similarity approaches (based on molecular descriptors, fingerprints, and atom matching), which creates an interpolation map over the molecular space that is capable of predicting properties/activities for diverse chemical data sets. The proposed method was tested on two data sets of diverse chemical compounds collected from the literature and preprocessed. One of the data sets contained dihydrofolate reductase inhibition activity data, and the second contained molecules for which the aqueous solubility was known. The overall predictive results using kriging for both data sets comply with the results obtained in the literature using typical QSPR/QSAR approaches. However, the procedure did not involve any type of descriptor selection or even minimal information about each problem, suggesting that this approach is directly applicable to a large spectrum of problems in QSAR/QSPR. Furthermore, the predictive results improve significantly with the similarity threshold between the training and testing compounds, allowing the definition of a confidence threshold of similarity and an error estimation for each case inferred. The use of kriging for interpolation over the molecular metric space is independent of the training data set size, no reparametrizations are necessary when compounds are added or removed from the set, and increasing the size of the database will consequently improve the quality of the estimations.
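    The ordinary-kriging interpolation over a molecular similarity kernel described in this record can be sketched as follows. The kernel choice, nugget term, and function signature are illustrative assumptions, not the authors' implementation; any positive-definite similarity (e.g. one derived from Tanimoto fingerprint similarity) could play the role of the covariance.

```python
import numpy as np

def ordinary_kriging(train_sim: np.ndarray, query_sim: np.ndarray,
                     y: np.ndarray, nugget: float = 1e-6) -> float:
    """Ordinary kriging over a similarity kernel.
    train_sim: (n, n) similarities between training molecules;
    query_sim: (n,) similarities between the query and the training set;
    y: (n,) measured property values for the training molecules."""
    n = len(y)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = train_sim + nugget * np.eye(n)   # covariance block
    K[:n, n] = 1.0                               # Lagrange column enforcing
    K[n, :n] = 1.0                               # the unbiasedness constraint
    rhs = np.append(query_sim, 1.0)
    w = np.linalg.solve(K, rhs)[:n]              # kriging weights
    return float(w @ y)
```

    Note the property mentioned in the abstract: nothing is refit when molecules are added or removed, since the prediction only requires solving this linear system against the current training set.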

  13. Application of improved homogeneity similarity-based denoising in optical coherence tomography retinal images.

    Science.gov (United States)

    Chen, Qiang; de Sisternes, Luis; Leng, Theodore; Rubin, Daniel L

    2015-06-01

    Image denoising is a fundamental preprocessing step of image processing in many applications developed for optical coherence tomography (OCT) retinal imaging, a high-resolution modality for evaluating disease in the eye. To make a homogeneity similarity-based image denoising method more suitable for OCT noise removal, we improve it by considering the noise and retinal characteristics of OCT images in two respects: (1) median filtering preprocessing is used to make the noise distribution of OCT images more suitable for patch-based methods; (2) a rectangular neighborhood and region restriction are adopted to accommodate the horizontal stretching of retinal structures when observed in OCT images. As a performance measurement of the proposed technique, we tested the method on real and synthetic noisy retinal OCT images and compared the results with other well-known spatial denoising methods, including bilateral filtering, five partial differential equation (PDE)-based methods, and three patch-based methods. Our results indicate that our proposed method seems suitable for retinal OCT imaging denoising, and that, in general, patch-based methods can achieve better visual denoising results than point-based methods in this type of imaging, because an image patch can better represent the structured information in the images than a single pixel. However, the time complexity of the patch-based methods is substantially higher than that of the others.

  14. Content-Based Search on a Database of Geometric Models: Identifying Objects of Similar Shape

    Energy Technology Data Exchange (ETDEWEB)

    XAVIER, PATRICK G.; HENRY, TYSON R.; LAFARGE, ROBERT A.; MEIRANS, LILITA; RAY, LAWRENCE P.

    2001-11-01

    The Geometric Search Engine is a software system for storing and searching a database of geometric models. The database may be searched for modeled objects similar in shape to a target model supplied by the user. The database models are generally derived from CAD models, while the target model may be either a CAD model or a model generated from range data collected from a physical object. This document describes key generation, database layout, and search of the database.

  15. A New Retrieval Model Based on TextTiling for Document Similarity Search

    Institute of Scientific and Technical Information of China (English)

    Xiao-Jun Wan; Yu-Xin Peng

    2005-01-01

    Document similarity search is to find documents similar to a given query document and return a ranked list of similar documents to users, which is widely used in many text and web systems, such as digital libraries, search engines, etc. Traditional retrieval models, including the Okapi BM25 model and the Smart vector space model with length normalization, could handle this problem to some extent by taking the query document as a long query. In practice, the Cosine measure is considered the best model for document similarity search because of its good ability to measure similarity between two documents. In this paper, the quantitative performances of the above models are compared using experiments. Because the Cosine measure is not able to reflect the structural similarity between documents, a new retrieval model based on TextTiling is proposed in the paper. The proposed model takes into account the subtopic structures of documents. It first splits the documents into text segments with TextTiling and calculates the similarities for different pairs of text segments in the documents. Lastly the overall similarity between the documents is returned by combining the similarities of different pairs of text segments with an optimal matching method. Experiments are performed and the results show: 1) the popular retrieval models (the Okapi BM25 model and the Smart vector space model with length normalization) do not perform well for document similarity search; 2) the proposed model based on TextTiling is effective and outperforms other models, including the Cosine measure; 3) the methods for the three components in the proposed model are validated to be appropriately employed.
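    The Cosine measure used as the strongest baseline in this record can be sketched in a few lines: represent each document as a term-frequency vector over the joint vocabulary and return the cosine of the angle between the two vectors. The whitespace tokenization below is a simplification for illustration.

```python
import numpy as np
from collections import Counter

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Cosine similarity between the term-frequency vectors of two documents."""
    ta, tb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    vocab = sorted(set(ta) | set(tb))                    # joint vocabulary
    va = np.array([ta[w] for w in vocab], dtype=float)   # TF vector of doc A
    vb = np.array([tb[w] for w in vocab], dtype=float)   # TF vector of doc B
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(va @ vb / denom) if denom else 0.0
```

    The TextTiling-based model of the paper goes further by computing such similarities between pairs of text segments and combining them with optimal matching, which is what captures structural similarity that this flat measure misses.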

  16. An improved method for functional similarity analysis of genes based on Gene Ontology.

    Science.gov (United States)

    Tian, Zhen; Wang, Chunyu; Guo, Maozu; Liu, Xiaoyan; Teng, Zhixia

    2016-12-23

    Measures of gene functional similarity are essential tools for gene clustering, gene function prediction, evaluation of protein-protein interactions, disease gene prioritization, and other applications. In recent years, many gene functional similarity methods have been proposed based on the semantic similarity of GO terms. However, these leading approaches may make error-prone judgments, especially when they measure the specificity of GO terms as well as the IC of a term set. Therefore, how to estimate gene functional similarity reliably is still a challenging problem. We propose WIS, an effective method to measure gene functional similarity. First of all, WIS computes the IC of a term by employing its depth, the number of its ancestors, and the topology of its descendants in the GO graph. Secondly, WIS calculates the IC of a term set by considering the weighted inherited semantics of terms. Finally, WIS estimates gene functional similarity based on the IC overlap ratio of term sets. WIS is superior to some other representative measures in experiments on functional classification of genes in a biological pathway, collaborative evaluation of GO-based semantic similarity measures, protein-protein interaction prediction, and correlation with gene expression. Further analysis suggests that WIS fully takes into account the specificity of terms and the weighted inherited semantics between GO terms. The proposed WIS method is an effective and reliable way to compare gene function. The web service of WIS is freely available at http://nclab.hit.edu.cn/WIS/.

  17. Modelling climate change responses in tropical forests: similar productivity estimates across five models, but different mechanisms and responses

    Directory of Open Access Journals (Sweden)

    L. Rowland

    2014-11-01

    Full Text Available Accurately predicting the response of Amazonia to climate change is important for predicting changes across the globe. However, changes in multiple climatic factors simultaneously may result in complex non-linear responses, which are difficult to predict using vegetation models. Using leaf and canopy scale observations, this study evaluated the capability of five vegetation models (CLM3.5, ED2, JULES, SiB3, and SPA) to simulate the responses of canopy and leaf scale productivity to changes in temperature and drought in an Amazonian forest. The models did not agree as to whether gross primary productivity (GPP) was more sensitive to changes in temperature or precipitation. There was greater model–data consistency in the response of net ecosystem exchange to changes in temperature than in the responses to temperature of leaf area index (LAI), net photosynthesis (An), and stomatal conductance (gs). Modelled canopy scale fluxes are calculated by scaling leaf scale fluxes to LAI, and therefore in this study similarities in modelled ecosystem scale responses to drought and temperature were the result of inconsistent leaf scale and LAI responses among models. Across the models, the response of An to temperature was more closely linked to stomatal behaviour than to biochemical processes. Consequently all the models predicted that GPP would be higher if tropical forests were 5 °C colder, closer to the model optima for gs. There was however no model consistency in the response of the An–gs relationship when temperature changes and drought were introduced simultaneously. The inconsistencies in the An–gs relationships amongst models were caused by non-linear model responses induced by simultaneous drought and temperature change. To improve the reliability of simulations of the response of Amazonian rainforest to climate change the mechanistic underpinnings of vegetation models need more complete validation to improve accuracy and consistency in the scaling

  18. Effectively integrating information content and structural relationship to improve the GO-based similarity measure between proteins

    CERN Document Server

    Li, Bo; Feltus, F Alex; Zhou, Jizhong; Luo, Feng

    2010-01-01

    The Gene Ontology (GO) provides a knowledge base to effectively describe proteins. However, measuring similarity between proteins based on GO remains a challenge. In this paper, we propose a new similarity measure, the information coefficient similarity measure (SimIC), to effectively integrate both the information content (IC) of GO terms and the structural information of the GO hierarchy to determine the similarity between proteins. Testing on yeast proteins, our results show that SimIC efficiently addresses the shallow annotation issue in GO, thus improving the correlations between GO similarities of yeast proteins and their expression similarities, as well as between GO similarities of yeast proteins and their sequence similarities. Furthermore, we demonstrate that the proposed SimIC is superior in predicting yeast protein interactions. We predict 20484 yeast protein-protein interactions (PPIs) between 2462 proteins based on the high SimIC values of biological process (BP) and cellular component (CC). Examining the...

  19. Embedding Term Similarity and Inverse Document Frequency into a Logical Model of Information Retrieval.

    Science.gov (United States)

    Losada, David E.; Barreiro, Alvaro

    2003-01-01

    Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…

  20. A new k-epsilon model consistent with Monin-Obukhov similarity theory

    DEFF Research Database (Denmark)

    van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.

    2016-01-01

    A new k-ε model is introduced that is consistent with Monin–Obukhov similarity theory (MOST). The proposed k-ε model is compared with another k-ε model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-ε model is not consistent with ...
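    For reference, the neutral-limit inlet profiles that a MOST-consistent k-ε model is expected to sustain are the standard atmospheric-surface-layer results (textbook relations, not equations quoted from this abstract):

```latex
u(z) = \frac{u_*}{\kappa}\,\ln\frac{z}{z_0}, \qquad
k = \frac{u_*^2}{\sqrt{C_\mu}}, \qquad
\varepsilon(z) = \frac{u_*^3}{\kappa z}
```

    where u_* is the friction velocity, κ the von Kármán constant, z_0 the roughness length, and C_μ the usual k-ε model constant. Under non-neutral stratification, MOST replaces the log law with stability-corrected profiles through the similarity functions φ_m(z/L), which is where consistency between the turbulence model and MOST becomes non-trivial.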

  1. Assessing Suitability of Rural Settlements Using an Improved Technique for Order Preference by Similarity to Ideal Solution

    Institute of Scientific and Technical Information of China (English)

    LIU Yanfang; CUI Jiaxing; KONG Xuesong; ZENG Chen

    2016-01-01

    Land suitability assessment is a prerequisite phase in land use planning; it guides toward optimal land use by providing information on the opportunities and constraints involved in the use of a given land area. A geographic information system-based procedure, known as rural settlement suitability evaluation (RSSE) using an improved technique for order preference by similarity to ideal solution (TOPSIS), was adopted to determine the most suitable areas for constructing rural settlements in different geographical locations. Given the distribution and independence of rural settlements, a distinctive evaluation criteria system that differed from that of urban suitability was established by considering the level of rural infrastructure services as well as living and working conditions. Unpredictable mutual interference among evaluation factors has been found in practical work. An improved TOPSIS using the Mahalanobis distance was applied to resolve the unpredictable correlation among the criteria in a suitability evaluation. Uncertainty and sensitivity analyses via Monte Carlo simulation were performed to examine the robustness of the model. Daye, a resource-based city with rapid economic development, unsatisfactory rural development, and geological environmental problems caused by mining, was used as a case study. The results indicate the following: 1) The RSSE model using the improved TOPSIS can assess the suitability of rural settlements, and the suitability maps generated using the improved TOPSIS have higher information density than those generated using traditional TOPSIS. The robustness of the model is improved, and the uncertainty in the suitability results is reduced. 2) Highly suitable land is mainly distributed in the northeast of the study area, the majority of which is cultivated land, thereby leading to tremendous pressure on the loss of cultivated land. 3) Lastly, 12.54% of the constructive expansion permitted zone and 8.36% of the constructive expansion
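    The improved TOPSIS described in this record can be sketched as follows: rank alternatives by their closeness to the ideal solution, but measure distances with the Mahalanobis metric so that correlated criteria are not double-counted. The normalization, weighting scheme, and benefit-type assumption below are illustrative, not the paper's exact procedure.

```python
import numpy as np

def topsis_mahalanobis(X: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """TOPSIS closeness coefficients using Mahalanobis distances.
    X: (n_alternatives, n_criteria) decision matrix, benefit criteria only."""
    Z = X / np.linalg.norm(X, axis=0)            # vector-normalise each criterion
    V = Z * weights                              # weighted decision matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)   # ideal / anti-ideal points
    VI = np.linalg.pinv(np.cov(V, rowvar=False)) # inverse covariance of criteria

    def md(a, b):                                # Mahalanobis distance
        d = a - b
        return np.sqrt(max(float(d @ VI @ d), 0.0))

    d_pos = np.array([md(v, ideal) for v in V])  # distance to ideal
    d_neg = np.array([md(v, anti) for v in V])   # distance to anti-ideal
    return d_neg / (d_pos + d_neg)               # closeness coefficient
```

    Replacing the Euclidean distance of traditional TOPSIS with this metric is what handles the "unpredictable correlation among the criteria" mentioned in the abstract.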

  2. A technical study and analysis on fuzzy similarity based models for text classification

    CERN Document Server

    Puri, Shalini; 10.5121/ijdkp.2012.2201

    2012-01-01

    In this new and current era of technology, advancements, and techniques, efficient and effective text document classification is becoming a challenging and highly required area for capably categorizing text documents into mutually exclusive categories. Fuzzy similarity provides a way to find the similarity of features among various documents. In this paper, a technical review of various fuzzy similarity based models is given. These models are discussed and compared to frame out their use and necessity. A tour of different methodologies based on fuzzy similarity related concerns is provided. It shows how text and web documents are efficiently categorized into different categories. Various experimental results of these models are also discussed. The technical comparisons among each model's parameters are shown in the form of a 3-D chart. This study and technical review provide a strong base for research work done on fuzzy similarity based text document categorization.

  3. Active contours extension and similarity indicators for improved 3D segmentation of thyroid ultrasound images

    Science.gov (United States)

    Poudel, P.; Illanes, A.; Arens, C.; Hansen, C.; Friebe, M.

    2017-03-01

    Thyroid segmentation in tracked 2D ultrasound (US) using active contours has low segmentation accuracy, mainly because smaller structures cannot be efficiently recognized and segmented. To address this issue, we propose a new similarity indicator whose main objective is to inform the active contour algorithm whether it should continue to expand through a region or stop. First, a preprocessing step is carried out in order to attenuate the noise present in the US image and to increase its contrast, using histogram equalization and a median filter. In the second step, active contours are used to segment the thyroid in each 2D image of the dataset. After performing a first segmentation, two similarity indicators (the ratio of mean squared error (MSE) and the correlation between histograms) are computed at each contour point of the initially segmented thyroid, between rectangles located inside and outside the obtained contour. A threshold is applied to a final indicator computed from the other two indicators to find probable regions for further segmentation using active contours. This process is repeated until no new segmentation region is identified. Finally, all the segmented thyroid images are passed through a 3D reconstruction algorithm to obtain a segmented 3D thyroid volume. The results showed that including similarity indicators based on histogram equalization and MSE between the inside and outside regions of the contour can help to segment difficult areas that active contours have problems segmenting.
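    A minimal sketch of the two patch-level indicators described in this record, computed between same-sized rectangles inside and outside the contour: the MSE between the patches and the correlation between their intensity histograms. The combination rule and all names are illustrative assumptions; the paper thresholds a final indicator derived from the two.

```python
import numpy as np

def patch_similarity_indicator(inside: np.ndarray, outside: np.ndarray,
                               bins: int = 32) -> float:
    """Combine patch MSE and histogram correlation into one score in which
    higher values mean the two rectangles look alike.  Assumes 8-bit-range
    intensities and same-shaped patches."""
    mse = float(np.mean((inside.astype(float) - outside.astype(float)) ** 2))
    ha, _ = np.histogram(inside, bins=bins, range=(0, 256), density=True)
    hb, _ = np.histogram(outside, bins=bins, range=(0, 256), density=True)
    if ha.std() == 0 or hb.std() == 0:
        hist_corr = 1.0 if np.allclose(ha, hb) else 0.0
    else:
        hist_corr = float(np.corrcoef(ha, hb)[0, 1])
    # high histogram correlation and low MSE -> the patches look alike,
    # suggesting the active contour should keep expanding through here
    return hist_corr / (1.0 + mse)
```

    Thresholding such a score at each contour point is one way to decide where the contour should continue to grow and where it should stop.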

  4. My Understanding of the Main Similarities and Differences between the Three Translation Models

    Institute of Scientific and Technical Information of China (English)

    支志

    2009-01-01

    In this paper, the author argues that the three translation models have both similarities and differences: the similarities are that they all address faithful and free translation and the status of the reader; the differences are that their focuses differ considerably and that their influence on present translation theory and practice varies.

  5. Numerical verification of similar Cam-clay model based on generalized potential theory

    Institute of Scientific and Technical Information of China (English)

    钟志辉; 杨光华; 傅旭东; 温勇; 张玉成

    2014-01-01

    From mathematical principles, the generalized potential theory can be employed to create constitutive models of geomaterials directly. The similar Cam-clay model, which is created based on the generalized potential theory, has fewer assumptions, a clearer mathematical basis, and better computational accuracy; theoretically, it is more scientific than the traditional Cam-clay models. The particle flow code PFC3D was used to perform numerical tests to verify the rationality and practicality of the similar Cam-clay model. The verification proceeded as follows: 1) creating a soil sample for numerical testing in PFC3D, and then simulating the conventional triaxial compression test, isotropic compression test, and isotropic unloading test in PFC3D; 2) determining the parameters of the similar Cam-clay model from the results of these tests; 3) predicting the sample's behavior in triaxial tests under different stress paths with the similar Cam-clay model, and comparing these predictions with those of the Cam-clay model and the modified Cam-clay model. The analysis shows that the similar Cam-clay model has relatively high prediction accuracy, as well as good practical value.

  6. Similarity-based search of model organism, disease and drug effect phenotypes

    KAUST Repository

    Hoehndorf, Robert

    2015-02-19

    Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions, druggable therapeutic targets, and determination of pathogenicity. Results: We have developed PhenomeNET 2, a system that enables similarity-based searches over a large repository of phenotypes in real-time. It can be used to identify strains of model organisms that are phenotypically similar to human patients, diseases that are phenotypically similar to model organism phenotypes, or drug effect profiles that are similar to the phenotypes observed in a patient or model organism. PhenomeNET 2 is available at http://aber-owl.net/phenomenet. Conclusions: Phenotype-similarity searches can provide a powerful tool for the discovery and investigation of molecular mechanisms underlying an observed phenotypic manifestation. PhenomeNET 2 facilitates user-defined similarity searches and allows researchers to analyze their data within a large repository of human, mouse and rat phenotypes.

  7. A Fast Method for Measuring the Similarity between 3D Model and 3D Point Cloud

    Science.gov (United States)

    Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng

    2016-06-01

    This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point-cloud-related applications such as 3D object retrieval and inverse procedural modelling. In the proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. To avoid the heavy computational burden that calculating DistCM imposes on some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measures that capture only global similarity, our method measures partial similarity by employing a distance-weighted strategy. Moreover, it is faster than other partial-similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
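
    A minimal sketch of the ratio definition, assuming a brute-force nearest-neighbour search and uniform weights (the paper's sampling and weighting schemes are not reproduced here):

```python
import numpy as np

def dist_model_to_cloud(model_points, cloud, weights=None):
    """DistMC: (weighted) mean distance from points sampled on the model
    surface to their nearest neighbours in the point cloud. Uniform
    weights are an assumption; the paper weights points differently."""
    # Pairwise distances, then nearest cloud point for each model point
    d = np.linalg.norm(model_points[:, None, :] - cloud[None, :, :], axis=2)
    nearest = d.min(axis=1)
    if weights is None:
        weights = np.full(len(model_points), 1.0 / len(model_points))
    return float(np.sum(weights * nearest))

def sim_mc(model_points, cloud, surface_area, eps=1e-9):
    """SimMC as the ratio of (weighted) surface area to DistMC:
    larger area and smaller distance give higher similarity."""
    return surface_area / (dist_model_to_cloud(model_points, cloud) + eps)
```

    Because only model-to-cloud distances are needed, the expensive cloud-to-model direction (DistCM) never has to be evaluated.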

  8. Applying Statistical Models and Parametric Distance Measures for Music Similarity Search

    Science.gov (United States)

    Lukashevich, Hanna; Dittmar, Christian; Bastuck, Christoph

    Automatic derivation of similarity relations between music pieces is an inherent field of music information retrieval research. Owing to the nearly unrestricted amount of musical data, real-world similarity search algorithms have to be highly efficient and scalable. A possible solution is to represent each music excerpt with a statistical model (e.g., a Gaussian mixture model) and thus reduce the computational cost by applying parametric distance measures between the models. In this paper we discuss combinations of different parametric modelling techniques and distance measures and weigh the benefits of each against the others.
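
    One closed-form parametric distance of the kind discussed, sketched here for single one-dimensional Gaussians: the symmetrised Kullback-Leibler divergence. Extending it to Gaussian mixtures requires approximations not shown in this sketch.

```python
import math

def kl_gauss(mu0, s0, mu1, s1):
    """Closed-form KL divergence KL(N(mu0, s0^2) || N(mu1, s1^2))."""
    return math.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

def sym_kl(mu0, s0, mu1, s1):
    """Symmetrised KL: a common parametric distance between the
    Gaussian models of two music excerpts."""
    return kl_gauss(mu0, s0, mu1, s1) + kl_gauss(mu1, s1, mu0, s0)
```

    Comparing two excerpts then costs a handful of arithmetic operations on model parameters instead of a pass over the raw feature frames.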

  9. Advanced Models and Algorithms for Self-Similar IP Network Traffic Simulation and Performance Analysis

    Science.gov (United States)

    Radev, Dimitar; Lokshina, Izabella

    2010-11-01

    The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequential generators and fixed-length sequence generators, together with efficient algorithms for simulating the self-similar behavior of IP network traffic data, are developed and applied. Numerical examples are provided, and simulation results are obtained and analyzed.
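
    For context, a standard construction of asymptotically self-similar traffic superposes ON/OFF sources whose period lengths are heavy-tailed (Pareto) rather than exponential. This generic illustration is not the paper's sequential or fixed-length sequence generators.

```python
import random

def pareto_onoff_traffic(n_sources=50, n_slots=1000, alpha=1.5, seed=1):
    """Aggregate ON/OFF sources with Pareto-distributed period lengths.
    For 1 < alpha < 2 the superposition is asymptotically self-similar,
    unlike Poisson traffic, which smooths out under aggregation."""
    rng = random.Random(seed)
    load = [0] * n_slots
    for _ in range(n_sources):
        t, on = 0, rng.random() < 0.5  # random initial phase
        while t < n_slots:
            # Heavy-tailed period length (>= 1 slot)
            length = int(rng.paretovariate(alpha)) + 1
            if on:
                for i in range(t, min(t + length, n_slots)):
                    load[i] += 1
            t += length
            on = not on
    return load
```

    Plotting such a trace at several aggregation levels shows the persistent burstiness that Poisson-based models fail to reproduce.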

  10. On Measuring Process Model Similarity Based on High-Level Change Operations

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, A.

    2008-01-01

    For various applications there is the need to compare the similarity between two process models. For example, given the as-is and to-be models of a particular business process, we would like to know how much they differ from each other and how we can efficiently transform the as-is into the to-be model.

  11. On Measuring Process Model Similarity based on High-level Change Operations

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, A.

    2007-01-01

    For various applications there is the need to compare the similarity between two process models. For example, given the as-is and to-be models of a particular business process, we would like to know how much they differ from each other and how we can efficiently transform the as-is into the to-be model.

  12. Improvements of evaporation drag model

    Institute of Scientific and Technical Information of China (English)

    LI Xiao-Yan; XU Ji-Jun

    2004-01-01

    A special visible experiment facility has been designed and built, and an observable experiment was performed by pouring one or several high-temperature particles into a water pool in the facility. The experimental results have verified Yang's evaporation drag model, which holds that the non-symmetric profiles of the local evaporation rate and the local vapor density bring about a resultant force on the hot particle that resists its motion. However, in Yang's evaporation drag model, radiation heat transfer is taken as the only way to transfer heat from the hot particle to the vapor-liquid interface, and all of the radiation energy is assumed to be deposited on the vapor-liquid interface, contributing to the vaporization rate and the mass balance of the vapor film. In the improved model, heat conduction and heat convection are also taken into account. This paper presents calculations with the improved model, putting emphasis on the effect of the hot particle's temperature on the radiation absorption behavior of water.

  13. An Exactly Soluble Hierarchical Clustering Model: Inverse Cascades, Self-Similarity, and Scaling

    CERN Document Server

    Gabrielov, A; Turcotte, D L

    1999-01-01

    We show how clustering as a general hierarchical dynamical process proceeds via a sequence of inverse cascades to produce self-similar scaling, as an intermediate asymptotic, which then truncates at the largest spatial scales. We show how this model can provide a general explanation for the behavior of several models that have been described as "self-organized critical," including forest-fire, sandpile, and slider-block models.

  14. Improvements in ECN Wake Model

    Energy Technology Data Exchange (ETDEWEB)

    Versteeg, M.C. [University of Twente, Enschede (Netherlands); Ozdemir, H.; Brand, A.J. [ECN Wind Energy, Petten (Netherlands)

    2013-08-15

    Wind turbines extract energy from the flow field, so the flow in the wake of a wind turbine contains less energy and more turbulence than the undisturbed flow, leaving less energy for downstream turbines to extract. In large wind farms, most turbines are located in the wake of one or more other turbines, causing the flow characteristics felt by these turbines to differ considerably from the free-stream conditions. The most important wake effect is generally considered to be the lower wind speed behind the turbine(s), since this decreases the energy production and as such the economic performance of a wind farm. The overall loss of a wind farm depends strongly on the conditions and the layout of the farm, but it can be in the order of 5-10%. Apart from the loss in energy production, an additional wake effect is the increase in turbulence intensity, which leads to higher fatigue loads. It is therefore important to understand the details of wake behavior in order to improve and/or optimize a wind farm layout. Within this study, improvements are presented for the existing ECN wake model, which constitutes the fundamental basis of ECN's FarmFlow wind farm wake simulation tool. The outline of this paper is as follows: first, the governing equations of the ECN wake farm model are presented. Then the near-wake modeling is discussed, its results are compared with the original near-wake modeling and with EWTW (ECN Wind Turbine Test Site Wieringermeer) data, and the results obtained for various near-wake implementation cases are shown. The details of the atmospheric stability model are given, together with a comparison against the solution of the original surface-layer model and the available EWTW measurements. Finally, the conclusions are summarized.

  15. On two-layer models and the similarity functions for the PBL

    Science.gov (United States)

    Brown, R. A.

    1982-01-01

    An operational Planetary Boundary Layer model which employs similarity principles and two-layer patching to provide state-of-the-art parameterization for the PBL flow is used to study the popularly used similarity functions, A and B. The expected trends with stratification are shown. The effects of baroclinicity, secondary flow, humidity, latitude, surface roughness variation and choice of characteristic height scale are discussed.

  16. Uncovering highly obfuscated plagiarism cases using fuzzy semantic-based similarity model

    OpenAIRE

    Salha M. Alzahrani; Naomie Salim; Vasile Palade

    2015-01-01

    Highly obfuscated plagiarism cases contain unseen and obfuscated texts, which pose difficulties when using existing plagiarism detection methods. A fuzzy semantic-based similarity model for uncovering obfuscated plagiarism is presented and compared with five state-of-the-art baselines. Semantic relatedness between words is studied based on the part-of-speech (POS) tags and WordNet-based similarity measures. Fuzzy-based rules are introduced to assess the semantic distance between source and su...

  17. LDA-Based Unified Topic Modeling for Similar TV User Grouping and TV Program Recommendation.

    Science.gov (United States)

    Pyo, Shinjee; Kim, Eunhui; Kim, Munchurl

    2015-08-01

    Social TV is a social media service via TV and social networks through which TV users exchange their experiences about TV programs that they are viewing. For social TV service, two technical aspects are envisioned: grouping of similar TV users to create social TV communities and recommending TV programs based on group and personal interests for personalizing TV. In this paper, we propose a unified topic model based on grouping of similar TV users and recommending TV programs as a social TV service. The proposed unified topic model employs two latent Dirichlet allocation (LDA) models. One is a topic model of TV users, and the other is a topic model of the description words for viewed TV programs. The two LDA models are then integrated via a topic proportion parameter for TV programs, which enforces the grouping of similar TV users and associated description words for watched TV programs at the same time in a unified topic modeling framework. The unified model identifies the semantic relation between TV user groups and TV program description word groups so that more meaningful TV program recommendations can be made. The unified topic model also overcomes an item ramp-up problem such that new TV programs can be reliably recommended to TV users. Furthermore, from the topic model of TV users, TV users with similar tastes can be grouped as topics, which can then be recommended as social TV communities. To verify our proposed method of unified topic-modeling-based TV user grouping and TV program recommendation for social TV services, in our experiments, we used real TV viewing history data and electronic program guide data from a seven-month period collected by a TV poll agency. The experimental results show that the proposed unified topic model yields an average 81.4% precision for 50 topics in TV program recommendation and its performance is an average of 6.5% higher than that of the topic model of TV users only. 
For TV user prediction with new TV programs, the average

  18. Genome-wide expression profiling of five mouse models identifies similarities and differences with human psoriasis.

    Directory of Open Access Journals (Sweden)

    William R Swindell

    Full Text Available Development of a suitable mouse model would facilitate the investigation of pathomechanisms underlying human psoriasis and would also assist in the development of therapeutic treatments. However, while many psoriasis mouse models have been proposed, no single model recapitulates all features of the human disease, and standardized validation criteria for psoriasis mouse models have not been widely applied. In this study, whole-genome transcriptional profiling is used to compare gene expression patterns manifested by human psoriatic skin lesions with those that occur in five psoriasis mouse models (K5-Tie2, imiquimod, K14-AREG, K5-Stat3C and K5-TGFbeta1). While the cutaneous gene expression profiles associated with each mouse phenotype exhibited statistically significant similarity to the expression profile of psoriasis in humans, each model displayed distinctive sets of similarities and differences in comparison to human psoriasis. For all five models, correspondence to the human disease was strong with respect to genes involved in epidermal development and keratinization. Immune- and inflammation-associated gene expression, in contrast, was more variable between models as compared to the human disease. These findings support the value of all five models as research tools, each with identifiable areas of convergence to and divergence from the human disease. Additionally, the approach used in this paper provides an objective and quantitative method for the evaluation of proposed mouse models of psoriasis, which can be strategically applied in future studies to score the strengths of mouse phenotypes relative to specific aspects of human psoriasis.

  19. A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure

    Directory of Open Access Journals (Sweden)

    Yang Zhou

    2016-01-01

    Full Text Available Generating visual place cells (VPCs) is an important topic in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and the existing methods for generating VPCs, a model of generating visual place cells based on environment perception and a similarity measure is abstracted in this paper. The VPC generation process is divided into three phases: environment perception, similarity measurement, and recruitment of a new place cell. Following this process, a specific method for generating VPCs is presented. External reference landmarks are obtained based on local invariant characteristics of the image, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and the VPCs' firing fields can be adjusted flexibly by changing the adjustment factor of the firing field (AFFF) and the firing rate's threshold (FRT).
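
    A minimal sketch of a similarity measure built from Euclidean distance and a Gaussian function, as the abstract describes. The width parameter sigma stands in for the firing-field adjustment factor and the threshold for the firing-rate threshold; this exact parameterisation is an assumption.

```python
import math

def similarity(x, y, sigma=1.0):
    """Gaussian similarity over Euclidean distance: 1 at zero distance,
    decaying smoothly as the observation moves away from the landmark."""
    d = math.dist(x, y)
    return math.exp(-d * d / (2.0 * sigma * sigma))

def fires(x, landmark, sigma=1.0, threshold=0.5):
    """A place cell 'fires' when similarity exceeds the threshold;
    widening sigma widens the firing field."""
    return similarity(x, landmark, sigma) > threshold
```

    Increasing sigma makes a position that was outside the firing field fall inside it, which is the flexible field adjustment the abstract mentions.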

  20. Similar extrusion and mapping optimization of die cavity modeling for special-shaped products

    Institute of Scientific and Technical Information of China (English)

    QI Hong-yuan; WANG Shuang-xin; ZHU Heng-jun

    2006-01-01

    Aimed at the modeling issues in the design and rapid machining of extrusion dies for special-shaped products, and with the help of Conformal Mapping theory, the Conformal Mapping function is determined by the given method of numerical trigonometric interpolation. Three-dimensional forming problems are transformed into two-dimensional ones, and a mathematical model of the die cavity surface is established based on different kinds of vertical curves, completing the mathematical model of plastic flow in the extrusion deformation of special-shaped products. By the upper bound method, both the vertical curves of the die cavity and their parameters are optimized. Combining the optimized model with the latest NC technology, the NC program of the die cavity and its CAM can be realized. Taking the similar extrusion of square-shaped products with arc radius as an instance, both metal plastic similar extrusion and die cavity optimization are carried out.

  1. Experimental Study of Dowel Bar Alternatives Based on Similarity Model Test

    Directory of Open Access Journals (Sweden)

    Chichun Hu

    2017-01-01

    Full Text Available In this study, a small-scale accelerated loading test based on similarity theory and the Accelerated Pavement Analyzer was developed to evaluate dowel bars with different materials and cross-sections. A jointed concrete specimen consisting of one dowel bar was designed as the scaled model for the test, and each specimen was subjected to 864 thousand loading cycles. Deflections between jointed slabs were measured with dial indicators, and strains of the dowel bars were monitored with strain gauges. The load transfer efficiency, differential deflection, and dowel-concrete bearing stress for each case were calculated from these measurements. The test results indicated that the effect of the dowel modulus on load transfer efficiency can be characterized with the similarity model test developed in this study. Moreover, a round steel dowel was found to perform similarly to a larger FRP dowel, and elliptical dowels can be preferentially considered in practice.

  2. Bianchi VI{sub 0} and III models: self-similar approach

    Energy Technology Data Exchange (ETDEWEB)

    Belinchon, Jose Antonio, E-mail: abelcal@ciccp.e [Departamento de Fisica, ETS Arquitectura, UPM, Av. Juan de Herrera 4, Madrid 28040 (Spain)

    2009-09-07

    We study several cosmological models with Bianchi VI{sub 0} and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are quite restrictive for the equation of state. We also study a perfect fluid model with time-varying constants G and LAMBDA. As in other models studied, we find that the behaviours of G and LAMBDA are related: if G behaves as a growing function of time then LAMBDA is a positive decreasing function of time, but if G is decreasing then LAMBDA{sub 0} is negative. We end by studying a massive cosmic string model, putting special emphasis on calculating the numerical values of the equations of state. We show that there is no self-similar solution for a string model with time-varying constants.

  3. Numerical model of a non-steady atmospheric planetary boundary layer, based on similarity theory

    DEFF Research Database (Denmark)

    Zilitinkevich, S.S.; Fedorovich, E.E.; Shabalova, M.V.

    1992-01-01

    A numerical model of a non-stationary atmospheric planetary boundary layer (PBL) over a horizontally homogeneous flat surface is derived on the basis of similarity theory. The two most typical turbulence regimes are reproduced: one corresponding to a convectively growing PBL and another correspon...

  4. Research on Kalman Filtering Algorithm for Deformation Information Series of Similar Single-Difference Model

    Institute of Scientific and Technical Information of China (English)

    L(U) Wei-cai; XU Shao-quan

    2004-01-01

    Using the similar single-difference methodology (SSDM) to solve for the deformation values of the monitoring points, the deformation information series is sometimes unstable. In order to overcome this shortcoming, a Kalman filtering algorithm for this series is established, and its correctness and validity are verified with test data obtained on a movable platform in the plane. The results show that Kalman filtering can improve the correctness, reliability and stability of the deformation information series.
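
    For illustration, a scalar Kalman filter under a random-walk state model can stabilise such a series. The process and measurement noise parameters q and r below are illustrative values, not those of the paper.

```python
def kalman_smooth(series, q=1e-4, r=1e-2):
    """Scalar Kalman filter for a noisy deformation series.
    State model: random walk with process noise q; each element of
    `series` is a measurement with noise variance r (assumed values)."""
    x, p = series[0], 1.0          # initial state estimate and covariance
    out = [x]
    for z in series[1:]:
        p = p + q                  # predict: covariance grows
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with measurement z
        p = (1.0 - k) * p          # posterior covariance
        out.append(x)
    return out
```

    With a small q relative to r, the filter trusts its prediction more than each noisy measurement, so spurious jumps in the series are damped while slow deformation trends are tracked.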

  5. Riccati-coupled similarity shock wave solutions for multispeed discrete Boltzmann models

    Energy Technology Data Exchange (ETDEWEB)

    Cornille, H. (Service de Physique Theorique, Gif-sur-Yvette (France)); Platkowski, T. (Warsaw Univ. (Poland))

    1993-05-01

    The authors study nonstandard shock-wave similarity solutions for three multispeed discrete Boltzmann models: (1) the square 8υ{sub i} model with speeds 1 and √2, with the x axis along one median; (2) the Cabannes cubic 14υ{sub i} model with speeds 1 and √3, with the x axis perpendicular to one face; and (3) another 14υ{sub i} model with speeds 1 and √2. These models have five independent densities and two nonlinear Riccati-coupled equations. The standard similarity shock waves, solutions of scalar Riccati equations, are monotonic, and the same behavior holds for the conservative macroscopic quantities. First, the exact similarity shock-wave solutions of the coupled Riccati equations are determined, and nonmonotonic behavior is observed for one density, with a smaller effect for one conservative macroscopic quantity, when a violation of microreversibility is allowed. Second, new results are obtained on the Whitham weak shock wave propagation. Third, the corresponding dynamical system is numerically solved, with microreversibility satisfied or not, and the analogous nonmonotonic behavior is observed. 9 refs., 2 figs., 1 tab.

  6. Improving land surface models with FLUXNET data

    Directory of Open Access Journals (Sweden)

    Y. -P. Wang

    2009-07-01

    Full Text Available There is a growing consensus that land surface models (LSMs) that simulate terrestrial biosphere exchanges of matter and energy must be better constrained with data to quantify and address their uncertainties. FLUXNET, an international network of sites that measure the land surface exchanges of carbon, water and energy using the eddy covariance technique, is a prime source of data for model improvement. Here we outline a multi-stage process for "fusing" (i.e. linking) LSMs with FLUXNET data to generate better models with quantifiable uncertainty. First, we describe FLUXNET data availability, and its random and systematic biases. We then introduce methods for assessing LSM model runs against FLUXNET observations in temporal and spatial domains. These assessments are a prelude to more formal model-data fusion (MDF). MDF links model to data, based on error weightings. In theory, MDF produces optimal analyses of the modelled system, but there are practical problems. We first discuss how to set model errors and initial conditions. In both cases incorrect assumptions will affect the outcome of the MDF. We then review the problem of equifinality, whereby multiple combinations of parameters can produce similar model output. Fusing multiple independent and orthogonal data provides a means to limit equifinality. We then show how parameter probability density functions (PDFs) from MDF can be used to interpret model validity, and to propagate errors into model outputs. Posterior parameter distributions are a useful way to assess the success of MDF, combined with a determination of whether model residuals are Gaussian. If the MDF scheme provides evidence for temporal variation in parameters, then that is indicative of a critical missing dynamic process. A comparison of parameter PDFs generated with the same model from multiple FLUXNET sites can provide insights into the concept and validity of plant functional types (PFTs) – we would expect similar parameter

  7. Improving land surface models with FLUXNET data

    Directory of Open Access Journals (Sweden)

    M. Williams

    2009-03-01

    Full Text Available There is a growing consensus that land surface models (LSMs) that simulate terrestrial biosphere exchanges of matter and energy must be better constrained with data to quantify and address their uncertainties. FLUXNET, an international network of sites that measure the land surface exchanges of carbon, water and energy using the eddy covariance technique, is a prime source of data for model improvement. Here we outline a multi-stage process for fusing LSMs with FLUXNET data to generate better models with quantifiable uncertainty. First, we describe FLUXNET data availability, and its random and systematic biases. We then introduce methods for assessing LSM model runs against FLUXNET observations in temporal and spatial domains. These assessments are a prelude to more formal model-data fusion (MDF). MDF links model to data, based on error weightings. In theory, MDF produces optimal analyses of the modelled system, but there are practical problems. We first discuss how to set model errors and initial conditions. In both cases incorrect assumptions will affect the outcome of the MDF. We then review the problem of equifinality, whereby multiple combinations of parameters can produce similar model output. Fusing multiple independent data provides a means to limit equifinality. We then show how parameter probability density functions (PDFs) from MDF can be used to interpret model process validity, and to propagate errors into model outputs. Posterior parameter distributions are a useful way to assess the success of MDF, combined with a determination of whether model residuals are Gaussian. If the MDF scheme provides evidence for temporal variation in parameters, then that is indicative of a critical missing dynamic process. A comparison of parameter PDFs generated with the same model from multiple FLUXNET sites can provide insights into the concept and validity of plant functional types (PFTs) – we would expect similar parameter estimates among sites

  8. Accretion disk dynamics. α-viscosity in self-similar self-gravitating models

    Science.gov (United States)

    Kubsch, Marcus; Illenseer, Tobias F.; Duschl, Wolfgang J.

    2016-04-01

    Aims: We investigate the suitability of α-viscosity in self-similar models for self-gravitating disks with a focus on active galactic nuclei (AGN) disks. Methods: We use a self-similar approach to simplify the partial differential equations arising from the evolution equation, which are then solved using numerical standard procedures. Results: We find a self-similar solution for the dynamical evolution of self-gravitating α-disks and derive the significant quantities. In the Keplerian part of the disk our model is consistent with standard stationary α-disk theory, and self-consistent throughout the self-gravitating regime. Positive accretion rates throughout the disk demand a high degree of self-gravitation. Combined with the temporal decline of the accretion rate and its low amount, the model prohibits the growth of large central masses. Conclusions: α-viscosity cannot account for the evolution of the whole mass spectrum of super-massive black holes (SMBH) in AGN. However, considering the involved scales it seems suitable for modelling protoplanetary disks.

  9. Scaling and interaction of self-similar modes in models of high Reynolds number wall turbulence

    Science.gov (United States)

    Sharma, A. S.; Moarref, R.; McKeon, B. J.

    2017-03-01

    Previous work has established the usefulness of the resolvent operator that maps the terms nonlinear in the turbulent fluctuations to the fluctuations themselves. Further work has described the self-similarity of the resolvent arising from that of the mean velocity profile. The orthogonal modes provided by the resolvent analysis describe the wall-normal coherence of the motions and inherit that self-similarity. In this contribution, we present the implications of this similarity for the nonlinear interaction between modes with different scales and wall-normal locations. By considering the nonlinear interactions between modes, it is shown that much of the turbulence scaling behaviour in the logarithmic region can be determined from a single arbitrarily chosen reference plane. Thus, the geometric scaling of the modes is impressed upon the nonlinear interaction between modes. Implications of these observations on the self-sustaining mechanisms of wall turbulence, modelling and simulation are outlined.

  10. Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models

    Directory of Open Access Journals (Sweden)

    Jin Dai

    2014-01-01

    Full Text Available The similarity between objects is a core research area of data mining. In order to reduce the interference of the uncertainty of natural language, a similarity measurement between normal cloud models is applied to text classification research. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed, which can efficiently accomplish the conversion between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the VSM model, the qualitative text concepts extracted from the same category are jumped up into a whole category concept. According to the cloud similarity between a test text and each category concept, the test text is assigned to the most similar category. Comparison among different text classifiers over different feature selection sets fully proves that not only does CCJU-TC have a strong ability to adapt to different text features, but its classification performance is also better than that of traditional classifiers.

  11. A Version-Similarity Based Trust Degree Computation Model for Crowdsourcing Geographic Data

    Science.gov (United States)

    Zhou, Xiaoguang; Zhao, Yijiang

    2016-06-01

    Quality evaluation and control have become the main concern of VGI. In this paper, trust is used as a proxy of VGI quality, and a version-similarity based trust degree computation model for crowdsourcing geographic data is presented. The model is based on the assumptions that the quality of a VGI object is mainly determined by the professional skill and integrity of its contributors (together called reputation in this paper) and that a contributor's reputation is movable. The contributor's reputation is calculated using the degree of similarity among the multiple versions of the same entity state. The trust degree of a VGI object is determined by the trust degree of its previous version, the reputation of the last contributor, and the modification proportion. In order to verify the presented model, a prototype system for computing the trust degree of VGI objects was developed in Visual C# 2010. The historical OpenStreetMap (OSM) data of Berlin were employed for experiments. The experimental results demonstrate that the quality of crowdsourcing geographic data is highly positively correlated with its trustworthiness. As the evaluation is based on version similarity rather than on direct subjective evaluation among users, the result is objective. Furthermore, because the movability of contributors' reputation is exploited, the method achieves higher assessment coverage than existing methods.
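A minimal sketch of the kind of update the abstract describes, with hypothetical formulas (the paper's exact similarity and trust equations are not reproduced here): reputation is proxied by a Jaccard similarity between the attribute sets of two versions, and the new trust degree blends the previous version's trust with the last contributor's reputation, weighted by the modification proportion.

```python
def version_similarity(v1, v2):
    # Toy version similarity: Jaccard overlap of two versions' attribute sets
    # (an assumption; real VGI versions would compare geometry and tags).
    s1, s2 = set(v1), set(v2)
    union = s1 | s2
    return len(s1 & s2) / len(union) if union else 1.0

def update_trust(prev_trust, contributor_rep, modification_ratio):
    # Hypothetical trust update: the more of the object was modified, the more
    # the new trust leans on the contributor's reputation.
    return (1 - modification_ratio) * prev_trust + modification_ratio * contributor_rep
```

With this sketch, an object of trust 0.5 edited halfway by a contributor of reputation 1.0 would move to trust 0.75.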

  12. Accretion disk dynamics: α-viscosity in self-similar self-gravitating models

    CERN Document Server

    Kubsch, Marcus; Duschl, W J

    2016-01-01

    Aims: We investigate the suitability of α-viscosity in self-similar models for self-gravitating disks with a focus on active galactic nuclei (AGN) disks. Methods: We use a self-similar approach to simplify the partial differential equations arising from the evolution equation, which are then solved using numerical standard procedures. Results: We find a self-similar solution for the dynamical evolution of self-gravitating α-disks and derive the significant quantities. In the Keplerian part of the disk our model is consistent with standard stationary α-disk theory, and self-consistent throughout the self-gravitating regime. Positive accretion rates throughout the disk demand a high degree of self-gravitation. Combined with the temporal decline of the accretion rate and its low amount, the model prohibits the growth of large central masses. Conclusions: α-viscosity cannot account for the evolution of the whole mass spectrum of super-massive black holes (SMBH) in AGN. However, conside...

  13. State impulsive control strategies for a two-languages competitive model with bilingualism and interlinguistic similarity

    Science.gov (United States)

    Nie, Lin-Fei; Teng, Zhi-Dong; Nieto, Juan J.; Jung, Il Hyo

    2015-07-01

    For the purpose of preserving endangered languages, we propose in this paper a novel two-language competitive model with bilingualism and interlinguistic similarity, in which state-dependent impulsive control strategies are introduced. The novel control model includes two control threshold values, which distinguishes it from previous state-dependent impulsive differential equations. Using qualitative analysis methods, we show that the control model exhibits two stable positive order-1 periodic solutions under some general conditions. Moreover, numerical simulations clearly illustrate the main theoretical results and the feasibility of the state-dependent impulsive control strategies; they also show that the strategy can be applied to other general two-language competitive models with the desired result. The results indicate that the fractions of the two competing languages can be kept within a reasonable level under almost any circumstances. A theoretical basis for finding new control measures to protect endangered languages is thereby offered.

  14. Matched weight loss induced by sleeve gastrectomy or gastric bypass similarly improves metabolic function in obese subjects.

    Science.gov (United States)

    Bradley, David; Magkos, Faidon; Eagon, J Christopher; Varela, J Esteban; Gastaldelli, Amalia; Okunade, Adewole L; Patterson, Bruce W; Klein, Samuel

    2014-09-01

    The effects of marked weight loss, induced by Roux-en-Y gastric bypass (RYGB) or sleeve gastrectomy (SG) surgeries, on insulin sensitivity, β-cell function and the metabolic response to a mixed meal were evaluated. Fourteen nondiabetic insulin-resistant patients who were scheduled to undergo SG (n = 7) or RYGB (n = 7) procedures completed a hyperinsulinemic-euglycemic clamp procedure and a mixed-meal tolerance test before surgery and after losing ∼20% of their initial body weight. Insulin sensitivity (insulin-stimulated glucose disposal during a clamp procedure), oral glucose tolerance (postprandial plasma glucose area under the curve), and β-cell function (insulin secretion in relationship to insulin sensitivity) improved after weight loss, and were not different between surgical groups. The metabolic response to meal ingestion was similar after RYGB or SG, manifested by rapid delivery of ingested glucose into the systemic circulation and a large early postprandial increase in plasma glucose, insulin, and C-peptide concentrations in both groups. When matched on weight loss, RYGB and SG surgeries result in similar improvements in the two major factors involved in regulating plasma glucose homeostasis, insulin sensitivity and β-cell function in obese people without diabetes. © 2014 The Obesity Society.

  15. Modeling of 3D-structure for regular fragments of low similarity unknown structure proteins

    Institute of Scientific and Technical Information of China (English)

    Peng Zhihong; Chen Jie; Lin Xiwen; Sang Yanchao

    2007-01-01

    Because it is hard to search for similar structures for low-similarity unknown-structure proteins directly in the Protein Data Bank (PDB), the 3D structure of secondary-structure regular fragments (α-helices, β-strands) of such proteins is modeled in this paper using protein secondary structure prediction software, the Basic Local Alignment Search Tool (BLAST), and the side-chain construction software SCWRL3. First, the protein secondary structure prediction software is adopted to extract secondary structure fragments from the unknown-structure proteins. Then, the regular fragments are adjusted by BLAST based on comparative modeling, providing main-chain configurations. Finally, SCWRL3 is applied to assemble side chains for the regular fragments, so that the 3D structure of regular fragments of low-similarity unknown-structure proteins is obtained. Regular fragments of several neurotoxins are used for testing. Simulation results show that the prediction errors are less than 0.06 nm for regular fragments of fewer than 10 amino acids, demonstrating the simplicity and effectiveness of the proposed method.

  16. Possible Implications of a Vortex Gas Model and Self-Similarity for Tornadogenesis and Maintenance

    CERN Document Server

    Dokken, Doug; Shvartsman, Misha; Bělík, Pavel; Potvin, Corey; Dahl, Brittany; McGovern, Amy

    2014-01-01

    We describe tornadogenesis and maintenance using the 3-dimensional vortex gas model presented in Chorin (1994). High-energy vortices with negative temperature in the sense of Onsager (1949) play an important role in the model. We speculate that the formation of high-temperature vortices is related to the helicity they inherit as they form or tilt into the vertical. We also exploit the notion of self-similarity to justify power laws derived from observations of weak and strong tornadoes presented in Cai (2005), Wurman and Gill (2000), and Wurman and Alexander (2005). Analysis of a Bryan Cloud Model (CM1) simulation of a tornadic supercell reveals scaling consistent with the observational studies.

  17. Initial virtual flight test for a dynamically similar aircraft model with control augmentation system

    Directory of Open Access Journals (Sweden)

    Linliang Guo

    2017-04-01

    Full Text Available To satisfy the validation requirements of flight control laws for advanced aircraft, wind-tunnel-based virtual flight testing has been implemented in a low-speed wind tunnel. A 3-degree-of-freedom gimbal, ventrally installed in the model, was used in conjunction with an actively controlled, dynamically similar model of the aircraft, which was equipped with an inertial measurement unit, an attitude and heading reference system, an embedded computer, and servo-actuators. The model, which could be rotated freely around its center of gravity by the aerodynamic moments, together with the flow field, the operator, and the real-time control system made up the closed-loop testing circuit. The model is statically unstable in the longitudinal direction, yet it can fly stably in the wind tunnel with the control augmentation of the flight control laws. The experimental results indicate that the model responds well to the operator's instructions, and its response in the tests shows reasonable agreement with the simulation results: the difference in the angle-of-attack response is less than 0.5°. The effects of the stability augmentation and attitude control laws were validated in the test, and the feasibility of the virtual flight test technique as a preliminary evaluation tool for advanced flight vehicle configuration research was also verified.

  18. Modeling of locally self-similar processes using multifractional Brownian motion of Riemann-Liouville type.

    Science.gov (United States)

    Muniandy, S V; Lim, S C

    2001-04-01

    Fractional Brownian motion (FBM) is widely used in the modeling of phenomena with power spectral density of power-law type. However, FBM has its limitation since it can only describe phenomena with monofractal structure or a uniform degree of irregularity characterized by the constant Hölder exponent. For more realistic modeling, it is necessary to take into consideration the local variation of irregularity, with the Hölder exponent allowed to vary with time (or space). One way to achieve such a generalization is to extend the standard FBM to multifractional Brownian motion (MBM) indexed by a Hölder exponent that is a function of time. This paper proposes an alternative generalization to MBM based on the FBM defined by the Riemann-Liouville type of fractional integral. The local properties of the Riemann-Liouville MBM (RLMBM) are studied and they are found to be similar to that of the standard MBM. A numerical scheme to simulate the locally self-similar sample paths of the RLMBM for various types of time-varying Hölder exponents is given. The local scaling exponents are estimated based on the local growth of the variance and the wavelet scalogram methods. Finally, an example of the possible applications of RLMBM in the modeling of multifractal time series is illustrated.
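The Riemann-Liouville FBM underlying RLMBM can be simulated by discretizing the fractional integral X(t) = Γ(H + 1/2)⁻¹ ∫₀ᵗ (t − s)^(H − 1/2) dB(s); replacing the constant H with a function H(t) gives a locally self-similar path in the spirit of the abstract. The following is a naive O(n²) sketch under these assumptions, not the paper's exact numerical scheme.

```python
import math
import random

def rl_mbm(holder, n=512, T=1.0, seed=1):
    # Riemann-Liouville multifractional Brownian motion (sketch):
    # X(t) = Gamma(H(t) + 1/2)^-1 * sum_{i<k} (t - s_i)^(H(t) - 1/2) * dB_i,
    # a left-point discretization of the fractional integral with
    # Brownian increments dB_i ~ N(0, dt). holder(t) must return a
    # time-varying Hoelder exponent in (0, 1).
    rng = random.Random(seed)
    dt = T / n
    db = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
    path = [0.0]
    for k in range(1, n + 1):
        t = k * dt
        h = holder(t)
        c = 1.0 / math.gamma(h + 0.5)
        path.append(c * sum((t - i * dt) ** (h - 0.5) * db[i] for i in range(k)))
    return path
```

For instance, `rl_mbm(lambda t: 0.5 + 0.3 * t)` produces a path whose local regularity increases along the trajectory.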

  19. A Novel Similarity Measure to Induce Semantic Classes and Its Application for Language Model Adaptation in a Dialogue System

    Institute of Scientific and Technical Information of China (English)

    Ya-Li Li; Wei-Qun Xu; Yong-Hong Yan

    2012-01-01

    In this paper, we propose a novel co-occurrence-probability-based similarity measure for inducing semantic classes. Clustering with the new similarity measure outperforms the widely used distance based on Kullback-Leibler divergence in precision, recall, and F1 evaluation. In our experiments, we induced semantic classes from an unannotated in-domain corpus and then used the induced classes and structures to generate a large in-domain corpus, which was then used for language model adaptation. The character recognition rate was improved from 85.2% to 91%. We thereby apply the new measure to alleviate the lack of domain data by first inducing classes and then generating data for a dialogue system.
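For context, the sketch below contrasts the symmetrized Kullback-Leibler divergence baseline mentioned in the abstract with a simple distribution-overlap similarity between two words' context distributions; the overlap measure is a Bhattacharyya-style stand-in, not the paper's co-occurrence-probability measure.

```python
import math

def sym_kl(p, q, eps=1e-12):
    # Symmetrized KL divergence between two context distributions,
    # smoothed with eps to tolerate zero probabilities (the baseline
    # distance the abstract compares against).
    def kl(a, b):
        return sum(ai * math.log((ai + eps) / (bi + eps)) for ai, bi in zip(a, b))
    return kl(p, q) + kl(q, p)

def overlap_similarity(p, q):
    # Stand-in co-occurrence similarity: Bhattacharyya coefficient,
    # 1.0 for identical distributions, smaller for divergent ones.
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
```

Words whose context distributions are close under either measure would fall into the same induced semantic class.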

  20. Locally self-similar phase diagram of the disordered Potts model on the hierarchical lattice.

    Science.gov (United States)

    Anglès d'Auriac, J-Ch; Iglói, Ferenc

    2013-02-01

    We study the critical behavior of the random q-state Potts model in the large-q limit on the diamond hierarchical lattice with an effective dimensionality d(eff)>2. By varying the temperature and the strength of the frustration the system has a phase transition line between the paramagnetic and the ferromagnetic phases which is controlled by four different fixed points. According to our renormalization group study the phase boundary in the vicinity of the multicritical point is self-similar; it is well represented by a logarithmic spiral. We expect an infinite number of reentrances in the thermodynamic limit; consequently one cannot define standard thermodynamic phases in this region.

  1. Improving the measurement of semantic similarity between gene ontology terms and gene products: insights from an edge- and IC-based hybrid method.

    Directory of Open Access Journals (Sweden)

    Xiaomei Wu

    Full Text Available BACKGROUND: Explicit comparisons based on the semantic similarity of Gene Ontology terms provide a quantitative way to measure the functional similarity between gene products and are widely applied in large-scale genomic research via integration with other models. Previously, we presented an edge-based method, Relative Specificity Similarity (RSS, which takes the global position of relevant terms into account. However, edge-based semantic similarity metrics are sensitive to the intrinsic structure of GO and simply consider terms at the same level in the ontology to be equally specific nodes, revealing the weaknesses that could be complemented using information content (IC. RESULTS AND CONCLUSIONS: Here, we used the IC-based nodes to improve RSS and proposed a new method, Hybrid Relative Specificity Similarity (HRSS. HRSS outperformed other methods in distinguishing true protein-protein interactions from false. HRSS values were divided into four different levels of confidence for protein interactions. In addition, HRSS was statistically the best at obtaining the highest average functional similarity among human-mouse orthologs. Both HRSS and the groupwise measure, simGIC, are superior in correlation with sequence and Pfam similarities. Because different measures are best suited for different circumstances, we compared two pairwise strategies, the maximum and the best-match average, in the evaluation. The former was more effective at inferring physical protein-protein interactions, and the latter at estimating the functional conservation of orthologs and analyzing the CESSM datasets. In conclusion, HRSS can be applied to different biological problems by quantifying the functional similarity between gene products. The algorithm HRSS was implemented in the C programming language, which is freely available from http://cmb.bnu.edu.cn/hrss.

  2. Self-similarities of periodic structures for a discrete model of a two-gene system

    Energy Technology Data Exchange (ETDEWEB)

    Souza, S.L.T. de, E-mail: thomaz@ufsj.edu.br [Departamento de Física e Matemática, Universidade Federal de São João del-Rei, Ouro Branco, MG (Brazil); Lima, A.A. [Escola de Farmácia, Universidade Federal de Ouro Preto, Ouro Preto, MG (Brazil); Caldas, I.L. [Instituto de Física, Universidade de São Paulo, São Paulo, SP (Brazil); Medrano-T, R.O. [Departamento de Ciências Exatas e da Terra, Universidade Federal de São Paulo, Diadema, SP (Brazil); Guimarães-Filho, Z.O. [Aix-Marseille Univ., CNRS PIIM UMR6633, International Institute for Fusion Science, Marseille (France)

    2012-03-12

    We report self-similar properties of periodic structures remarkably organized in the two-parameter space of a two-gene system, described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system by using Lyapunov exponents and isoperiodic diagrams, identifying periodic windows known as Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both kinds of periodic windows. We also identify Fibonacci-type series and the Golden ratio for Arnold tongues, and period multiple-of-three windows for shrimps. -- Highlights: ► The existence of noticeable periodic windows has been reported recently for several nonlinear systems. ► The periodic window distributions appear highly organized in two-parameter space. ► We characterize self-similar properties of Arnold tongues and shrimps for a two-gene model. ► We determine the period of the Arnold tongues recognizing a Fibonacci-type sequence. ► We explore self-similar features of the shrimps identifying multiple period-three structures.

  3. Modeling a Sensor to Improve Its Efficacy

    Directory of Open Access Journals (Sweden)

    Nabin K. Malakar

    2013-01-01

    Full Text Available Robots rely on sensors to provide them with information about their surroundings. However, high-quality sensors can be extremely expensive and cost-prohibitive. Thus many robotic systems must make do with lower-quality sensors. Here we demonstrate via a case study how modeling a sensor can improve its efficacy when employed within a Bayesian inferential framework. As a test bed we employ a robotic arm that is designed to autonomously take its own measurements using an inexpensive LEGO light sensor to estimate the position and radius of a white circle on a black field. The light sensor integrates the light arriving from a spatially distributed region within its field of view, weighted by its spatial sensitivity function (SSF). We demonstrate that by incorporating an accurate model of the light sensor SSF into the likelihood function of a Bayesian inference engine, an autonomous system can make improved inferences about its surroundings. The method presented here is data-based, fairly general, and made with plug-and-play in mind so that it could be implemented in similar problems.
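A toy 1D analogue of the approach, under stated assumptions (a boxcar SSF, a scene that is white to the right of an edge, Gaussian measurement noise, and a flat prior over candidate edge positions): the sensor model predicts an SSF-weighted intensity at each probe location, and grid-based Bayesian inference recovers the edge.

```python
import math

def predicted_intensity(edge, x, ssf_width=0.5):
    # Sensor model: the reading at position x is the fraction of the boxcar
    # SSF support [x - w, x + w] that overlaps the white region (right of edge).
    lo, hi = x - ssf_width, x + ssf_width
    white = max(0.0, hi - max(lo, edge))
    return white / (hi - lo)

def posterior_over_edges(readings, candidates, sigma=0.05):
    # Grid Bayesian inference: flat prior over candidate edge positions,
    # Gaussian likelihood around the SSF-predicted intensity.
    post = []
    for e in candidates:
        loglik = sum(-(r - predicted_intensity(e, x)) ** 2 / (2 * sigma ** 2)
                     for x, r in readings)
        post.append(math.exp(loglik))
    z = sum(post)
    return [p / z for p in post]
```

Feeding in readings simulated from a true edge at 1.0, the posterior concentrates on the candidate closest to 1.0; ignoring the SSF (treating the sensor as a point sampler) would bias the estimate.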

  4. Measuring similarity and improving stability in biomarker identification methods applied to Fourier-transform infrared (FTIR) spectroscopy.

    Science.gov (United States)

    Trevisan, Júlio; Park, Juhyun; Angelov, Plamen P; Ahmadzai, Abdullah A; Gajjar, Ketan; Scott, Andrew D; Carmichael, Paul L; Martin, Francis L

    2014-04-01

    FTIR spectroscopy is a powerful diagnostic tool that can also derive biochemical signatures of a wide range of cellular materials, such as cytology, histology, live cells, and biofluids. However, while classification is a well-established subject, biomarker identification lacks standards and validation of its methods. Validation of biomarker identification methods is difficult because, unlike classification, there is usually no reference biomarker against which to test the biomarkers extracted by a method. In this paper, we propose a framework to assess and improve the stability of biomarkers derived by a method, and to compare biomarkers derived by different method set-ups and between different methods by means of a proposed "biomarkers similarity index".

  5. A Multi-Model Stereo Similarity Function Based on Monogenic Signal Analysis in Poisson Scale Space

    Directory of Open Access Journals (Sweden)

    Jinjun Li

    2011-01-01

    Full Text Available A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD is proposed to match interest points and estimate disparity map for stereo images. Local multi-model monogenic image features include local orientation and instantaneous phase of the gray monogenic signal, local color phase of the color monogenic signal, and local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which is the extension of analytic signal to gray level image using Dirac operator and Laplace equation, consists of local amplitude, local orientation, and instantaneous phase of 2D image signal. The color monogenic signal is the extension of monogenic signal to color image based on Clifford algebras. The local color phase can be estimated by computing geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experiment results on the synthetic and natural stereo images show the performance of the proposed approach.

  6. Performance Modeling and Approximate Analysis of Multiserver Multiqueue Systems with Poisson and Self-similar Arrivals

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The problem of state space explosion is still an outstanding challenge in Markovian performance analysis for multiserver multiqueue (MSMQ) systems. The system behavior of a MSMQ system is described using stochastic high-level Petri net (SHLPN) models, and an approximate performance analysis technique is proposed based on decomposition and refinement methods as well as iteration technique. A real MSMQ system, Web-server cluster, is investigated. The performance of an integrated scheme of request dispatching and scheduling is analyzed with both Poisson and self-similar request arrivals. The study shows that the approximate analysis technique significantly reduces the complexity of the model solution and is also efficient for accuracy of numerical results.

  7. Document Representation and Clustering with WordNet Based Similarity Rough Set Model

    Directory of Open Access Journals (Sweden)

    Koichi Yamada

    2011-09-01

    Full Text Available Most studies on document clustering to date use the Vector Space Model (VSM) to represent documents, where each document is denoted by a vector in a word vector space. The standard VSM does not take into account the semantic relatedness between terms; thus, terms with some semantic similarity are dealt with in the same way as terms with no semantic relatedness. Since this neglect of semantics reduces the quality of clustering results, many studies have proposed various approaches to introduce knowledge of semantic relatedness into the VSM. Those approaches give better results than the standard VSM, but they still have their own issues. We propose a new approach that combines two of them, one using Rough Set theory and the co-occurrence of terms, and the other using WordNet knowledge, to solve these issues. Evaluation experiments show the advantage of the proposed approach over the others.
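The standard VSM baseline the abstract criticizes can be sketched in a few lines: term-frequency vectors over a shared vocabulary, compared by cosine similarity. Note how two one-word documents using the synonyms "car" and "automobile" score zero, which is exactly the semantic blindness that WordNet- and co-occurrence-based extensions target.

```python
import math
from collections import Counter

def tf_vectors(docs):
    # Build term-frequency vectors over the shared (sorted) vocabulary.
    vocab = sorted({w for d in docs for w in d.split()})
    vecs = []
    for d in docs:
        counts = Counter(d.split())
        vecs.append([counts[w] for w in vocab])
    return vecs, vocab

def cosine(u, v):
    # Standard cosine similarity between two term-frequency vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

Usage: `tf_vectors(["car", "automobile"])` yields orthogonal vectors, so their cosine similarity is 0.0 despite the obvious semantic relatedness.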

  8. Modeling the self-organization of vocabularies under phonological similarity effects

    CERN Document Server

    Vera, Javier

    2016-01-01

    This work develops a computational model (by Automata Networks) of short-term memory constraints involved in the formation of linguistic conventions on artificial populations of speakers. The individuals confound phonologically similar words according to a predefined parameter. The main hypothesis of this paper is that there is a critical range of working memory capacities, in particular, a critical phonological degree of confusion, which implies drastic changes in the final consensus of the entire population. A theoretical result proves the convergence of a particular case of the model. Computer simulations describe the evolution of an energy function that measures the amount of local agreement between individuals. The main finding is the appearance of sudden changes in the energy function at critical parameters. Finally, the results are related to previous work on the absence of stages in the formation of languages.

  9. Hard state of the urban canopy layer turbulence and its self-similar multiplicative cascade models

    Institute of Scientific and Technical Information of China (English)

    HU; Fei; CHENG; Xueling; ZHAO; Songnian; QUAN; Lihong

    2005-01-01

    It is found by experiment that under thermal convection conditions, the temperature fluctuation in urban canopy layer turbulence has a hard-state character, and the temperature difference between two points follows an exponential probability density function. At the same time, the turbulent energy dissipation rate fits a log-normal distribution, in accordance with the hypothesis proposed by Kolmogorov in 1962 and many reported experimental results. In this paper, the scaling law of the hard-state temperature n-th order structure function is derived from self-similar multiplicative cascade models. The theoretical formula is S_n = n/3 − μ{n(n+6)/72 + [2 ln n! − n ln 2]/(2 ln 6)}, where μ is the intermittency exponent. The formula fits the experimental results for exponents up to order 8 and is superior to the predictions of the Kolmogorov theory, the β-model, and the log-normal model.

  10. Improved Multi-strategy Concept Similarity Calculation Method

    Institute of Scientific and Technical Information of China (English)

    孙海真; 谢颖华

    2015-01-01

    Ontology mapping is the key technology for resolving heterogeneity among ontologies, and concept similarity computation is a crucial step in the ontology mapping process. In view of the problems in existing concept similarity computation for ontology mapping, an improved multi-strategy method is proposed. First, candidate concept pairs are filtered based on the similarity of concept names. Then, concept similarity is calculated by combining property-based, structure-based, and instance-based similarities, weighted with appropriately chosen weights. Finally, the benchmark test set provided by OAEI is used to test the performance of the mapping algorithm. The experimental results show that this method improves the recall and precision of ontology mapping while maintaining mapping efficiency and generality.
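A minimal sketch of the multi-strategy pipeline, with illustrative formulas that are assumptions rather than the paper's own: a normalized Levenshtein similarity for the name-based filtering step, followed by a weighted combination of the property-, structure-, and instance-based similarities.

```python
def name_similarity(a, b):
    # Normalized Levenshtein similarity in [0, 1] for the initial
    # name-based filtering step (an illustrative choice of measure).
    la, lb = len(a), len(b)
    if max(la, lb) == 0:
        return 1.0
    prev = list(range(lb + 1))
    for i in range(1, la + 1):
        cur = [i] + [0] * lb
        for j in range(1, lb + 1):
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1,
                         prev[j - 1] + (a[i - 1] != b[j - 1]))
        prev = cur
    return 1.0 - prev[lb] / max(la, lb)

def combined_similarity(sims, weights):
    # Weighted aggregation of (property, structure, instance) similarities;
    # the weights here are placeholders, not the paper's tuned values.
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(s * w for s, w in zip(sims, weights))
```

For example, concept pairs passing a name-similarity threshold would then be scored as `combined_similarity([s_prop, s_struct, s_inst], [0.5, 0.2, 0.3])`.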

  11. Overall Quality of Life Improves to Similar Levels after Mechanical Circulatory Support Regardless of Severity of Heart Failure before Implantation

    Science.gov (United States)

    Grady, Kathleen L; Naftel, David; Stevenson, Lynne; Dew, Mary Amanda; Weidner, Gerdi; Pagani, Francis D.; Kirklin, James K; Myers, Susan; Baldwin, Timothy; Young, James

    2014-01-01

    Background Pre-implant heart failure severity may affect post-implant health-related quality of life (HRQOL). The purpose of our study was to examine differences in HRQOL from before mechanical circulatory support (MCS) through 1 year after surgery, by INTERMACS patient profile. Methods Data from adult patients with advanced heart failure who received primary continuous-flow pumps between 6/23/06 and 3/31/10 and were enrolled in INTERMACS (n = 1,559) were analyzed. HRQOL data were collected using the EQ-5D-3L survey pre-implant and at 3, 6 and 12 months after implant. Statistical analyses included chi-square tests and t-tests, using all available data for each time period. Paired t-tests and sensitivity analyses were also conducted. Results Quality of life was poor before MCS implant among patients with INTERMACS profiles 1-7 and significantly improved after MCS for all profiles. Stratified by INTERMACS profile, problems within each of the five dimensions of HRQOL (i.e., mobility, self-care, usual activities, pain, and anxiety/depression) generally decreased from before to after implant. By six months after implant, patients with all INTERMACS profiles reported similar frequencies of problems for all HRQOL dimensions. Paired t-tests and sensitivity analyses supported the vast majority of our findings. Conclusions HRQOL is poor among advanced heart failure patients with INTERMACS profiles 1-7 before MCS implantation and improves to similar levels for patients who remain on MCS 1 year after surgery. Patients have problems in HRQOL dimensions before and after MCS; the frequency of reported problems decreases for all dimensions within most profiles across time. PMID:24360203

  12. Clusterwise HICLAS: a generic modeling strategy to trace similarities and differences in multiblock binary data.

    Science.gov (United States)

    Wilderjans, T F; Ceulemans, E; Kuppens, P

    2012-06-01

    In many areas of the behavioral sciences, different groups of objects are measured on the same set of binary variables, resulting in coupled binary object × variable data blocks. Take, as an example, success/failure scores for different samples of testees, with each sample belonging to a different country, regarding a set of test items. When dealing with such data, a key challenge consists of uncovering the differences and similarities between the structural mechanisms that underlie the different blocks. To tackle this challenge for the case of a single data block, one may rely on HICLAS, in which the variables are reduced to a limited set of binary bundles that represent the underlying structural mechanisms, and the objects are given scores for these bundles. In the case of multiple binary data blocks, one may perform HICLAS on each data block separately. However, such an analysis strategy obscures the similarities and, in the case of many data blocks, also the differences between the blocks. To resolve this problem, we proposed the new Clusterwise HICLAS generic modeling strategy. In this strategy, the different data blocks are assumed to form a set of mutually exclusive clusters. For each cluster, different bundles are derived. As such, blocks belonging to the same cluster have the same bundles, whereas blocks of different clusters are modeled with different bundles. Furthermore, we evaluated the performance of Clusterwise HICLAS by means of an extensive simulation study and by applying the strategy to coupled binary data regarding emotion differentiation and regulation.

  13. Achieving Full Dynamic Similarity with Small-Scale Wind Turbine Models

    Science.gov (United States)

    Miller, Mark; Kiefer, Janik; Westergaard, Carsten; Hultmark, Marcus

    2016-11-01

    Power and thrust data as a function of Reynolds number and Tip Speed Ratio are presented at conditions matching those of a full scale turbine. Such data has traditionally been very difficult to acquire due to the large length-scales of wind turbines, and the limited size of conventional wind tunnels. Ongoing work at Princeton University employs a novel, high-pressure wind tunnel (up to 220 atmospheres of static pressure) which uses air as the working fluid. This facility allows adjustment of the Reynolds number (via the fluid density) independent of the Tip Speed Ratio, up to a Reynolds number (based on chord and velocity at the tip) of over 3 million. Achieving dynamic similarity using this approach implies very high power and thrust loading, which results in mechanical loads greater than 200 times those experienced by a similarly sized model in a conventional wind tunnel. In order to accurately report the power coefficients, a series of tests were carried out on a specially designed model turbine drive-train using an external testing bench to replicate tunnel loading. An accurate map of the drive-train performance at various operating conditions was determined. Finally, subsequent corrections to the power coefficient are discussed in detail. Supported by: National Science Foundation Grant CBET-1435254 (program director Gregory Rorrer).

  14. Analysis and Modeling of Time-Correlated Characteristics of Rainfall-Runoff Similarity in the Upstream Red River Basin

    Directory of Open Access Journals (Sweden)

    Xiuli Sang

    2012-01-01

    Full Text Available We constructed a similarity model (based on the Euclidean distance between rainfall and runoff) to study the time-correlated characteristics of similar rainfall-runoff patterns in the upstream Red River Basin and presented a detailed evaluation of the time correlation of rainfall-runoff similarity. The rainfall-runoff similarity was used to determine the optimum similarity. The results showed that the time-correlated model is capable of predicting rainfall-runoff similarity in the upstream Red River Basin in a satisfactory way. Both the noised time series and the series denoised by thresholding the wavelet coefficients were applied to verify the accuracy of the model, and the corresponding optimum similar sets obtained as solutions showed an interesting and stable trend. On the whole, the annual mean similarity presented a gradually rising trend, quantitatively estimating the comprehensive influence of climate change and human activities on rainfall-runoff similarity.
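A minimal sketch of a Euclidean-distance-based similarity between two series, with assumed details (min-max normalization so rainfall and runoff are comparable in magnitude, and a 1/(1 + d) mapping of the distance into (0, 1]); the paper's exact model is not reproduced here.

```python
import math

def min_max(series):
    # Min-max normalization so rainfall and runoff series share a [0, 1] scale.
    lo, hi = min(series), max(series)
    if hi == lo:
        return [0.0 for _ in series]
    return [(v - lo) / (hi - lo) for v in series]

def similarity(rain, runoff):
    # Euclidean-distance-based similarity; identical series give 1.0,
    # and similarity decays toward 0 as the distance grows.
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(rain, runoff)))
    return 1.0 / (1.0 + d)
```

Usage: compute `similarity(min_max(rain), min_max(runoff))` per year and track the annual mean to examine time-correlated trends.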

  15. Improved Trailing Edge Noise Model

    DEFF Research Database (Denmark)

    Bertagnolio, Franck

    2012-01-01

    The modeling of the surface pressure spectrum under a turbulent boundary layer is investigated in the presence of an adverse pressure gradient along the flow direction. It is shown that discrepancies between measurements and results from a well-known model increase as the pressure gradient increases. This model is modified by introducing anisotropy in the definition of the vertical velocity component spectrum across the boundary layer. The degree of anisotropy is directly related to the strength of the pressure gradient. It is shown that by appropriately normalizing the pressure gradient and by tuning the anisotropy factor, experimental results can be closely reproduced by the modified model. In this work, the original TNO-Blake model is thus modified to account for the effects of a pressure gradient through turbulence anisotropy, and the model results are compared with measurements.

  16. A Stabilized Scale-Similarity Model for Explicitly-Filtered LES

    Science.gov (United States)

    Edoh, Ayaboe; Karagozian, Ann; Sankaran, Venkateswaran

    2016-11-01

    Accurate simulation of the filtered-scales in LES is affected by the competing presence of modeling and discretization errors. In order to properly assess modeling techniques, it is imperative to minimize the influence of the numerical scheme. The current investigation considers the inclusion of resolved and un-resolved sub-filter stress ([U]RSFS) components in the governing equations, which is suggestive of a mixed-model approach. Taylor-series expansions of discrete filter stencils are used to inform proper scaling of a Scale-Similarity model representation of the RSFS term, and accompanying stabilization is provided by tunable and scale-discriminant filter-based artificial dissipation techniques that represent the URSFS term implicitly. Effective removal of numerical error from the LES solution is studied with respect to the 1D Burgers equation with synthetic turbulence, and extension to 3D Navier-Stokes system computations is motivated. Distribution A: Approved for public release, distribution unlimited. Supported by AFOSR (PMs: Drs. Chiping Li and Michael Kendra).
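
    The scale-similarity idea the abstract builds on can be illustrated in one dimension: estimate the sub-filter stress by re-applying the filter to the resolved field (Bardina-style). The three-point box filter below is an assumption for illustration, not the authors' discrete stencil:

```python
import numpy as np

def box_filter(u):
    """Three-point top-hat filter on a periodic 1D grid."""
    return (np.roll(u, 1) + u + np.roll(u, -1)) / 3.0

def scale_similarity_stress(u_resolved):
    """Bardina-type scale-similarity estimate of the sub-filter stress:
    tau ~ filter(u*u) - filter(u)*filter(u), evaluated on the resolved
    (already filtered) field. A generic sketch, not the paper's exact
    [U]RSFS decomposition or its Taylor-series scaling."""
    fu = box_filter(u_resolved)
    return box_filter(u_resolved * u_resolved) - fu * fu
```

    A constant field produces zero stress, as it must; oscillatory content near the filter width produces the largest contributions, which is exactly where discretization error also concentrates -- hence the paper's emphasis on separating the two.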

  17. Self-Similar Models for the Mass Profiles of Early-type Lens Galaxies

    CERN Document Server

    Rusin, D; Keeton, C R

    2003-01-01

    We introduce a self-similar mass model for early-type galaxies, and constrain it using the aperture mass-radius relations determined from the geometries of 22 gravitational lenses. The model consists of two components: a concentrated component which traces the light distribution, and a more extended power-law component (rho propto r^-n) which represents the dark matter. We find that lens galaxies have total mass profiles which are nearly isothermal, or slightly steeper, on the several-kiloparsec radial scale spanned by the lensed images. In the limit of a single-component, power-law radial profile, the model implies n=2.07+/-0.13, consistent with isothermal (n=2). Models in which mass traces light are excluded at >99 percent confidence. An n=1 cusp (such as the Navarro-Frenk-White profile) requires a projected dark matter mass fraction of f_cdm = 0.22+/-0.10 inside 2 effective radii. These are the best statistical constraints yet obtained on the mass profiles of lenses, and provide clear evidence for a small ...
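
    The meaning of "nearly isothermal" can be made concrete: for a single-component power law rho ∝ r^-n with n < 3, integrating 4*pi*r^2*rho gives an enclosed mass scaling M(<r) ∝ r^(3-n), so n ≈ 2 means mass growing almost linearly with radius. A small worked sketch of that scaling relation:

```python
def mass_ratio(r1, r2, n):
    """Ratio M(<r2) / M(<r1) for a 3D power-law density rho ∝ r^-n.
    Integrating 4*pi*r^2 * r^-n dr gives M(<r) ∝ r^(3-n) for n < 3."""
    if not n < 3:
        raise ValueError("enclosed mass diverges at r = 0 for n >= 3")
    return (r2 / r1) ** (3.0 - n)

# Isothermal (n = 2): doubling the radius doubles the enclosed mass.
iso = mass_ratio(1.0, 2.0, 2.0)
# The best-fit n = 2.07 gives a slightly shallower mass growth.
fit = mass_ratio(1.0, 2.0, 2.07)
```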

  18. Uncovering highly obfuscated plagiarism cases using fuzzy semantic-based similarity model

    Directory of Open Access Journals (Sweden)

    Salha M. Alzahrani

    2015-07-01

    Full Text Available Highly obfuscated plagiarism cases contain unseen and obfuscated texts, which pose difficulties when using existing plagiarism detection methods. A fuzzy semantic-based similarity model for uncovering obfuscated plagiarism is presented and compared with five state-of-the-art baselines. Semantic relatedness between words is studied based on the part-of-speech (POS) tags and WordNet-based similarity measures. Fuzzy-based rules are introduced to assess the semantic distance between source and suspicious texts of short lengths, which implement the semantic relatedness between words as a membership function to a fuzzy set. In order to minimize the number of false positives and false negatives, a learning method that combines a permission threshold and a variation threshold is used to decide true plagiarism cases. The proposed model and the baselines are evaluated on 99,033 ground-truth annotated cases extracted from different datasets, including 11,621 (11.7%) handmade paraphrases, 54,815 (55.4%) artificial plagiarism cases, and 32,578 (32.9%) plagiarism-free cases. We conduct extensive experimental verifications, including the study of the effects of different segmentation schemes and parameter settings. Results are assessed using precision, recall, F-measure and granularity on stratified 10-fold cross-validation data. The statistical analysis using paired t-tests shows that the proposed approach is statistically significant in comparison with the baselines, which demonstrates the competence of the fuzzy semantic-based model to detect plagiarism cases beyond literal plagiarism. Additionally, the analysis of variance (ANOVA) statistical test shows the effectiveness of different segmentation schemes used with the proposed approach.
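
    The core fuzzy step can be sketched with a toy word-similarity function standing in for the WordNet-based membership measure; the single threshold below is a simplification of the paper's two-threshold learning method, and the synonym table is invented for illustration:

```python
def fuzzy_text_similarity(source, suspicious, word_sim, threshold=0.5):
    """Toy fuzzy semantic similarity between two short texts.
    word_sim(a, b) in [0, 1] plays the role of the WordNet-based
    membership function: each suspicious word's membership in the
    source 'fuzzy set' is its best relatedness to any source word,
    and the text score averages these memberships."""
    src = source.lower().split()
    susp = suspicious.lower().split()
    memberships = [max(word_sim(w, s) for s in src) for w in susp]
    score = sum(memberships) / len(memberships)
    return score, score >= threshold

def toy_word_sim(a, b):
    """Stand-in for a WordNet measure: exact match 1.0, listed
    synonym pairs 0.9, anything else 0.0."""
    synonyms = {frozenset(("car", "automobile")), frozenset(("buy", "purchase"))}
    if a == b:
        return 1.0
    return 0.9 if frozenset((a, b)) in synonyms else 0.0
```

    A paraphrase like "i purchase a automobile" against "i buy a car" scores high despite sharing almost no surface tokens, which is the behaviour needed for obfuscated plagiarism.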

  19. An analytic solution of a model of language competition with bilingualism and interlinguistic similarity

    Science.gov (United States)

    Otero-Espinar, M. V.; Seoane, L. F.; Nieto, J. J.; Mira, J.

    2013-12-01

    An in-depth analytic study of a model of language dynamics is presented: a model which tackles the problem of the coexistence of two languages within a closed community of speakers, taking into account bilingualism and incorporating a parameter to measure the distance between languages. Previous numerical simulations of the model indicated that, depending on the parameters, coexistence might lead either to the survival of both languages, with monolingual speakers of each alongside a bilingual community, or to the extinction of the weaker tongue. In this paper, that study is completed with thorough analytical calculations that settle the results in a robust way, and the earlier findings are refined with some modifications. The present analysis makes it possible to determine almost completely the number and nature of the equilibrium points of the model, which depend on its parameters, and to build a phase space based on them. We also draw conclusions on the way the languages evolve with time. Our rigorous considerations suggest ways to further improve the model and facilitate the comparison of its consequences with those from other approaches or with real data.
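
    This class of models can be sketched as a three-compartment ODE system integrated with Euler steps. The rate forms below are an illustrative Abrams-Strogatz-style construction in which prestige s weights each language's pull and the similarity parameter k routes a fraction of switchers through bilingualism; they are NOT the exact equations analysed in the paper:

```python
def simulate_language_shares(x0, b0, steps=2000, s=0.6, k=0.3, a=1.3, dt=0.01):
    """Euler integration of a schematic language-competition model.
    x: monolingual speakers of X, b: bilinguals, y = 1 - x - b: monolinguals
    of Y. Rate forms are an illustrative guess, not the paper's equations."""
    x, b = x0, b0
    for _ in range(steps):
        y = 1.0 - x - b
        pull_x = s * (x + b) ** a          # attraction toward X (prestige s)
        pull_y = (1.0 - s) * (y + b) ** a  # attraction toward Y
        # similarity k sends a fraction of switchers through bilingualism
        dx = y * (1.0 - k) * pull_x + b * pull_x - x * pull_y
        db = y * k * pull_x + x * k * pull_y - b * (pull_x + pull_y)
        x += dt * dx
        b += dt * db
    return x, 1.0 - x - b, b
```

    With s > 0.5 the prestigious language gains ground from a symmetric start; locating where coexistence survives instead is precisely the equilibrium analysis the paper carries out analytically.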

  20. Improved model for statistical alignment

    Energy Technology Data Exchange (ETDEWEB)

    Miklos, I.; Toroczkai, Z. (Zoltan)

    2001-01-01

    The statistical approach to molecular sequence evolution involves the stochastic modeling of the substitution, insertion and deletion processes. Substitution has been modeled in a reliable way for more than three decades by using finite Markov processes. Insertion and deletion, however, seem to be more difficult to model, and the recent approaches cannot acceptably deal with multiple insertions and deletions. A new method based on a generating function approach is introduced to describe the multiple insertion process. The presented algorithm computes the approximate joint probability of two sequences in O(l³) running time, where l is the geometric mean of the sequence lengths.

  1. The synaptonemal complex of basal metazoan hydra: more similarities to vertebrate than invertebrate meiosis model organisms.

    Science.gov (United States)

    Fraune, Johanna; Wiesner, Miriam; Benavente, Ricardo

    2014-03-20

    The synaptonemal complex (SC) is an evolutionarily well-conserved structure that mediates chromosome synapsis during prophase of the first meiotic division. Although its structure is conserved, the characterized protein components in the current metazoan meiosis model systems (Drosophila melanogaster, Caenorhabditis elegans, and Mus musculus) show no sequence homology, raising the question of whether the SC had a single evolutionary origin. However, our recent studies revealed the monophyletic origin of the mammalian SC protein components, many of which are ancient in Metazoa and were already present in the cnidarian Hydra. Remarkably, a comparison between different model systems disclosed a great similarity between the SC components of Hydra and mammals, while the proteins of the ecdysozoan systems (D. melanogaster and C. elegans) differ significantly. In this review, we introduce the basal-branching metazoan species Hydra as a potential novel invertebrate model system for meiosis research and particularly for the investigation of SC evolution, function and assembly. Also, available methods for SC research in Hydra are summarized.

  2. Study and application of monitoring plane displacement of a similarity model based on time-series images

    Institute of Scientific and Technical Information of China (English)

    Xu Jiankun; Wang Enyuan; Li Zhonghui; Wang Chao

    2011-01-01

    In order to compensate for the deficiencies of present methods of monitoring plane displacement in similarity model tests, such as inadequate real-time monitoring and excessive manual intervention, an effective monitoring method was proposed in this study. The major steps of the monitoring method are as follows: firstly, time-series images of the similarity model were obtained by a camera during the test; secondly, measuring points marked as artificial targets were automatically tracked and recognized from the time-series images; finally, the real-time plane displacement field was calculated from the fixed magnification between objects and images under the specific conditions. An application device for the method was then designed and tested. At the same time, a sub-pixel location method and a distortion error model were used to improve the measuring accuracy. The results indicate that this method can record the entire test, especially detailed non-uniform and sudden deformations. Compared with traditional methods, this method has a number of advantages, such as greater measurement accuracy and reliability, less manual intervention, higher automation, strong practicality, and much richer measurement information.
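
    The automatic target-tracking step can be sketched as template matching by normalised cross-correlation; a real implementation would add the sub-pixel refinement and distortion correction the abstract mentions:

```python
import numpy as np

def track_marker(frame, template):
    """Locate a marker template in a grayscale frame by exhaustive
    normalised cross-correlation; returns the (row, col) of the best
    match (top-left corner). Pixel displacement between frames times
    the object-image magnification gives physical displacement."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    best, best_pos = -np.inf, (0, 0)
    H, W = frame.shape
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            w = frame[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tn
            score = (wz * t).sum() / denom if denom > 0 else -1.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

    Tracking the same marker in two successive frames and differencing the positions yields the plane displacement of that measuring point.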

  3. 3D simulations of disc-winds extending radially self-similar MHD models

    CERN Document Server

    Stute, Matthias; Vlahakis, Nektarios; Tsinganos, Kanaris; Mignone, Andrea; Massaglia, Silvano

    2014-01-01

    Disc-winds originating from the inner parts of accretion discs are considered as the basic component of magnetically collimated outflows. The only available analytical MHD solutions to describe disc-driven jets are those characterized by the symmetry of radial self-similarity. However, radially self-similar MHD jet models, in general, have three geometrical shortcomings, (i) a singularity at the jet axis, (ii) the necessary assumption of axisymmetry, and (iii) the non-existence of an intrinsic radial scale, i.e. the jets formally extend to radial infinity. Hence, numerical simulations are necessary to extend the analytical solutions towards the axis, by solving the full three-dimensional equations of MHD and impose a termination radius at finite radial distance. We focus here on studying the effects of relaxing the (ii) assumption of axisymmetry, i.e. of performing full 3D numerical simulations of a disc-wind crossing all magnetohydrodynamic critical surfaces. We compare the results of these runs with previou...

  4. A developmental model for similarities and dissimilarities between schizophrenia and bipolar disorder.

    Science.gov (United States)

    Murray, Robin M; Sham, Pak; Van Os, Jim; Zanelli, Jolanta; Cannon, Mary; McDonald, Colm

    2004-12-01

    Schizophrenia and mania have a number of symptoms and epidemiological characteristics in common, and both respond to dopamine blockade. Family, twin and molecular genetic studies suggest that the reason for these similarities may be that the two conditions share certain susceptibility genes. On the other hand, individuals with schizophrenia have more obvious brain structural and neuropsychological abnormalities than those with bipolar disorder; and pre-schizophrenic children are characterised by cognitive and neuromotor impairments, which are not shared by children who later develop bipolar disorder. Furthermore, the risk-increasing effect of obstetric complications has been demonstrated for schizophrenia but not for bipolar disorder. Perinatal complications such as hypoxia are known to result in smaller volume of the amygdala and hippocampus, which have been frequently reported to be reduced in schizophrenia; familial predisposition to schizophrenia is also associated with decreased volume of these structures. We suggest a model to explain the similarities and differences between the disorders and propose that, on a background of shared genetic predisposition to psychosis, schizophrenia, but not bipolar disorder, is subject to additional genes or early insults, which impair neurodevelopment, especially of the medial temporal lobe.

  5. Similar pattern of peripheral neuropathy in mouse models of type 1 diabetes and Alzheimer's disease.

    Science.gov (United States)

    Jolivalt, C G; Calcutt, N A; Masliah, E

    2012-01-27

    There is an increasing awareness that diabetes has an impact on the CNS and that diabetes is a risk factor for Alzheimer's disease (AD). Links between AD and diabetes point to impaired insulin signaling as a common mechanism leading to defects in the brain. However, diabetes is predominantly characterized by peripheral, rather than central, neuropathy, and despite the common central mechanisms linking AD and diabetes, little is known about the effect of AD on the peripheral nervous system (PNS). In this study, we compared indexes of peripheral neuropathy and investigated insulin signaling in the sciatic nerve of insulin-deficient mice and amyloid precursor protein (APP) overexpressing transgenic mice. Insulin-deficient and APP transgenic mice displayed similar patterns of peripheral neuropathy with decreased motor nerve conduction velocity, thermal hypoalgesia, and loss of tactile sensitivity. Phosphorylation of the insulin receptor and glycogen synthase kinase 3β (GSK3β) was similarly affected in insulin-deficient and APP transgenic mice despite significantly different blood glucose and plasma insulin levels, and nerve of both models showed accumulation of Aβ-immunoreactive protein. Although diabetes and AD have different primary etiologies, both diseases share many abnormalities in both the brain and the PNS. Our data point to common deficits in the insulin-signaling pathway in both neurodegenerative diseases and support the idea that AD may cause disorders outside the higher CNS.

  6. Similarity dark energy models in Bianchi type -I space-time

    CERN Document Server

    Ali, Ahmad T; Alzahrani, Abdulah K

    2015-01-01

    We investigate some new similarity solutions of anisotropic dark energy and perfect fluid in Bianchi type-I space-time. Three different time dependent skewness parameters along the spatial directions are introduced to quantify the deviation of pressure from isotropy. We consider the case when the dark energy is minimally coupled to the perfect fluid as well as direct interaction with it. The Lie symmetry generators that leave the equation invariant are identified and we generate an optimal system of one-dimensional subalgebras. Each element of the optimal system is used to reduce the partial differential equation to an ordinary differential equation which is further analyzed. We solve the Einstein field equations, described by a system of non-linear partial differential equations (NLPDEs), by using the Lie point symmetry analysis method. The geometrical and kinematical features of the models and the behavior of the anisotropy of dark energy, are examined in detail.

  7. Generalized Fractional Master Equation for Self-Similar Stochastic Processes Modelling Anomalous Diffusion

    Directory of Open Access Journals (Sweden)

    Gianni Pagnini

    2012-01-01

    inhomogeneity and nonstationarity properties of the medium. For instance, when this superposition is applied to the time-fractional diffusion process, the resulting Master Equation emerges to be the governing equation of the Erdélyi-Kober fractional diffusion, which describes the evolution of the marginal distribution of the so-called generalized grey Brownian motion. This motion is a parametric class of stochastic processes that provides models for both fast and slow anomalous diffusion: it is made up of self-similar processes with stationary increments and depends on two real parameters. The class includes the fractional Brownian motion, the time-fractional diffusion stochastic processes, and the standard Brownian motion. In this framework, the M-Wright function (known also as the Mainardi function) emerges as a natural generalization of the Gaussian distribution, recovering the same key role that the Gaussian density plays for standard and fractional Brownian motion.
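
    The covariance structure of the fractional Brownian motion mentioned above is explicit, which makes exact (if O(n³)) path sampling straightforward via Cholesky factorisation; the small diagonal jitter is a numerical convenience:

```python
import numpy as np

def fbm_covariance(times, hurst):
    """Covariance of fractional Brownian motion:
    Cov(B_H(s), B_H(t)) = 0.5 * (s^2H + t^2H - |t - s|^2H).
    H = 0.5 recovers standard Brownian motion, Cov = min(s, t)."""
    t = np.asarray(times, dtype=float)
    s, u = np.meshgrid(t, t, indexing="ij")
    h2 = 2.0 * hurst
    return 0.5 * (s ** h2 + u ** h2 - np.abs(s - u) ** h2)

def sample_fbm(times, hurst, rng):
    """Draw one fBm path on a grid of positive times via Cholesky."""
    cov = fbm_covariance(times, hurst)
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(times)))
    return L @ rng.standard_normal(len(times))
```

    H > 0.5 gives persistent (fast, super-diffusive) paths and H < 0.5 anti-persistent (slow, sub-diffusive) ones, mirroring the fast/slow anomalous diffusion regimes of the generalized grey Brownian motion class.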

  8. A Unified Framework for Systematic Model Improvement

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2003-01-01

    A unified framework for improving the quality of continuous time models of dynamic systems based on experimental data is presented. The framework is based on an interplay between stochastic differential equation (SDE) modelling, statistical tests and multivariate nonparametric regression.

  9. More Similar than Different? Exploring Cultural Models of Depression among Latino Immigrants in Florida

    Directory of Open Access Journals (Sweden)

    Dinorah (Dina) Martinez Tyson

    2011-01-01

    Full Text Available The Surgeon General's report, “Culture, Race, and Ethnicity: A Supplement to Mental Health,” points to the need for subgroup-specific mental health research that explores the cultural variation and heterogeneity of the Latino population. Guided by cognitive anthropological theories of culture, we utilized ethnographic interviewing techniques to explore cultural models of depression among foreign-born Mexican (n=30), Cuban (n=30), Colombian (n=30), and island-born Puerto Rican (n=30) participants, who represent the largest Latino groups in Florida. Results indicate that Colombian, Cuban, Mexican, and Puerto Rican immigrants showed strong intragroup consensus in their models of depression causality, symptoms, and treatment. We found more agreement than disagreement among all four groups regarding core descriptions of depression, which was largely unexpected but can potentially be explained by their common immigrant experiences. Findings expand our understanding of Latino subgroup similarities and differences in their conceptualization of depression and can be used to inform the adaptation of culturally relevant interventions in order to better serve Latino immigrant communities.

  10. Lévy Flights and Self-Similar Exploratory Behaviour of Termite Workers: Beyond Model Fitting

    Science.gov (United States)

    Miramontes, Octavio; DeSouza, Og; Paiva, Leticia Ribeiro; Marins, Alessandra; Orozco, Sirio

    2014-01-01

    Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties –including Lévy flights– in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale. PMID:25353958

  11. Levy flights and self-similar exploratory behaviour of termite workers: beyond model fitting.

    Directory of Open Access Journals (Sweden)

    Octavio Miramontes

    Full Text Available Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties--including Lévy flights--in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale.

  12. Similarity on neural stem cells and brain tumor stem cells in transgenic brain tumor mouse models

    Institute of Scientific and Technical Information of China (English)

    Guanqun Qiao; Qingquan Li; Gang Peng; Jun Ma; Hongwei Fan; Yingbin Li

    2013-01-01

    Although it is believed that glioma is derived from brain tumor stem cells, the source and molecular signaling pathways of these cells are still unclear. In this study, we used stable doxycycline-inducible transgenic mouse brain tumor models (c-myc+/SV40Tag+/Tet-on+) to explore the malignant transformation potential of neural stem cells by observing the differences between neural stem cells and brain tumor stem cells in the tumor models. Results showed that chromosome instability occurred in brain tumor stem cells. The numbers of cytolysosomes and autophagosomes in brain tumor stem cells and induced neural stem cells were lower, and the proliferative activity was obviously stronger, than in normal neural stem cells. Normal neural stem cells could differentiate into glial fibrillary acidic protein-positive and microtubule-associated protein-2-positive cells, which were also negative for nestin. However, glial fibrillary acidic protein/nestin, microtubule-associated protein-2/nestin, and glial fibrillary acidic protein/microtubule-associated protein-2 double-positive cells were found in induced neural stem cells and brain tumor stem cells. Results indicate that induced neural stem cells are similar to brain tumor stem cells, and are possibly the source of brain tumor stem cells.

  13. Improving Agent Based Modeling of Critical Incidents

    Directory of Open Access Journals (Sweden)

    Robert Till

    2010-04-01

    Full Text Available Agent Based Modeling (ABM) is a powerful method that has been used to simulate potential critical incidents in the infrastructure and built environments. This paper discusses the modeling of some critical incidents currently simulated using ABM and how the models may be expanded and improved by using better physiological modeling, psychological modeling, modeling the actions of interveners, and introducing Geographic Information Systems (GIS) and open-source models.
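
    A minimal toy of two of the ingredients listed (a physiological attribute, walking speed, and a psychological one, panic) might look like the following; it illustrates the ABM pattern only and is not any specific published model:

```python
import random

class Agent:
    """A hypothetical evacuee: crude physiology (walking speed) and
    psychology (a panic probability of freezing for one time step)."""
    def __init__(self, pos, speed=1, panic=0.0):
        self.pos, self.speed, self.panic = pos, speed, panic

def evacuate(agents, exit_pos, steps, rng):
    """Advance agents along a 1D corridor toward an exit and count how
    many get out within the time budget. GIS coupling would replace
    the corridor with real geometry; interveners would be additional
    agents that modify others' states."""
    escaped = 0
    for _ in range(steps):
        for a in list(agents):
            if rng.random() < a.panic:
                continue                  # frozen by panic this step
            a.pos += a.speed
            if a.pos >= exit_pos:         # reached the exit
                agents.remove(a)
                escaped += 1
    return escaped
```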

  14. A monkey model of acetaminophen-induced hepatotoxicity; phenotypic similarity to human.

    Science.gov (United States)

    Tamai, Satoshi; Iguchi, Takuma; Niino, Noriyo; Mikamoto, Kei; Sakurai, Ken; Sayama, Ayako; Shimoda, Hitomi; Takasaki, Wataru; Mori, Kazuhiko

    2017-01-01

    Species-specific differences in the hepatotoxicity of acetaminophen (APAP) have been shown. To establish a monkey model of APAP-induced hepatotoxicity, which has not been previously reported, APAP at doses up to 2,000 mg/kg was administered orally to fasting male and female cynomolgus monkeys (n = 3-5/group) pretreated intravenously with or without 300 mg/kg of the glutathione biosynthesis inhibitor, L-buthionine-(S,R)-sulfoximine (BSO). In all the animals, APAP at 2,000 mg/kg with BSO but not without BSO induced hepatotoxicity, which was characterized histopathologically by centrilobular necrosis and vacuolation of hepatocytes. Plasma levels of APAP and its reactive metabolite N-acetyl-p-benzoquinone imine (NAPQI) increased 4 to 7 hr after the APAP treatment. The mean Cmax level of APAP at 2,000 mg/kg with BSO was approximately 200 µg/mL, which was comparable to the high-risk cutoff value of the Rumack-Matthew nomogram. Interestingly, plasma alanine aminotransferase (ALT) did not change until 7 hr and increased 24 hr or later after the APAP treatment, indicating that this phenotypic outcome was similar to that in humans. In addition, circulating liver-specific miR-122 and miR-192 levels also increased 24 hr or later compared with ALT, suggesting that circulating miR-122 and miR-192 may serve as potential biomarkers to detect hepatotoxicity in cynomolgus monkeys. These results suggest that the hepatotoxicity induced by APAP in the monkey model shown here was translatable to humans in terms of toxicokinetics and its toxic nature, and this model would be useful to investigate mechanisms of drug-induced liver injury and also potential translational biomarkers in humans.

  15. System-approach methods for modeling and testing similarity of in vitro dissolutions of drug dosage formulations.

    Science.gov (United States)

    Dedík, Ladislav; Durisová, Mária

    2002-07-01

    System-approach based modeling methods are used to model dynamic systems describing in vitro dissolutions of drug dosage formulations. Employing the models of these systems, model-dependent criteria are proposed for testing similarity between in vitro dissolutions of different drug dosage formulations. The criteria proposed are exemplified and compared with the criterion called the similarity factor f(2), commonly used in the field of biomedicine. Advantages of the criteria proposed over this factor are presented.
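
    The similarity factor f2, against which the proposed model-dependent criteria are compared, has a standard regulatory definition; for two percent-dissolved profiles sampled at the same n time points:

```python
import math

def f2_similarity(reference, test):
    """Similarity factor for two dissolution profiles:
    f2 = 50 * log10(100 / sqrt(1 + mean squared difference)).
    Identical profiles give f2 = 100; a uniform 10% point-wise
    difference gives f2 just under 50, the conventional cutoff
    below which profiles are declared dissimilar."""
    if len(reference) != len(test):
        raise ValueError("profiles must share the same time points")
    msd = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))
```

    Because f2 ignores the time course (any permutation of the differences gives the same value), model-based criteria of the kind the paper proposes can discriminate dissolution dynamics that f2 treats as identical.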

  16. Study on similar model of high pressure water jet impacting coal rock

    Science.gov (United States)

    Liu, Jialiang; Wang, Mengjin; Zhang, Di

    2017-08-01

    Based on similarity theory and dimensional analysis, the similarity criteria for the coal rock mechanical parameters were deduced. The similar materials were mainly composed of cement, sand, nitrile rubber powder and polystyrene, controlled through the water-cement ratio, cement-sand ratio, curing time and additive volume ratio. The intervals of these factors were obtained by carrying out a series of material compression tests. By comparing basic mechanical parameters such as bulk density, compressive strength, Poisson's ratio and elastic modulus between the coal rock prototype and the similar materials, the optimal mix proportion for the coal rock similar materials was finally determined based on orthogonal design tests.

  17. Olympic weightlifting and plyometric training with children provides similar or greater performance improvements than traditional resistance training.

    Science.gov (United States)

    Chaouachi, Anis; Hammami, Raouf; Kaabi, Sofiene; Chamari, Karim; Drinkwater, Eric J; Behm, David G

    2014-06-01

    A number of organizations recommend that advanced resistance training (RT) techniques can be implemented with children. The objective of this study was to evaluate the effectiveness of Olympic-style weightlifting (OWL), plyometrics, and traditional RT programs with children. Sixty-three children (10-12 years) were randomly allocated to a 12-week control, OWL, plyometric, or traditional RT program. Pre- and post-training tests included body mass index (BMI), sum of skinfolds, countermovement jump (CMJ), horizontal jump, balance, 5- and 20-m sprint times, isokinetic force and power at 60 and 300° · s(-1). Magnitude-based inferences were used to analyze the likelihood of an effect having a standardized (Cohen's) effect size exceeding 0.20. All interventions were generally superior to the control group. Olympic weightlifting was >80% likely to provide substantially better improvements than plyometric training for CMJ, horizontal jump, and 5- and 20-m sprint times, whereas >75% likely to substantially exceed traditional RT for balance and isokinetic power at 300° · s(-1). Plyometric training was >78% likely to elicit substantially better training adaptations than traditional RT for balance, isokinetic force at 60 and 300° · s(-1), isokinetic power at 300° · s(-1), and 5- and 20-m sprints. Traditional RT only exceeded plyometric training for BMI and isokinetic power at 60° · s(-1). Hence, OWL and plyometrics can provide similar or greater performance adaptations for children. It is recommended that any of the 3 training modalities can be implemented under professional supervision with proper training progressions to enhance training adaptations in children.

  18. Hoxb8 conditionally immortalised macrophage lines model inflammatory monocytic cells with important similarity to dendritic cells.

    Science.gov (United States)

    Rosas, Marcela; Osorio, Fabiola; Robinson, Matthew J; Davies, Luke C; Dierkes, Nicola; Jones, Simon A; Reis e Sousa, Caetano; Taylor, Philip R

    2011-02-01

    We have examined the potential to generate bona fide macrophages (MØ) from conditionally immortalised murine bone marrow precursors. MØ can be derived from Hoxb8 conditionally immortalised macrophage precursor cell lines (MØP) using either M-CSF or GM-CSF. When differentiated in GM-CSF (GM-MØP) the resultant cells resemble GM-CSF bone marrow-derived dendritic cells (BMDC) in morphological phenotype, antigen phenotype and functional responses to microbial stimuli. In spite of this high similarity between the two cell types and the ability of GM-MØP to effectively present antigen to a T-cell hybridoma, these cells are comparatively poor at priming the expansion of IFN-γ responses from naïve CD4(+) T cells. The generation of MØP from transgenic or genetically aberrant mice provides an excellent opportunity to study the inflammatory role of GM-MØP, and reduces the need for mouse colonies in many studies. Hence differentiation of conditionally immortalised MØPs in GM-CSF represents a unique in vitro model of inflammatory monocyte-like cells, with important differences from bone marrow-derived dendritic cells, which will facilitate functional studies relating to the many 'sub-phenotypes' of inflammatory monocytes.

  19. Utilizing Statistical Semantic Similarity Techniques for Ontology Mapping——with Applications to AEC Standard Models

    Institute of Scientific and Technical Information of China (English)

    Pan Jiayi; Chin-Pang Jack Cheng; Gloria T. Lau; Kincho H. Law

    2008-01-01

    The objective of this paper is to introduce three semi-automated approaches for ontology mapping using relatedness analysis techniques. In the architecture, engineering, and construction (AEC) industry, there exist a number of ontological standards to describe the semantics of building models. Although the standards share similar scopes of interest, the task of comparing and mapping concepts among standards is challenging due to their differences in terminologies and perspectives. Ontology mapping is therefore necessary to achieve information interoperability, which allows two or more information sources to exchange data and to re-use the data for further purposes. The attribute-based approach, corpus-based approach, and name-based approach presented in this paper adopt the statistical relatedness analysis techniques to discover related concepts from heterogeneous ontologies. A pilot study is conducted on IFC and CIS/2 ontologies to evaluate the approaches. Preliminary results show that the attribute-based approach outperforms the other two approaches in terms of precision and F-measure.

  20. Using argumentation to retrieve articles with similar citations: an inquiry into improving related articles search in the MEDLINE digital library.

    Science.gov (United States)

    Tbahriti, Imad; Chichester, Christine; Lisacek, Frédérique; Ruch, Patrick

    2006-06-01

    The aim of this study is to investigate the relationships between citations and the scientific argumentation found in abstracts. We design a related-article search task and observe how the argumentation can affect the search results. We extracted citation lists from a set of 3200 full-text papers originating from a narrow domain. In parallel, we recovered the corresponding MEDLINE records for analysis of the argumentative moves. Our argumentative model is founded on four classes: PURPOSE, METHODS, RESULTS and CONCLUSION. A Bayesian classifier trained on explicitly structured MEDLINE abstracts generates these argumentative categories. The categories are used to generate four different argumentative indexes. A fifth index contains the complete abstract, together with the title and the list of Medical Subject Headings (MeSH) terms. To appraise the relationship of the moves to the citations, the citation lists were used as the criteria for determining relatedness of articles, establishing a benchmark: two articles are considered "related" if they share a significant set of co-citations. Our results show that the average precision of queries with the PURPOSE and CONCLUSION features is the highest, while the precision of the RESULTS and METHODS features is relatively low. A linear weighting combination of the moves is proposed, which significantly improves retrieval of related articles.
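    The linear weighting combination proposed above can be sketched as a weighted sum of per-move similarities. The weights and the toy bag-of-words cosine similarity below are illustrative assumptions (chosen to mirror the finding that PURPOSE and CONCLUSION carry the most signal), not values reported in the paper:

    ```python
    import math
    from collections import Counter

    # Hypothetical per-move weights: PURPOSE and CONCLUSION gave the highest
    # precision in the study, so they receive the largest weights here.
    WEIGHTS = {"PURPOSE": 0.35, "METHODS": 0.10, "RESULTS": 0.15, "CONCLUSION": 0.40}

    def cosine(a: Counter, b: Counter) -> float:
        """Cosine similarity between two bag-of-words vectors."""
        dot = sum(n * b[t] for t, n in a.items())
        na = math.sqrt(sum(n * n for n in a.values()))
        nb = math.sqrt(sum(n * n for n in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def related_score(query_moves: dict, doc_moves: dict) -> float:
        """Linear weighted combination of per-move similarities."""
        return sum(w * cosine(Counter(query_moves.get(m, "").lower().split()),
                              Counter(doc_moves.get(m, "").lower().split()))
                   for m, w in WEIGHTS.items())

    doc = {"PURPOSE": "improve related article search",
           "METHODS": "bayesian classifier on structured abstracts",
           "RESULTS": "purpose and conclusion indexes perform best",
           "CONCLUSION": "weighted combination improves retrieval"}
    print(round(related_score(doc, doc), 2))  # identical moves -> full score 1.0
    ```

    Because the weights sum to 1, a document scored against itself reaches the maximum score of 1.0 when all four moves are present.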

  1. Comparative modeling of the human monoamine transporters: similarities in substrate binding.

    Science.gov (United States)

    Koldsø, Heidi; Christiansen, Anja B; Sinning, Steffen; Schiøtt, Birgit

    2013-02-20

    The amino acid compositions of the substrate binding pockets of the three human monoamine transporters are compared, as is the orientation of the endogenous substrates serotonin, dopamine, and norepinephrine bound in them. Through a combination of homology modeling, induced fit dockings, molecular dynamics simulations, and uptake experiments in mutant transporters, we propose a common binding mode for the three substrates. The longitudinal axis of the substrates is similarly oriented in all three, forming an ionic interaction between the ammonium group and a highly conserved aspartate, Asp98 (serotonin transporter, hSERT), Asp79 (dopamine transporter, hDAT), and Asp75 (norepinephrine transporter, hNET). The 6-position of serotonin and the para-hydroxyl groups of dopamine and norepinephrine were found to face Ala173 in hSERT, Gly153 in hDAT, and Gly149 in hNET. Three rotations of the substrates around the longitudinal axis were identified. In each mode, an aromatic hydroxyl group of the substrates occupied equivalent volumes of the three binding pockets, where small changes in amino acid composition explain the differences in selectivity. Uptake experiments support that the 5-hydroxyl group of serotonin and the meta-hydroxyl group of norepinephrine and dopamine are placed in the hydrophilic pocket around Ala173, Ser438, and Thr439 in hSERT, corresponding to Gly149, Ser419, and Ser420 in hNET and Gly153, Ser422, and Ala423 in hDAT. Furthermore, hDAT was found to possess an additional hydrophilic pocket around Ser149 to accommodate the para-hydroxyl group. Understanding these subtle differences between the binding site compositions of the three transporters is imperative for understanding the substrate selectivity, which could eventually aid in developing future selective medicines.

  2. Development of a simple model for batch and boundary information updation for a similar ship's block model

    Institute of Scientific and Technical Information of China (English)

    LEE Hyeon-deok; SON Myeong-jo; OH Min-jae; LEE Hyung-woo; KIM Tae-wan

    2012-01-01

    In early 2000, large domestic shipyards introduced shipbuilding 3D computer-aided design (CAD) to the hull production design process to define manufacturing and assembly information. The production design process accounts for most of the man-hours (M/H) of the entire design process and is closely connected to yard production, because designs must take into account the production schedule of the shipyard, the current state of the dock needed to mount the ship's block, and supply information. Therefore, many shipyards are investigating the complete automation of the production design process to reduce the M/H for designers. However, these problems are still unresolved, and a clear direction is needed for research on the automatic design base of manufacturing rules, batches reflecting changed building specifications, batch updates of boundary information for hull members, and management of the hull model change history to automate the production design process. In this study, a process was developed to aid production design engineers in designing a new ship's hull block model from that of a similar ship previously built, based on AVEVA Marine. An automation system that uses the similar ship's hull block model is proposed to reduce M/H and human errors by the production design engineer. First, scheme files holding important information were constructed in a database to automatically update hull block model modifications. Second, for batch updates, the database's tables, including building specifications, and the referential integrity of a relational database were compared. In particular, this study focused on reflecting frequent modifications of building specifications and regeneration of the boundary information of adjacent panels due to changes in a specific panel. Third, a rollback function is proposed in which the database (DB) is used to return to previously designed panels.

  3. Ethnic differences in the effects of media on body image: the effects of priming with ethnically different or similar models.

    Science.gov (United States)

    Bruns, Gina L; Carter, Michele M

    2015-04-01

    Media exposure has been positively correlated with body dissatisfaction. While body image concerns are common, being African American has been found to be a protective factor in the development of body dissatisfaction. Participants viewed ten advertisements showing 1) ethnically-similar thin models; 2) ethnically-different thin models; 3) ethnically-similar plus-sized models; or 4) ethnically-diverse plus-sized models. Following exposure, body image was measured. African American women had less body dissatisfaction than Caucasian women. Ethnically-similar thin-model conditions did not elicit greater body dissatisfaction scores than ethnically-different thin or plus-sized models, nor did the ethnicity of the model impact ratings of body dissatisfaction for women of either race. There were no differences among the African American women exposed to plus-sized versus thin models. Among Caucasian women, exposure to plus-sized models resulted in greater body dissatisfaction than exposure to thin models. Results support existing literature that African American women experience less body dissatisfaction than Caucasian women, even following exposure to an ethnically-similar thin model. Additionally, women exposed to plus-sized model conditions experienced greater body dissatisfaction than those shown thin models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. A Procedural Model for Process Improvement Projects

    OpenAIRE

    Kreimeyer, Matthias; Daniilidis, Charampos; Lindemann, Udo

    2017-01-01

    Process improvement projects are of a complex nature. It is therefore necessary to use experience and knowledge gained in previous projects when executing a new project. Yet, there are few pragmatic planning aids, and transferring the institutional knowledge from one project to the next is difficult. This paper proposes a procedural model that extends common models for project planning to enable staff on a process improvement project to adequately plan their projects, enabling them to documen...

  5. A Model for Comparative Analysis of the Similarity between Android and iOS Operating Systems

    Directory of Open Access Journals (Sweden)

    Lixandroiu R.

    2014-12-01

    Due to the recent expansion of mobile devices, this article analyzes two of the most widely used mobile operating systems (OSs). The analysis is based on calculating Jaccard's similarity coefficient. To complete the analysis, we developed a hierarchy of factors for evaluating OSs. The analysis has shown that the two OSs are similar in terms of functionality, but there are a number of factors that, when weighted, make a difference.
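    The comparison method named in this record, Jaccard's similarity coefficient, can be sketched directly. The feature sets below are hypothetical placeholders, not the factor hierarchy developed in the paper:

    ```python
    def jaccard(a: set, b: set) -> float:
        """Jaccard similarity coefficient: |A ∩ B| / |A ∪ B|."""
        if not a and not b:
            return 1.0  # convention: two empty sets are identical
        return len(a & b) / len(a | b)

    # Hypothetical feature sets for two mobile operating systems
    # (placeholders, not the weighted factors from the paper)
    android = {"multitasking", "widgets", "voice_assistant", "nfc", "sideloading"}
    ios = {"multitasking", "widgets", "voice_assistant", "nfc", "facetime"}

    print(round(jaccard(android, ios), 2))  # 4 shared of 6 total -> 0.67
    ```

    A weighted variant, as the abstract suggests, would replace the raw set cardinalities with sums of per-feature importance weights.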

  6. The positive group affect spiral : a dynamic model of the emergence of positive affective similarity in work groups

    NARCIS (Netherlands)

    Walter, F.; Bruch, H.

    2008-01-01

    This conceptual paper seeks to clarify the process of the emergence of positive collective affect. Specifically, it develops a dynamic model of the emergence of positive affective similarity in work groups. It is suggested that positive group affective similarity and within-group relationship qualit


  8. An improved model of Robinson equivalent circuit analytical model

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The Robinson equivalent circuit analytical model can only be used to calculate the shielding effectiveness of an enclosure with identical multiple holes in one wall; it cannot be used for different multiple holes in two walls. To meet this practical requirement, this article uses Konefal's and Farhana's characteristic impedances of apertures to extend the equivalent circuit analytical model to different multiple holes in two walls. The improved equivalent circuit analytical model is more broadly applicable than the Robinson equivalent circuit analytical model. In the article, all kinds of enclosures are simulated by TLM (transmission-line matrix method) to prove that the improved model is feasible in the multimode case.

  9. RAINDROPS FROM MOTIONLESS CLOUD USING PROPORTIONALITY AND GEOMETRIC SIMILARITY IN MATHEMATICAL MODELING

    OpenAIRE

    Dr. S. Jayakumar; Geetha, S.

    2017-01-01

    A mathematical model is an idealization of a real-world phenomenon and never a completely accurate representation. Any model has its limitations; a good one can still provide valuable results and conclusions. A mathematical model is a construct designed to study a particular real-world system or behavior of interest. The model allows us to reach mathematical conclusions about the behavior; these conclusions can be interpreted to help a decision maker plan for the future. Most models simp...

  10. Concepts of relative sample outlier (RSO) and weighted sample similarity (WSS) for improving performance of clustering genes: co-function and co-regulation.

    Science.gov (United States)

    Bhattacharya, Anindya; Chowdhury, Nirmalya; De, Rajat K

    2015-01-01

    Performance of clustering algorithms is largely dependent on the selected similarity measure. Efficiency in handling outliers is a major contributor to the success of a similarity measure: the better a similarity measure handles outliers when measuring similarity between genes, the better the clustering algorithm will perform in forming biologically relevant groups of genes. In the present article, we discuss the problem of handling outliers with different existing similarity measures and introduce the concept of the Relative Sample Outlier (RSO). We formulate a new similarity measure, called Weighted Sample Similarity (WSS), incorporate it into Euclidean distance and the Pearson correlation coefficient, and then use these in various clustering and biclustering algorithms to group different gene expression profiles. Our results suggest that WSS improves the performance, in terms of finding biologically relevant groups of genes, of all the considered clustering algorithms.
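    The abstract does not give the exact WSS formulation, but the general idea can be sketched under stated assumptions: per-sample weights that shrink the influence of outlier samples, folded into Euclidean distance and the Pearson correlation. The `sample_weights` scheme below is an illustrative assumption, not the published measure:

    ```python
    import numpy as np

    def sample_weights(x, y, c=2.0):
        """Hypothetical per-sample weights: samples whose values deviate
        strongly from the rest of either profile (relative outliers) get
        small weights. Illustrative only, not the paper's WSS formula."""
        r = (np.abs(x - np.median(x)) / (np.std(x) + 1e-12)
             + np.abs(y - np.median(y)) / (np.std(y) + 1e-12))
        w = 1.0 / (1.0 + (r / c) ** 2)  # smooth downweighting of outliers
        return w / w.sum()

    def weighted_euclidean(x, y):
        """Euclidean distance with per-sample weights."""
        w = sample_weights(x, y)
        return float(np.sqrt(np.sum(w * (x - y) ** 2)))

    def weighted_pearson(x, y):
        """Pearson correlation with per-sample weights."""
        w = sample_weights(x, y)
        mx, my = np.sum(w * x), np.sum(w * y)
        cov = np.sum(w * (x - mx) * (y - my))
        sx = np.sqrt(np.sum(w * (x - mx) ** 2))
        sy = np.sqrt(np.sum(w * (y - my) ** 2))
        return float(cov / (sx * sy))

    # Two expression profiles that agree except for one outlier sample in y
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([1.1, 2.0, 2.9, 4.2, 50.0])
    print(weighted_pearson(x, y), weighted_euclidean(x, y))
    ```

    The weighted measures can then be dropped into any clustering algorithm that accepts a custom distance or similarity function.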

  11. Improving the Nomad microscopic walker model

    NARCIS (Netherlands)

    Campanella, M.C.

    2010-01-01

    This paper presents the results of two calibration efforts and improvements of the Nomad microscopic walker model. Each calibration consisted of comparing the outcomes of 19 sets of model parameters with results from laboratory experiments. Three different flows were used in the calibrations: bidirec

  12. Efficient Adoption and Assessment of Multiple Process Improvement Reference Models

    Directory of Open Access Journals (Sweden)

    Simona Jeners

    2013-06-01

    A variety of reference models such as CMMI, COBIT or ITIL support IT organizations in improving their processes. These process improvement reference models (IRMs) cover different domains such as IT development, IT services or IT governance, but also share some similarities. As there are organizations that address multiple domains and need to coordinate their processes in their improvement, we present MoSaIC, an approach to support organizations in efficiently adopting and conforming to multiple IRMs. Our solution realizes a semantic integration of IRMs based on common meta-models. The resulting IRM integration model enables organizations to efficiently implement and assess multiple IRMs and to benefit from synergy effects.

  13. Improvements in continuum modeling for biomolecular systems

    CERN Document Server

    Qiao, Yu

    2015-01-01

    Modeling of biomolecular systems plays an essential role in understanding biological processes, such as ionic flow across channels, protein modification or interaction, and cell signaling. The continuum model described by the Poisson-Boltzmann (PB)/Poisson-Nernst-Planck (PNP) equations has made great contributions towards simulation of these processes. However, the model has shortcomings in its commonly used form and cannot capture (or cannot accurately capture) some important physical properties of biological systems. Considerable efforts have been made to improve the continuum model to account for discrete particle interactions and to make progress in numerical methods to provide accurate and efficient simulation. This review will summarize recent main improvements in continuum modeling for biomolecular systems, with focus on the size-modified models, the coupling of the classical density functional theory and PNP equations, the coupling of polar and nonpolar interactions, and numerical progress.

  14. Breast cancer stories on the internet : improving search facilities to help patients find stories of similar others

    NARCIS (Netherlands)

    Overberg, Regina Ingrid

    2013-01-01

    The primary aim of this thesis is to gain insight into which search facilities for spontaneously published stories facilitate breast cancer patients in finding stories by other patients in a similar situation. According to the narrative approach, social comparison theory, and social cognitive theory

  15. Study for the design method of multi-agent diagnostic system to improve diagnostic performance for similar abnormality

    Energy Technology Data Exchange (ETDEWEB)

    Minowa, Hirotsugu; Gofuku, Akio [Okayama University, Okayama (Japan)

    2014-08-15

    Accidents at industrial plants cause large human, economic, and social-credibility losses. Recently, diagnostic methods using machine-learning techniques such as support vector machines have been expected to detect the occurrence of abnormality in a plant early and correctly. It has been reported that such diagnostic machines have high accuracy in diagnosing the operating state of an industrial plant when a single abnormality occurs. However, each diagnostic machine in a multi-agent diagnostic system may misdiagnose similar abnormalities as the same abnormality as the number of abnormalities to diagnose increases. As a consequence, a single diagnostic machine may show higher diagnostic performance than a multi-agent diagnostic system, because decision-making that accounts for misdiagnosis is difficult. Therefore, we study a design method for multi-agent diagnostic systems that diagnose similar abnormalities correctly. This method aims to automatically generate a diagnostic system in which the generation process and the locations of diagnostic machines are optimized to correctly diagnose similar abnormalities, which are identified from the similarity of process signals by statistical methods. This paper explains our design method and reports the results of evaluating the method on process data from the fast-breeder reactor Monju.

  16. Differences in Effects of Zuojin Pills (左金丸) and Its Similar Formulas on Wei Cold Model in Rats

    Institute of Scientific and Technical Information of China (English)

    赵艳玲; 史文丽; 山丽梅; 王伽伯; 赵海平; 肖小河

    2009-01-01

    Objective: To explore the effects of Zuojin Pills (左金丸) and its similar formulas on the stomach cold syndrome in a Wei cold model in rats. Methods: The rat Wei cold model was established by intragastric administration of glacial NaOH, and the gastric mucosa injury indices, together with the levels of motilin and gastrin in the stomach, were determined. The preventive and curative effects of Zuojin Pills and its similar formulas on gastric mucosa injury were investigated. Results: Zuojin Pills and its similar formul...

  17. An application of superpositions of two-state Markovian sources to the modelling of self-similar behaviour

    DEFF Research Database (Denmark)

    Andersen, Allan T.; Nielsen, Bo Friis

    1997-01-01

    We present a modelling framework and a fitting method for modelling second-order self-similar behaviour with the Markovian arrival process (MAP). The fitting method is based on fitting to the autocorrelation function of counts of a second-order self-similar process. It is shown that with this fitting algorithm it is possible to closely match the autocorrelation function of counts for a second-order self-similar process over 3-5 time scales with 8-16 state MAPs with a very simple structure, i.e. a superposition of 3 and 4 interrupted Poisson processes (IPP), respectively, and a Poisson process.
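    A minimal simulation can illustrate the structure described in this record: counts from a superposition of interrupted Poisson processes switching on different time scales, plus a Poisson stream. The rates and switching probabilities are illustrative assumptions, and the discrete-time gating is a simplification of a true MAP:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def ipp_counts(n_slots, lam, p_off, p_on):
        """Per-slot counts from an interrupted Poisson process: a Poisson
        source of rate `lam` gated by a two-state on/off Markov chain.
        Discrete-time approximation for illustration; parameters assumed."""
        counts = np.zeros(n_slots, dtype=np.int64)
        on = True
        for t in range(n_slots):
            if on:
                counts[t] = rng.poisson(lam)
                if rng.random() < p_off:
                    on = False
            elif rng.random() < p_on:
                on = True
        return counts

    # Superposition of three IPPs on different time scales plus a Poisson
    # stream -- the structure used to approximate self-similar counts
    n = 20000
    total = (ipp_counts(n, 5.0, 0.1, 0.1)
             + ipp_counts(n, 5.0, 0.01, 0.01)
             + ipp_counts(n, 5.0, 0.001, 0.001)
             + rng.poisson(2.0, size=n))

    def acf(x, lags):
        """Sample autocorrelation of the count sequence at the given lags."""
        d = x - x.mean()
        var = np.dot(d, d)
        return [float(np.dot(d[:-k], d[k:]) / var) for k in lags]

    print(acf(total, [1, 10, 100]))  # correlations persist across time scales
    ```

    Each IPP contributes correlation on its own switching time scale, which is why a small superposition can mimic a slowly decaying autocorrelation over several scales.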

  18. Genome-Wide Expression Profiling of Five Mouse Models Identifies Similarities and Differences with Human Psoriasis

    NARCIS (Netherlands)

    Swindell, William R.; Johnston, Andrew; Carbajal, Steve; Han, Gangwen; Wohn, Christian; Lu, Jun; Xing, Xianying; Nair, Rajan P.; Voorhees, John J.; Elder, James T.; Wang, Xiao-Jing; Sano, Shigetoshi; Prens, Errol P.; DiGiovanni, John; Pittelkow, Mark R.; Ward, Nicole L.; Gudjonsson, Johann E.

    2011-01-01

    Development of a suitable mouse model would facilitate the investigation of pathomechanisms underlying human psoriasis and would also assist in development of therapeutic treatments. However, while many psoriasis mouse models have been proposed, no single model recapitulates all features of the huma

  19. Genome-wide expression profiling of five mouse models identifies similarities and differences with human psoriasis

    NARCIS (Netherlands)

    W.R. Swindell (William R.); A. Johnston (Andrew); S. Carbajal (Steve); G. Han (Gangwen); C.T. Wohn (Christopher); J. Lu (Jun); X. Xing (Xianying); R.P. Nair (Rajan P.); J.J. Voorhees (John); J.T. Elder (James); X.J. Wang (Xian Jiang); S. Sano (Shigetoshi); E.P. Prens (Errol); J. DiGiovanni (John); M.R. Pittelkow (Mark R.); N.L. Ward (Nicole); J.E. Gudjonsson (Johann Eli)

    2011-01-01

    Development of a suitable mouse model would facilitate the investigation of pathomechanisms underlying human psoriasis and would also assist in development of therapeutic treatments. However, while many psoriasis mouse models have been proposed, no single model recapitulates all features


  1. Consequences of team charter quality: Teamwork mental model similarity and team viability in engineering design student teams

    Science.gov (United States)

    Conway Hughston, Veronica

    Since 1996 ABET has mandated that undergraduate engineering degree granting institutions focus on learning outcomes such as professional skills (i.e. solving unstructured problems and working in teams). As a result, engineering curricula were restructured to include team based learning---including team charters. Team charters were diffused into engineering education as one of many instructional activities to meet the ABET accreditation mandates. However, the implementation and execution of team charters into engineering team based classes has been inconsistent and accepted without empirical evidence of the consequences. The purpose of the current study was to investigate team effectiveness, operationalized as team viability, as an outcome of team charter implementation in an undergraduate engineering team based design course. Two research questions were the focus of the study: a) What is the relationship between team charter quality and viability in engineering student teams, and b) What is the relationship among team charter quality, teamwork mental model similarity, and viability in engineering student teams? Thirty-eight intact teams, 23 treatment and 15 comparison, participated in the investigation. Treatment teams attended a team charter lecture, and completed a team charter homework assignment. Each team charter was assessed and assigned a quality score. Comparison teams did not join the lecture, and were not asked to create a team charter. All teams completed each data collection phase: a) similarity rating pretest; b) similarity posttest; and c) team viability survey. Findings indicate that team viability was higher in teams that attended the lecture and completed the charter assignment. Teams with higher quality team charter scores reported higher levels of team viability than teams with lower quality charter scores. 
Lastly, no evidence was found to support teamwork mental model similarity as a partial mediator of the effect of team charter quality on team viability.

  2. Improving the physiological realism of experimental models.

    Science.gov (United States)

    Vinnakota, Kalyan C; Cha, Chae Y; Rorsman, Patrik; Balaban, Robert S; La Gerche, Andre; Wade-Martins, Richard; Beard, Daniel A; Jeneson, Jeroen A L

    2016-04-06

    The Virtual Physiological Human (VPH) project aims to develop integrative, explanatory and predictive computational models (C-Models) as numerical investigational tools to study disease, identify and design effective therapies and provide an in silico platform for drug screening. Ultimately, these models rely on the analysis and integration of experimental data. As such, the success of VPH depends on the availability of physiologically realistic experimental models (E-Models) of human organ function that can be parametrized to test the numerical models. Here, the current state of suitable E-models, ranging from in vitro non-human cell organelles to in vivo human organ systems, is discussed. Specifically, challenges and recent progress in improving the physiological realism of E-models that may benefit the VPH project are highlighted and discussed using examples from the field of research on cardiovascular disease, musculoskeletal disorders, diabetes and Parkinson's disease.

  3. A statistical approach to modelling permafrost distribution in the European Alps or similar mountain ranges

    Directory of Open Access Journals (Sweden)

    L. Boeckli

    2012-01-01

    Estimates of permafrost distribution in mountain regions are important for the assessment of climate change effects on natural and human systems. In order to make permafrost analyses and the establishment of guidelines for e.g. construction or hazard assessment comparable and compatible between regions, one consistent and traceable model for the entire Alpine domain is required. For the calibration of statistical models, the scarcity of suitable and reliable information about the presence or absence of permafrost makes the use of large areas attractive due to the larger database available.

    We present a strategy and method for modelling permafrost distribution of entire mountain regions and provide the results of statistical analyses and model calibration for the European Alps. Starting from an integrated model framework, two statistical sub-models are developed, one for debris-covered areas (debris model) and one for steep bedrock (rock model). They are calibrated using rock glacier inventories and rock surface temperatures. To support the later generalization to surface characteristics other than those available for calibration, so-called offset terms have been introduced into the model that allow doing this in a transparent and traceable manner.

    For the debris model, a generalized linear mixed-effects model (GLMM) is used to predict the probability of a rock glacier being intact as opposed to relict. It is based on the explanatory variables mean annual air temperature (MAAT), potential incoming solar radiation (PISR) and the mean annual sum of precipitation (PRECIP), and achieves an excellent discrimination (area under the receiver-operating characteristic, AUROC = 0.91). Surprisingly, the probability of a rock glacier being intact is positively associated with increasing PRECIP for given MAAT and PISR conditions. The rock model is based on a linear regression and was calibrated with mean annual rock surface temperatures (MARST). The

  4. From the similarities between neutrons and radon to advanced radon-detection and improved cold fusion neutron-measurements

    Science.gov (United States)

    Tommasino, L.; Espinosa, G.

    2014-07-01

    Neutrons and radon are both ubiquitous in the earth's crust. Neutrons of terrestrial origin are strongly related to radon, since they originate mainly from interactions between alpha particles from the decays of radioactive gases (namely radon and thoron) and light nuclei. Since the early studies in the field of neutrons, radon gas was used to produce neutrons by (α, n) reactions in beryllium. Another important similarity between radon and neutrons is that they can be detected only through the radiation produced by decays or by nuclear reactions, respectively. The charged particles from these two distinct nuclear processes are often the same (namely alpha particles). A typical neutron detector is based on a radiator facing an alpha-particle detector, as in the case of a neutron film badge. Based on the similarity between neutrons and radon, a film badge for radon has been recently proposed. The radon film badge, in addition to being similar, may even be identical to the neutron film badge. For these reasons, neutron measurements can easily be affected by the presence of unpredictably large radon concentrations. In several cold fusion experiments, CR-39 plastic films (typically used in radon and neutron film badges) have been the detectors of choice for measuring neutrons. In this paper, attempts are made to show that most of these neutron measurements might have been affected by the presence of large radon concentrations.

  5. Quality of life and sleep quality are similarly improved after aquatic or dry-land aerobic training in patients with type 2 diabetes: A randomized clinical trial.

    Science.gov (United States)

    S Delevatti, Rodrigo; Schuch, Felipe Barreto; Kanitz, Ana Carolina; Alberton, Cristine L; Marson, Elisa Corrêa; Lisboa, Salime Chedid; Pinho, Carolina Dertzbocher Feil; Bregagnol, Luciana Peruchena; Becker, Maríndia Teixeira; Kruel, Luiz Fernando M

    2017-09-06

    To compare the effects of two aerobic training models, in water and on dry land, on quality of life, depressive symptoms and sleep quality in patients with type 2 diabetes. Randomized clinical trial. Thirty-five patients with type 2 diabetes were randomly assigned to an aquatic aerobic training group (n=17) or a dry-land aerobic training group (n=18). Training lasted 12 weeks, with three weekly sessions (45 min/session) and intensity progressing from 85% to 100% of the heart rate at the anaerobic threshold during the interventions. All outcomes were evaluated at baseline and 12 weeks later. In per-protocol analysis, the physical and psychological domains of quality of life improved in both groups (p < 0.05). Quality of life and sleep quality improved in both groups (p < 0.05). The aquatic environment provides effects similar to those of aerobic training in a dry-land environment on quality of life, depressive symptoms and sleep quality in patients with type 2 diabetes. Clinical trial reg. no. NCT01956357, clinicaltrials.gov. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  6. On use of the alpha stable self-similar stochastic process to model aggregated VBR video traffic

    Institute of Scientific and Technical Information of China (English)

    Huang Tianyun

    2006-01-01

    The alpha-stable self-similar stochastic process has been proved an effective model for highly variable data traffic. This paper offers a deep insight into some special issues and considerations in using the process to model aggregated VBR video traffic. Different methods to estimate the stability parameter α and the self-similarity parameter H are compared. Procedures to generate linear fractional stable noise (LFSN) and alpha-stable random variables are provided. Model construction and quantitative comparisons with fractional Brownian motion (FBM) and real traffic are also examined. Open problems and future directions are also discussed.
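    Generating the alpha-stable random variables mentioned in this record is commonly done with the Chambers-Mallows-Stuck method; the sketch below follows that standard construction (the paper's own generation procedure may differ):

    ```python
    import math
    import random

    def alpha_stable(alpha, beta=0.0, rng=random):
        """One standard alpha-stable variate via the Chambers-Mallows-Stuck
        method (0 < alpha <= 2, -1 <= beta <= 1). Illustrative sketch."""
        u = rng.uniform(-math.pi / 2, math.pi / 2)  # uniform angle
        w = rng.expovariate(1.0)                    # unit exponential
        if alpha == 1.0:
            return (2 / math.pi) * ((math.pi / 2 + beta * u) * math.tan(u)
                    - beta * math.log(w * math.cos(u) / (math.pi / 2 + beta * u)))
        zeta = -beta * math.tan(math.pi * alpha / 2)
        xi = math.atan(-zeta) / alpha
        return ((1 + zeta ** 2) ** (1 / (2 * alpha))
                * math.sin(alpha * (u + xi)) / math.cos(u) ** (1 / alpha)
                * (math.cos(u - alpha * (u + xi)) / w) ** ((1 - alpha) / alpha))

    # Heavy-tailed samples for alpha < 2 (alpha = 2 recovers the Gaussian case)
    rng = random.Random(42)
    samples = [alpha_stable(1.7, rng=rng) for _ in range(5)]
    print(samples)
    ```

    With beta = 0 the variates are symmetric around zero; lowering alpha below 2 thickens the tails, which is what makes the process suitable for highly variable traffic.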

  7. An Improved Valuation Model for Technology Companies

    Directory of Open Access Journals (Sweden)

    Ako Doffou

    2015-06-01

    This paper estimates some of the parameters of the Schwartz and Moon (2001) model using cross-sectional data. Stochastic costs, future financing, capital expenditures and depreciation are taken into account. Some special conditions are also set: the speed-of-adjustment parameters are equal; the implied half-life of the sales growth process is linked to analyst forecasts; and the risk-adjustment parameter is inferred from the company’s observed stock price beta. The model is illustrated in the valuation of Google, Amazon, eBay, Facebook and Yahoo. The improved model is far superior to the Schwartz and Moon (2001) model.

  8. An improved computational constitutive model for glass

    Science.gov (United States)

    Holmquist, Timothy J.; Johnson, Gordon R.; Gerlach, Charles A.

    2017-01-01

    In 2011, Holmquist and Johnson presented a model for glass subjected to large strains, high strain rates and high pressures. It was later shown that this model produced solutions that were severely mesh dependent, converging to a solution that was much too strong. This article presents an improved model for glass that uses a new approach to represent the interior and surface strength that is significantly less mesh dependent. This new formulation allows the laboratory data to be accurately represented (including the high tensile strength observed in plate-impact spall experiments) and produces converged solutions that are in good agreement with ballistic data. The model also includes two new features: one decouples the damage model from the strength model, providing more flexibility in defining the onset of permanent deformation; the other provides a variable shear modulus that is dependent on the pressure. This article presents a review of the original model, a description of the improved model and a comparison of computed and experimental results for several sets of ballistic data. Of special interest are computed and experimental results for two impacts onto a single target, and the ability to compute the damage velocity in agreement with experimental data. This article is part of the themed issue 'Experimental testing and modelling of brittle materials at high strain rates'.

  9. An Improved Model for the Turbulent PBL

    Science.gov (United States)

    Cheng, Y.; Canuto, V. M.; Howard, A. M.; Hansen, James E. (Technical Monitor)

    2001-01-01

    Second-order turbulence models of the Mellor and Yamada type have been widely used to simulate the PBL. It is, however, known that these models have several deficiencies. For example, they all predict a critical Richardson number which is about four times smaller than the Large Eddy Simulation (LES) data, they are unable to match the surface data, and they predict a boundary layer height lower than expected. In the present model, we show that these difficulties are all overcome by a single new physical input: the use of the most complete expressions for both the pressure-velocity and the pressure-temperature correlations presently available. Each of the new terms represents a physical process that was not accounted for by previous models. The new model is presented in three different levels according to Mellor and Yamada's terminology, with new, ready-to-use expressions for the turbulent moments. We show that the new model reproduces several experimental and LES data better than previous models. As far as the PBL is concerned, we show that the model reproduces both the Kansas data as analyzed by Businger et al. in the context of Monin-Obukhov similarity theory for smaller Richardson numbers, and the LES and laboratory data up to Richardson numbers of order unity. We also show that the model yields a higher PBL height than the previous models.

  10. A model for cross-referencing and calculating similarity of metal alloys

    Directory of Open Access Journals (Sweden)

    Svetlana Pocajt

    2013-12-01

    This paper presents an innovative model for the comparison and cross-referencing of metal alloys, in order to determine their interchangeability in engineering, manufacturing and material sourcing. The model uses a large alloy database and a statistical approach to estimate missing composition and mechanical-property parameters and to calculate property intervals. A classification of metals and fuzzy logic are then applied to compare metal alloys. The model and its algorithm have been implemented and tested in real-life applications. In this paper, an application of the model to finding unknown equivalent metals by comparing their compositions and mechanical properties in a very large metals database is described, and possibilities for further research and new applications are presented.

  11. Analytic solution of a model of language competition with bilingualism and interlinguistic similarity

    CERN Document Server

    Otero-Espinar, Victoria; Nieto, Juan J; Mira, Jorge

    2013-01-01

    An in-depth analytic study of a model of language dynamics is presented: a model which tackles the problem of the coexistence of two languages within a closed community of speakers, taking into account bilingualism and incorporating a parameter to measure the distance between languages. Previous numerical simulations of the model showed that coexistence might lead to survival of both languages within monolingual speakers along with a bilingual community, or to extinction of the weakest tongue, depending on different parameters. In this paper, that study is completed with thorough analytical calculations that settle the results in a robust way, and previous results are refined with some modifications. From the present analysis it is possible to almost completely assay the number and nature of the equilibrium points of the model, which depend on its parameters, as well as to build a phase space based on them. We also obtain conclusions on the way the languages evolve with time. Our rigorous considerations also sug...

  12. Modelers and policymakers : improving the relationships.

    Energy Technology Data Exchange (ETDEWEB)

    Karas, Thomas H.

    2004-06-01

    On April 22 and 23, 2004, a diverse group of 14 policymakers, modelers, analysts, and scholars met with some 22 members of the Sandia National Laboratories staff to explore ways in which the relationships between modelers and policymakers in the energy and environment fields (with an emphasis on energy) could be made more productive for both. This report is not a transcription of that workshop, but draws very heavily on its proceedings. It first describes the concept of modeling, the varying ways in which models are used to support policymaking, and the institutional context for those uses. It then proposes that the goal of modelers and policymakers should be a relationship of mutual trust, built on a foundation of communication, supported by the twin pillars of policy relevance and technical credibility. The report suggests 20 guidelines to help modelers improve the relationship, followed by 10 guidelines to help policymakers toward the same goal.

  13. Modeling and analysis of self-similar traffic source based on fractal-binomial-noise-driven Poisson process

    Institute of Scientific and Technical Information of China (English)

    ZHANG Di; ZHANG Min; YE Pei-da

    2006-01-01

    This article explores the short-range dependence (SRD) and the long-range dependence (LRD) of self-similar traffic generated by the fractal-binomial-noise-driven Poisson process (FBNDP) model, with emphasis on the former. By simulation, the decaying trends of SRD with increasing Hurst value and peak rate are obtained. After a comprehensive analysis of the accuracy of self-similarity intensity, the optimal range of peak rate is determined by taking into account the time cost, the accuracy of self-similarity intensity, and the effect of SRD.

  14. CREATING PRODUCT MODELS FROM POINT CLOUD OF CIVIL STRUCTURES BASED ON GEOMETRIC SIMILARITY

    Directory of Open Access Journals (Sweden)

    N. Hidaka

    2015-05-01

    Existing civil structures must be maintained in order to ensure their expected lifelong serviceability. Careful rehabilitation and maintenance planning plays a significant role in that effort. Recently, construction information modelling (CIM) techniques, such as product models, are increasingly being used to facilitate structure maintenance. Using this methodology, laser scanning systems can provide point cloud data that are used to produce highly accurate and dense representations of civil structures. However, while numerous methods for creating a single surface exist, part decomposition is required in order to create product models consisting of more than one part. This research aims at the development of a surface reconstruction system that utilizes point cloud data efficiently in order to create complete product models. The research proposes applying local shape matching to the input point clouds in order to define a set of representative parts. These representative parts are then polygonized and copied to locations where the same types of parts exist. The results of our experiments show that the proposed method can efficiently create product models using input point cloud data.

  15. Annealed Ising model with site dilution on self-similar structures

    Science.gov (United States)

    Silva, V. S. T.; Andrade, R. F. S.; Salinas, S. R.

    2014-11-01

    We consider an Ising model on the triangular Apollonian network (AN), with a thermalized distribution of vacant sites. The statistical problem is formulated in a grand canonical ensemble, in terms of the temperature T and a chemical potential μ associated with the concentration of active magnetic sites. We use a well-known transfer-matrix method, with a number of adaptations, to write recursion relations between successive generations of this hierarchical structure. We also investigate the analogous model on the diamond hierarchical lattice (DHL). From the numerical analysis of the recursion relations, we obtain various thermodynamic quantities. In the μ →∞ limit, we reproduce the results for the uniform models: in the AN, the system is magnetically ordered at all temperatures, while in the DHL there is a ferromagnetic-paramagnetic transition at a finite value of T . Magnetic ordering, however, is shown to disappear for sufficiently large negative values of the chemical potential.

  16. Exact Solutions for Stokes' Flow of a Non-Newtonian Nanofluid Model: A Lie Similarity Approach

    Science.gov (United States)

    Aziz, Taha; Aziz, A.; Khalique, C. M.

    2016-07-01

    The fully developed time-dependent flow of an incompressible, thermodynamically compatible non-Newtonian third-grade nanofluid is investigated. The classical Stokes model is considered in which the flow is generated due to the motion of the plate in its own plane with an impulsive velocity. The Lie symmetry approach is utilised to convert the governing nonlinear partial differential equation into different linear and nonlinear ordinary differential equations. The reduced ordinary differential equations are then solved by using the compatibility and generalised group method. Exact solutions for the model equation are deduced in the form of closed-form exponential functions which are not available in the literature before. In addition, we also derived the conservation laws associated with the governing model. Finally, the physical features of the pertinent parameters are discussed in detail through several graphs.

  17. Theoretical Model for the Formation of Caveolae and Similar Membrane Invaginations

    Science.gov (United States)

    Sens, Pierre; Turner, Matthew S.

    2004-01-01

    We study a physical model for the formation of bud-like invaginations on fluid lipid membranes under tension, and apply this model to caveolae formation. We demonstrate that budding can be driven by membrane-bound proteins, provided that they exert asymmetric forces on the membrane that give rise to bending moments. In particular, caveolae formation does not necessarily require forces to be applied by the cytoskeleton. Our theoretical model is able to explain several features observed experimentally in caveolae, where proteins in the caveolin family are known to play a crucial role in the formation of caveolae buds. These include (1) the formation of caveolae buds with sizes in the 100-nm range and (2) the observation that certain N- and C-terminal deletion mutants result in vesicles that are an order of magnitude larger. Finally, we discuss the possible origin of the morphological striations that are observed on the surfaces of the caveolae. PMID:15041647

  18. Self-similar transformations of lattice-Ising models at critical temperatures

    CERN Document Server

    Feng, You-gang

    2012-01-01

    We classify geometric blocks that serve as spin carriers into simple blocks and compound blocks according to their topological connectivity, define their fractal dimensions and describe the relevant transformations. Using the hierarchical property of the transformations and a block-spin scaling law, we obtain a relation between the block spin and its carrier's fractal dimension. By mapping, we set up a block-spin Gaussian model and obtain a formula connecting the critical point and the minimal fractal dimension of the carrier, which guarantees the uniqueness of a fixed point corresponding to the critical point, turning the complicated calculation of the critical point into the simple one of the minimal fractal dimension. The numerical results of critical points with high accuracy for five conventional lattice-Ising models show that our method is very effective and may be suitable for all lattice-Ising models. The origin of fluctuations in structure at the critical temperature is discussed. Our method not only explains the problems met in the renor...

  19. Improved transition models for cepstral trajectories

    CSIR Research Space (South Africa)

    Badenhorst, J

    2012-11-01

    We improve on a piece-wise linear model of the trajectories of Mel Frequency Cepstral Coefficients, which are commonly used as features in Automatic Speech Recognition. For this purpose, we have created a very clean single-speaker corpus, which...

  20. School Improvement Model to Foster Student Learning

    Science.gov (United States)

    Rulloda, Rudolfo Barcena

    2011-01-01

    Many classroom teachers are still using the traditional teaching methods. The traditional teaching methods are a one-way learning process, where teachers introduce subject contents such as language arts, English, mathematics, science, and reading separately. However, the school improvement model takes into account that all students have…

  1. Improving Representational Competence with Concrete Models

    Science.gov (United States)

    Stieff, Mike; Scopelitis, Stephanie; Lira, Matthew E.; DeSutter, Dane

    2016-01-01

    Representational competence is a primary contributor to student learning in science, technology, engineering, and math (STEM) disciplines and an optimal target for instruction at all educational levels. We describe the design and implementation of a learning activity that uses concrete models to improve students' representational competence and…

  2. Mechanisms of solvolyses of acid chlorides and chloroformates. Chloroacetyl and phenylacetyl chloride as similarity models.

    Science.gov (United States)

    Bentley, T William; Harris, H Carl; Ryu, Zoon Ha; Lim, Gui Taek; Sung, Dae Dong; Szajda, Stanley R

    2005-10-28

    [reaction: see text] Rate constants and product selectivities (S = ([ester product]/[acid product]) × ([water]/[alcohol solvent])) are reported for solvolyses of chloroacetyl chloride (3) at -10 °C and phenylacetyl chloride (4) at 0 °C in ethanol/water and methanol/water mixtures. Additional kinetic data are reported for solvolyses in acetone/water, 2,2,2-trifluoroethanol (TFE)/water, and TFE/ethanol mixtures. Selectivities and solvent effects for 3, including the kinetic solvent isotope effect (KSIE) of 2.18 for methanol, are similar to those for solvolyses of p-nitrobenzoyl chloride (1, Z = NO(2)); rate constants in acetone/water are consistent with a third-order mechanism, and rates and products in ethanol/water and methanol/water mixtures can be explained quantitatively by competing third-order mechanisms in which one molecule of solvent (alcohol or water) acts as a nucleophile and another acts as a general base (an addition/elimination reaction channel). Selectivities increase for 3 as water is added to alcohol. Solvent effects on rate constants for solvolyses of 3 are very similar to those of methyl chloroformate, but acetyl chloride shows a lower KSIE and a higher sensitivity to solvent-ionizing power, explained by a change to an S(N)2/S(N)1 (ionization) reaction channel. Solvolyses of 4 undergo a change from the addition/elimination channel in ethanol to the ionization channel in aqueous ethanol (<80% v/v alcohol). The reasons for the change in reaction channels are discussed in terms of the gas-phase stabilities of acylium ions, calculated using Gaussian 03 (HF/6-31G(d), B3LYP/6-31G(d), and B3LYP/6-311G(d,p) MO theory).
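    The selectivity S defined above is just a product of two ratios. With purely illustrative product and solvent quantities (not values from the paper), it can be computed directly:

```python
def selectivity(ester, acid, water, alcohol):
    """S = ([ester]/[acid]) * ([water]/[alcohol]); all four quantities
    must be in mutually consistent units (e.g. molar)."""
    return (ester / acid) * (water / alcohol)

# hypothetical outcome of a solvolysis in 70:30 (v/v) alcohol-water
s = selectivity(ester=0.60, acid=0.40, water=30.0, alcohol=70.0)
```

    An S above 1 would indicate that the alcohol component is the more effective nucleophile per unit concentration; here the illustrative numbers give S below 1.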

  3. Web Similarity

    NARCIS (Netherlands)

    Cohen, A.R.; Vitányi, P.M.B.

    2015-01-01

    Normalized web distance (NWD) is a similarity or normalized semantic distance based on the World Wide Web or any other large electronic database, for instance Wikipedia, and a search engine that returns reliable aggregate page counts. For sets of search terms the NWD gives a similarity on a scale from 0 (identical) to 1 (completely different).
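    Assuming the usual form of the NWD from this line of work, with f(x), f(y) the aggregate page counts for each term, f(x,y) the count for both terms together, and N the (effective) index size, a sketch with hypothetical counts:

```python
import math

def nwd(fx, fy, fxy, n):
    """Normalized web distance from aggregate page counts:
    (max(log fx, log fy) - log fxy) / (log n - min(log fx, log fy))."""
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# hypothetical page counts returned by some search engine
d = nwd(fx=9000, fy=3000, fxy=2000, n=1_000_000)
```

    The closer the joint count f(x,y) is to the individual counts, the closer the distance is to 0; terms that rarely co-occur push it toward 1.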

  4. Merging tree algorithm of growing voids in self-similar and CDM models

    NARCIS (Netherlands)

    Russell, Esra

    2013-01-01

    Observational studies show that voids are prominent features of the large-scale structure of the present-day Universe. Even though their emergence from the primordial density perturbations and their evolutionary patterns differ from those of dark matter haloes, N-body simulations and theoretical models have shown t

  5. Improvements in accuracy of dense OPC models

    Science.gov (United States)

    Kallingal, Chidam; Oberschmidt, James; Viswanathan, Ramya; Abdo, Amr; Park, OSeo

    2008-10-01

    Performing model-based optical proximity correction (MBOPC) on layouts has become an integral part of patterning advanced integrated circuits. Earlier technologies used sparse OPC, whose run times explode as the density of layouts increases. With the move to the 45 nm technology node, this increase in run time has resulted in a shift to dense-simulation OPC, which is pixel-based. The dense approach becomes more efficient at the 45 nm technology node and beyond. New OPC model forms can be used with the dense simulation OPC engine, providing the greater accuracy required by smaller technology nodes. Parameters in the optical model have to be optimized to achieve the required accuracy. Dense OPC uses a resist model with a different set of parameters than sparse OPC. The default search ranges used in the optimization of these resist parameters do not always result in the best accuracy. However, it is possible to improve the accuracy of the resist models by understanding the restrictions placed on the search ranges of the physical parameters during optimization. This paper presents results showing the correlation between the accuracy of the models and some of these optical and resist parameters. The results show that better optimization can improve the model fitness of features in both the calibration and verification sets.

  6. HEMETβ: improvement of hepatocyte metabolism mathematical model.

    Science.gov (United States)

    Orsi, G; De Maria, C; Guzzardi, M; Vozzi, F; Vozzi, G

    2011-10-01

    This article describes the hepatocyte metabolism mathematical model (HEMETβ), an improved version of HEMET, an effective and versatile virtual cell model based on hepatic cell metabolism. HEMET is based on a set of non-linear differential equations, implemented in Simulink®, which describe the biochemical reactions and the energetic state of the cell, and completely mimic the principal metabolic pathways in hepatic cells. The cell energy function and the modular structure are the core of this model. HEMETβ, like the HEMET model, describes hepatic cellular metabolism under standard conditions (cell culture in a plastic multi-well plate placed in an incubator at 37 °C with 5% CO2) and with excess substrate concentrations. The main improvements in HEMETβ are the introduction of Michaelis-Menten models for reversible reactions and for enzymatic inhibition. In addition, we eliminated hard non-linearities and modelled cell proliferation and every single amino acid degradation pathway. All these innovations, combined with a user-friendly aspect, allow researchers to create new cell types and validate new experimental protocols just by varying 'peripheral' pathways or model inputs.
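    The Michaelis-Menten kinetics with inhibition that the improvements refer to have a standard closed form; a minimal sketch of the competitive-inhibition variant, with parameter values chosen for illustration rather than taken from HEMETβ:

```python
def mm_rate(s, vmax, km, inhibitor=0.0, ki=float("inf")):
    """Michaelis-Menten reaction rate with optional competitive
    inhibition: v = Vmax * S / (Km * (1 + I/Ki) + S)."""
    return vmax * s / (km * (1.0 + inhibitor / ki) + s)

# illustrative parameters: at S = Km and no inhibitor, v = Vmax / 2
v0 = mm_rate(s=0.5, vmax=1.0, km=0.5)
v1 = mm_rate(s=0.5, vmax=1.0, km=0.5, inhibitor=1.0, ki=1.0)
```

    A competitive inhibitor effectively raises Km, so v1 is lower than v0 at the same substrate concentration; in a Simulink-style model each such term would feed one state equation.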

  7. Engineering practice of reducing ground subsidence by grouting into overburden bed-separated and similar model experiment

    Institute of Scientific and Technical Information of China (English)

    GAO Yan-fa; ZHONG Ya-ping; LI Jian-min; WANG Su-hua; ZHANG Qing-song

    2007-01-01

    A subsidence prediction theory for grouting into overburden bed separations was developed. Subsidence reduction by grouting was carried out on eight fully-mechanized top-coal caving faces, using continuous multiple-layer grouting to obtain experimental results on subsidence reduction under full mining. A similar-material model that can be dismantled under conditions of constant temperature and constant humidity was developed. The model was used to simulate the evolution of overburden bed separation under these constraints of temperature and humidity and, at the same time, to test the hardening process of the similar materials.

  8. A New Approach to Satisfy Dynamic Similarity for Model Submarine Maneuvers

    Science.gov (United States)

    2007-11-28

    scale jam recovery, a steady approach speed is required for RCM correlation maneuvers; the scaled model approach speed must be set accordingly.

  9. Study on Text Similarity Based on LDA Model%基于LDA模型的文本相似度研究

    Institute of Scientific and Technical Information of China (English)

    陈攀; 杨浩; 吕品; 王海晖

    2016-01-01

    The LDA topic model is an unsupervised learning model, proposed in recent years, with strong text-representation ability. Considering the limitations of traditional topic models when dealing with large-scale text corpora, a method for computing text similarity based on the LDA model is proposed. The corpus is modelled with LDA, the model parameters are estimated by Gibbs sampling, and each document is represented as a probability distribution over a fixed set of latent topics, from which the similarity between texts is computed. Finally, the K-means algorithm is used as the evaluation index for text similarity. Experimental results show that, compared with the LSI model, this method can effectively improve the accuracy of text similarity computation and the quality of text clustering.
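    The last step of the pipeline described above (comparing documents once LDA and Gibbs sampling have produced per-document topic distributions) can be sketched with a symmetric divergence. The choice of Jensen-Shannon similarity and the distributions below are illustrative assumptions, not details from the paper:

```python
import math

def js_similarity(p, q):
    """Similarity between two topic distributions as 1 - JSD (base-2
    logs, so the result lies in [0, 1]); inputs must each sum to 1."""
    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability topics
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 1.0 - 0.5 * (kl(p, m) + kl(q, m))

# hypothetical 3-topic distributions inferred for three documents
doc_a = [0.70, 0.20, 0.10]
doc_b = [0.65, 0.25, 0.10]
doc_c = [0.05, 0.15, 0.80]
```

    Documents dominated by the same topics (doc_a, doc_b) score close to 1, while topically disjoint ones (doc_a, doc_c) score near 0; a K-means evaluation as in the paper would cluster documents in this topic space.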

  10. Modeling Land Use Change In A Tropical Environment Using Similar Hydrologic Response Units

    Science.gov (United States)

    Guardiola-Claramonte, M.; Troch, P.

    2006-12-01

    Montane mainland South East Asia comprises areas of great biological and cultural diversity. Over the last decades the region has overcome an important conversion from traditional agriculture to cash crop agriculture driven by regional and global markets. Our study aims at understanding the hydrological implications of these land use changes at the catchment scale. In 2004, networks of hydro-meteorological stations observing water and energy fluxes were installed in two 70 km2 catchments in Northern Thailand (Chiang Mai Province) and Southern China (Yunnan Province). In addition, a detailed soil surveying campaign was done at the moment of instrument installation. Land use is monitored periodically using satellite data. The Thai catchment is switching from small agricultural fields to large extensions of cash crops. The Chinese catchment is replacing the traditional forest for rubber plantations. A first comparative study based on catchments' geomorphologic characteristics, field observations and rainfall-runoff response revealed the dominant hydrologic processes in the catchments. Land use information is then translated into three different Hydrologic Response Units (HRU): rice paddies, pervious and impervious surfaces. The pervious HRU include different land uses such as different stages of forest development, rubber plantations, and agricultural fields; the impervious ones are urban areas, roads and outcrops. For each HRU a water and energy balance model is developed incorporating field observed hydrologic processes, measured field parameters, and literature-based vegetation and soil parameters to better describe the root zone, surface and subsurface flow characteristics without the need of further calibration. The HRU water and energy balance models are applied to single hillslopes and their integrated hydrologic response are compared for different land covers. Finally, the response of individual hillslopes is routed through the channel network to represent

  11. Self-similar semi-analytical RMHD jet model: first steps towards a more comprehensive jet modelling for data fitting

    Science.gov (United States)

    Markoff, Sera; Ceccobello, Chiara; Heemskerk, Martin; Cavecchi, Yuri; Polko, Peter; Meier, David

    2017-08-01

    Jets are ubiquitous and reveal themselves at different scales and redshifts, showing an extreme diversity in energetics, shapes and emission. Indeed, jets are found to be characteristic features of black hole systems, such as X-ray binaries (XRBs) and active galactic nuclei (AGN), as well as of young stellar objects (YSOs) and gamma-ray bursts (GRBs). Observations suggest that jets are an energetically important component of the system that hosts them, because the jet power appears to be comparable to the accretion power. Significant evidence has been found of the impact of jets not only in the immediate proximity of the central object, but also on their surrounding environment, where they deposit the energy extracted from the accretion flow. Moreover, the inflow/outflow system produces radiation over the entire electromagnetic spectrum, from radio to X-rays. It is therefore a compelling problem to solve and understand deeply. I present a new integration scheme for solving the radially self-similar, stationary, axisymmetric relativistic magnetohydrodynamic (MHD) equations describing collimated, relativistic outflows, crossing all the singular points (the Alfvén point and the modified slow/fast points) smoothly. For the first time, the integration can be performed all the way from the disk mid-plane to downstream of the modified fast point. I will discuss an ensemble of jet solutions showing diverse jet dynamics (jet Lorentz factor ~ 1-10) and geometric properties (i.e. shock height ~ 10^3 - 10^7 gravitational radii), which makes our model suitable for application to many different systems where a relativistic jet is launched.

  12. Improvements in continuum modeling for biomolecular systems

    Science.gov (United States)

    Yu, Qiao; Ben-Zhuo, Lu

    2016-01-01

    Modeling of biomolecular systems plays an essential role in understanding biological processes, such as ionic flow across channels, protein modification or interaction, and cell signaling. The continuum model described by the Poisson-Boltzmann (PB)/Poisson-Nernst-Planck (PNP) equations has made great contributions towards simulation of these processes. However, the model has shortcomings in its commonly used form and cannot capture (or cannot accurately capture) some important physical properties of the biological systems. Considerable efforts have been made to improve the continuum model to account for discrete particle interactions and to make progress in numerical methods to provide accurate and efficient simulations. This review will summarize recent main improvements in continuum modeling for biomolecular systems, with focus on the size-modified models, the coupling of the classical density functional theory and the PNP equations, the coupling of polar and nonpolar interactions, and numerical progress. Project supported by the National Natural Science Foundation of China (Grant No. 91230106) and the Chinese Academy of Sciences Program for Cross & Cooperative Team of the Science & Technology Innovation.

  13. Establishment of the Comprehensive Shape Similarity Model for Complex Polygon Entity by Using Bending Multilevel Chord Complex Function

    Directory of Open Access Journals (Sweden)

    CHEN Zhanlong

    2016-02-01

    A method for measuring the shape similarity of complex holed objects is proposed in this paper. The method extracts features, including centroid distance, multilevel chord length, bending degree and concavity-convexity of a geometric object, to construct complex functions based on multilevel bending degree and radius. The complex functions are capable of describing a geometric shape from the whole to its parts. The similarity between geometric objects can be measured by a shape descriptor based on the fast Fourier transform of the complex functions. Meanwhile, the matching degree of each scene of complex holed polygons can be obtained from scene completeness and the shape similarity model, and the multilevel feature makes shape similarity measurement among complex geometric objects possible. Experiments on geometric objects of differing spatial complexity show that the results match human perception and that the method is simple and precise.
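    The general idea of a Fourier shape descriptor can be sketched with stdlib Python: a boundary signal (here just the centroid distance, the simplest of the features listed) is transformed by a DFT, and the coefficient magnitudes are invariant to the starting vertex and, after normalization, to uniform scaling. The sampling and normalization choices below are assumptions, not the paper's exact construction:

```python
import cmath
import math

def descriptor(points, n_coeffs=4):
    """Normalized magnitude spectrum of the centroid-distance signal of
    a closed boundary given as a cyclic list of (x, y) points."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    sig = [math.hypot(x - cx, y - cy) for x, y in points]  # centroid distance
    mags = []
    for k in range(n_coeffs):
        coeff = sum(sig[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        mags.append(abs(coeff))
    total = sum(mags)               # normalization removes uniform scale
    return [m / total for m in mags]

# unit square sampled at its corners and edge midpoints
square = [(0, 0), (0.5, 0), (1, 0), (1, 0.5),
          (1, 1), (0.5, 1), (0, 1), (0, 0.5)]
```

    Because DFT magnitudes ignore cyclic shifts of the input, starting the traversal at a different vertex, or scaling the polygon, leaves the descriptor unchanged, which is exactly the kind of invariance a shape-similarity measure needs.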

  14. Self-similar voiding solutions of a single layered model of folding rocks

    CERN Document Server

    Dodwell, Timothy; Budd, Christopher; Hunt, Giles

    2011-01-01

    In this paper we derive an obstacle problem with a free boundary to describe the formation of voids at areas of intense geological folding. An elastic layer is forced by overburden pressure against a V-shaped rigid obstacle. Energy minimization leads to representation as a nonlinear fourth-order ordinary differential equation, for which we prove there exists a unique solution. Drawing parallels with the Kuhn-Tucker theory, virtual work, and ideas of duality, we highlight the physical significance of this differential equation. Finally, we show that this equation scales to a single parametric group, revealing a scaling law connecting the size of the void with the pressure/stiffness ratio. This paper is seen as the first step towards a full multilayered model with the possibility of voiding.

  15. Analysis and improvement of Brinkman lattice Boltzmann schemes: bulk, boundary, interface. Similarity and distinctness with finite elements in heterogeneous porous media.

    Science.gov (United States)

    Ginzburg, Irina; Silva, Goncalo; Talon, Laurent

    2015-02-01

    This work focuses on the numerical solution of the Stokes-Brinkman equation for a voxel-type porous-media grid, resolved by one to eight spacings per permeability contrast of 1 to 10 orders in magnitude. It is first analytically demonstrated that the lattice Boltzmann method (LBM) and the linear-finite-element method (FEM) both suffer from the viscosity correction induced by the linear variation of the resistance with the velocity. This numerical artefact may lead to an apparent negative viscosity in low-permeable blocks, inducing spurious velocity oscillations. The two-relaxation-times (TRT) LBM may control this effect thanks to its freely tunable two-rate combination Λ. Moreover, the Brinkman-force-based BF-TRT schemes may maintain the nondimensional Darcy group and produce viscosity-independent permeability provided that the spatial distribution of Λ is fixed independently of the kinematic viscosity. Such a property is lost not only in the BF-BGK scheme but also by "partial bounce-back" TRT gray models, as shown in this work. Further, we propose a consistent and improved IBF-TRT model which removes the viscosity correction via simple, specific adjustment of the viscous-mode relaxation rate to the local permeability value. This protects the model from velocity fluctuations and, in parallel, improves effective permeability measurements, from porous channel to multidimensions. The framework of our exact analysis employs a symbolic approach developed for both LBM and FEM in single and stratified, unconfined, and bounded channels. It shows that even with similar bulk discretization, BF, IBF, and FEM may manifest quite different velocity profiles on coarse grids due to their intrinsic contrasts in the setting of interface continuity and no-slip conditions. While FEM enforces them on the grid vertexes, the LBM prescribes them implicitly. We derive effective LBM continuity conditions and show that the heterogeneous viscosity correction impacts them, a property also shared

  16. Improving Localization Accuracy: Successive Measurements Error Modeling

    Directory of Open Access Journals (Sweden)

    Najah Abu Ali

    2015-07-01

    Full Text Available Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of the positioning error. We use the Yule–Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a -order Gauss–Markov model to predict the future position of a vehicle from its past positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can extend up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter.
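The Yule–Walker estimation step described above can be sketched in a few lines: estimate autocovariances, solve the Toeplitz system for the AR coefficients, and predict one step ahead. This is a minimal illustration, not the authors' code; the AR order, the synthetic trace, and the helper names are all assumptions.

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR(order) coefficients by solving the Yule-Walker
    equations built from biased autocovariance estimates."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def predict_next(x, phi):
    """One-step-ahead AR prediction from the last len(phi) samples."""
    mu = np.mean(x)
    xc = np.asarray(x, dtype=float) - mu
    return mu + np.dot(phi, xc[-1:-len(phi) - 1:-1])

# synthetic correlated 'position' trace: an AR(1) process with phi = 0.9
rng = np.random.default_rng(0)
x = [0.0]
for _ in range(500):
    x.append(0.9 * x[-1] + rng.normal(scale=0.1))

phi = yule_walker(x, order=1)
print(phi, predict_next(x, phi))   # estimated coefficient close to 0.9
```

With a first-order model, as the abstract suggests, the prediction uses only the current position and a single estimated coefficient.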

  17. An Improved Network Security Situation Awareness Model

    Directory of Open Access Journals (Sweden)

    Li Fangwei

    2015-08-01

    Full Text Available In order to reflect the performance of network security assessment fully and accurately, a new network security situation awareness model based on information fusion was proposed. The network security situation is the result of fusing evaluations of three aspects. In terms of attack, to improve the accuracy of evaluation, a situation assessment method for DDoS attacks based on packet-level information was proposed. In terms of vulnerability, an improved Common Vulnerability Scoring System (CVSS) was proposed to make the assessment more comprehensive. In terms of node weights, a method of calculating combined weights and optimizing the result with the Sequential Quadratic Programming (SQP) algorithm, which reduces the uncertainty of fusion, was proposed. To verify the validity and necessity of the method, a testing platform was built and used for evaluation with the DARPA 2000 data sets. Experiments show that the method can improve the accuracy of evaluation results.

  18. Improved word-based language model

    Institute of Scientific and Technical Information of China (English)

    CHEN Yong(陈勇); CHAN Kwok-ping

    2004-01-01

    In order to construct a good language model for use in the postprocessing phase of a recognition system, a smoothing technique must be used to solve the data sparseness problem. In the past, many smoothing techniques have been proposed; among them, Katz's smoothing technique is well known. However, we found a weakness in Katz's smoothing technique. We improved this approach by incorporating a kind of special Chinese language information and Chinese word-class information into the language model. We tested the new smoothing technique with a Chinese character recognition system. The experimental results showed that a better performance can be achieved.
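The backoff idea behind Katz-style smoothing can be illustrated with a toy bigram model: seen bigrams get a discounted estimate, and the reserved mass is redistributed over unseen continuations via the unigram distribution. This sketch uses absolute discounting rather than Katz's Good-Turing discount, and the corpus and names are invented for illustration.

```python
from collections import Counter

def backoff_bigram(tokens, discount=0.5):
    """Backoff bigram model with absolute discounting: seen bigrams get
    a discounted ML estimate; unseen bigrams back off to the unigram
    distribution, scaled so each context still sums to 1."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total = sum(unigrams.values())

    def prob(w1, w2):
        if (w1, w2) in bigrams:
            return (bigrams[(w1, w2)] - discount) / unigrams[w1]
        # mass reserved for unseen continuations of w1
        seen = [b for b in bigrams if b[0] == w1]
        alpha = discount * len(seen) / unigrams[w1]
        unseen_mass = 1.0 - sum(unigrams[b[1]] for b in seen) / total
        return alpha * (unigrams[w2] / total) / unseen_mass

    return prob

toks = "the cat sat on the mat the cat ate".split()
p = backoff_bigram(toks)
print(p("the", "cat"))   # discounted bigram estimate, here 0.5
```

For any context, the discounted seen probabilities plus the backed-off unseen probabilities sum to one, which is the property a valid smoothing scheme must preserve.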

  19. SIMILARITIES BETWEEN THE KNOWLEDGE CREATION AND CONVERSION MODEL AND THE COMPETING VALUES FRAMEWORK: AN INTEGRATIVE APPROACH

    Directory of Open Access Journals (Sweden)

    PAULO COSTA

    2016-12-01

    Full Text Available ABSTRACT Contemporaneously, and with the successive paradigmatic revolutions inherent to management since the 17th century, we are witnessing a new era marked by a structural rupture in the way organizations are perceived. Market globalization, cemented by rapid technological evolution and associated with economic, cultural, political and social transformations, characterizes a reality where uncertainty is the only certainty for organizations and managers. Knowledge management has been interpreted by managers and academics as a viable alternative in a logic of creation and conservation of sustainable competitive advantages. However, there are several barriers to the implementation and development of knowledge management programs in organizations, with organizational culture being one of the most preponderant. In this sense, in this article we analyze and compare the Knowledge Creation and Conversion Model proposed by Nonaka and Takeuchi (1995) and Quinn and Rohrbaugh's Competing Values Framework (1983), since both have convergent conceptual lines that can assist managers in different sectors to guide their organizations towards productivity, quality and market competitiveness.

  20. Improving data transfer for model coupling

    Science.gov (United States)

    Zhang, C.; Liu, L.; Yang, G.; Li, R.; Wang, B.

    2015-10-01

    Data transfer, which means transferring data fields between two component models or rearranging data fields among processes of the same component model, is a fundamental operation of a coupler. Most state-of-the-art couplers currently use an implementation based on the point-to-point (P2P) communication of the Message Passing Interface (MPI) (we call such an implementation the "P2P implementation" for short). In this paper, we reveal the drawbacks of the P2P implementation, including low communication bandwidth due to small message sizes, a large and variable number of MPI messages, and jams during communication. To overcome these drawbacks, we propose a butterfly implementation for data transfer. Although the butterfly implementation can outperform the P2P implementation in many cases, it degrades performance in some cases because the total message size transferred by the butterfly implementation is larger than that of the P2P implementation. To improve data transfer across the board, we design and implement an adaptive data transfer library that combines the advantages of both the butterfly and the P2P implementations. Performance evaluation shows that the adaptive data transfer library significantly improves the performance of data transfer in most cases and does not decrease performance in any case. The adaptive data transfer library is now open to the public and has been imported into the coupler C-Coupler1 to improve its data transfer performance. We believe that it can also improve other couplers.
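The butterfly pattern the authors propose can be illustrated by simulating its communication rounds without MPI: in round r each rank exchanges its accumulated data with the rank whose index differs in bit r, so all P ranks share all fields after log2(P) rounds instead of P-1 point-to-point messages each. The function and field names below are hypothetical.

```python
import math

def butterfly_allgather(data):
    """Simulate the butterfly exchange: in round r, rank i swaps its
    accumulated fields with rank i XOR 2**r, so every rank holds all
    fields after log2(P) rounds."""
    P = len(data)
    assert P & (P - 1) == 0, "butterfly assumes a power-of-two rank count"
    held = [dict(d) for d in data]
    for r in range(int(math.log2(P))):
        step = 1 << r
        nxt = [dict(h) for h in held]
        for rank in range(P):
            nxt[rank].update(held[rank ^ step])  # merge partner's fields
        held = nxt
    return held

# 8 'ranks', each initially owning one named field of a coupled model
fields = [{f"field{r}": float(r)} for r in range(8)]
result = butterfly_allgather(fields)
print([len(h) for h in result])   # every rank ends with all 8 fields
```

Note the tradeoff the abstract describes: each rank sends only log2(P) messages, but every message carries the accumulated fields, so the total transferred volume can exceed that of the P2P scheme.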

  1. The Influence of Matrix Size on Statistical Properties of Co-Occurrence and Limiting Similarity Null Models.

    Science.gov (United States)

    Lavender, Thomas Michael; Schamp, Brandon S; Lamb, Eric G

    2016-01-01

    Null models exploring species co-occurrence and trait-based limiting similarity are increasingly used to explore the influence of competition on community assembly; however, assessments of common models have not thoroughly explored the influence of variation in matrix size on error rates, in spite of the fact that studies have explored community matrices that vary considerably in size. To determine how smaller matrices, which are of greatest concern, perform statistically, we generated biologically realistic presence-absence matrices ranging in size from 3-50 species and sites, as well as associated trait matrices. We examined co-occurrence tests using the C-score statistic and the independent swap algorithm. For trait-based limiting similarity null models, we used the mean nearest-neighbour trait distance (NN) and the standard deviation of nearest-neighbour distances (SDNN) as test statistics, and considered two common randomization algorithms: abundance-independent trait shuffling (AITS) and abundance-weighted trait shuffling (AWTS). Matrices as small as three × three resulted in acceptable type I error rates; increased type I error rates were observed particularly for matrices with fewer than eight species. Type I error rates increased for limiting similarity tests using the AWTS randomization scheme when community matrices contained more than 35 sites; a similar randomization used in null models of phylogenetic dispersion has previously been viewed as robust. Notwithstanding other potential deficiencies related to the use of small matrices to represent communities, the application of both classes of null model should be restricted to matrices with 10 or more species to avoid the possibility of type II errors. Additionally, researchers should restrict the use of the AWTS randomization to matrices with fewer than 35 sites to avoid type I errors when testing for trait-based limiting similarity. The AITS randomization scheme performed better in terms of
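A minimal sketch of the co-occurrence machinery discussed above (the C-score statistic and the independent swap randomization, which preserves row and column totals) might look as follows; the matrix dimensions, fill probability and swap count are arbitrary choices, not the study's settings.

```python
import numpy as np

def c_score(M):
    """Stone & Roberts C-score: mean number of checkerboard units
    over all species pairs of a presence-absence matrix (species x sites)."""
    S = M.shape[0]
    total, pairs = 0.0, 0
    for i in range(S):
        for j in range(i + 1, S):
            shared = int(np.sum(M[i] & M[j]))
            total += (M[i].sum() - shared) * (M[j].sum() - shared)
            pairs += 1
    return total / pairs

def independent_swap(M, n_swaps, rng, max_tries=100000):
    """Randomize a matrix while preserving row and column totals by
    repeatedly flipping randomly chosen 2x2 checkerboard submatrices."""
    M = M.copy()
    S, N = M.shape
    done = tries = 0
    while done < n_swaps and tries < max_tries:
        tries += 1
        r = rng.choice(S, 2, replace=False)
        c = rng.choice(N, 2, replace=False)
        sub = M[np.ix_(r, c)]
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] \
                and sub[0, 0] != sub[0, 1]:
            M[np.ix_(r, c)] = sub[::-1]  # flip rows: the checkerboard swap
            done += 1
    return M

rng = np.random.default_rng(1)
M = (rng.random((10, 12)) < 0.4).astype(int)
null = independent_swap(M, 200, rng)
print(c_score(M), c_score(null))
```

A null-model test would repeat the randomization many times and compare the observed C-score against the resulting null distribution; the marginal-total invariant is what makes the swap "independent" of species prevalence.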

  2. The Influence of Matrix Size on Statistical Properties of Co-Occurrence and Limiting Similarity Null Models.

    Directory of Open Access Journals (Sweden)

    Thomas Michael Lavender

    Full Text Available Null models exploring species co-occurrence and trait-based limiting similarity are increasingly used to explore the influence of competition on community assembly; however, assessments of common models have not thoroughly explored the influence of variation in matrix size on error rates, in spite of the fact that studies have explored community matrices that vary considerably in size. To determine how smaller matrices, which are of greatest concern, perform statistically, we generated biologically realistic presence-absence matrices ranging in size from 3-50 species and sites, as well as associated trait matrices. We examined co-occurrence tests using the C-score statistic and the independent swap algorithm. For trait-based limiting similarity null models, we used the mean nearest-neighbour trait distance (NN) and the standard deviation of nearest-neighbour distances (SDNN) as test statistics, and considered two common randomization algorithms: abundance-independent trait shuffling (AITS) and abundance-weighted trait shuffling (AWTS). Matrices as small as three × three resulted in acceptable type I error rates; increased type I error rates were observed particularly for matrices with fewer than eight species. Type I error rates increased for limiting similarity tests using the AWTS randomization scheme when community matrices contained more than 35 sites; a similar randomization used in null models of phylogenetic dispersion has previously been viewed as robust. Notwithstanding other potential deficiencies related to the use of small matrices to represent communities, the application of both classes of null model should be restricted to matrices with 10 or more species to avoid the possibility of type II errors. Additionally, researchers should restrict the use of the AWTS randomization to matrices with fewer than 35 sites to avoid type I errors when testing for trait-based limiting similarity. The AITS randomization scheme performed better

  3. Two Echelon Supply Chain Integrated Inventory Model for Similar Products: A Case Study

    Science.gov (United States)

    Parjane, Manoj Baburao; Dabade, Balaji Marutirao; Gulve, Milind Bhaskar

    2017-06-01

    The purpose of this paper is to develop a mathematical model for minimizing the total cost across echelons in a multi-product supply chain environment. The scenario under consideration is a two-echelon supply chain system with one manufacturer, one retailer and M products. The retailer faces independent Poisson demand for each product. The retailer and the manufacturer are closely coupled in the sense that information about any depletion in the inventory of a product at the retailer's end is immediately available to the manufacturer. Further, stock-outs are backordered at the retailer's end. Thus the costs incurred at the retailer's end are the holding costs and the backorder costs. The manufacturer has only one processor, which is time-shared among the M products. Production changeover from one product to another entails a fixed setup cost and a fixed setup time. Each unit of a product has a production time. Considering the cost components, and assuming transportation time and cost to be negligible, the objective of the study is to minimize the expected total cost considering both the manufacturer and the retailer. In the process, two aspects must be defined. Firstly, every time a product is taken up for production, how much of it (the production batch size, q) should be produced? A large value of q favors the manufacturer, while a small value of q suits the retailer. Secondly, for a given batch size q, at what level S of the retailer's inventory (the production queuing point) should a product be taken up for production by the manufacturer? A higher value of S incurs more holding cost, whereas a lower value of S increases the chance of backorder. A tradeoff between the holding and backorder costs must be taken into consideration while choosing an optimal value of S. It may be noted that due to multiple products and a single processor, a product 'taken' up for production may not get the processor immediately, and may have to wait in a queue. The 'S
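The q-versus-S tradeoff described above can be illustrated with a stylized cost function: setup cost amortized over batches, holding cost growing with both q and S, and a Poisson backorder penalty shrinking with S. This is a simplified single-product caricature of the paper's model, and every parameter value is invented.

```python
import math

def expected_shortage(S, lam):
    """E[(X - S)^+] for Poisson(lam) lead-time demand, truncated sum."""
    term = math.exp(-lam)   # P(X = 0)
    es = 0.0
    for k in range(1, S + 200):
        term *= lam / k     # Poisson pmf recurrence: P(k) = P(k-1)*lam/k
        if k > S:
            es += (k - S) * term
    return es

def total_cost(q, S, demand=50.0, setup=100.0, hold=2.0, back=20.0, lam=5.0):
    """Stylized per-period cost: setup amortized over batches of size q,
    holding on cycle stock q/2 plus safety stock S, and a backorder penalty
    proportional to the expected Poisson shortage beyond S."""
    return setup * demand / q + hold * (q / 2.0 + S) + back * expected_shortage(S, lam)

# grid search over batch size q and production queuing point S
best = min((total_cost(q, S), q, S) for q in range(1, 101) for S in range(0, 21))
print(best)
```

The grid search lands near the EOQ-style batch size and stops raising S once the marginal backorder saving falls below the marginal holding cost, which is exactly the tradeoff the abstract describes.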

  4. Two Echelon Supply Chain Integrated Inventory Model for Similar Products: A Case Study

    Science.gov (United States)

    Parjane, Manoj Baburao; Dabade, Balaji Marutirao; Gulve, Milind Bhaskar

    2016-03-01

    The purpose of this paper is to develop a mathematical model for minimizing the total cost across echelons in a multi-product supply chain environment. The scenario under consideration is a two-echelon supply chain system with one manufacturer, one retailer and M products. The retailer faces independent Poisson demand for each product. The retailer and the manufacturer are closely coupled in the sense that information about any depletion in the inventory of a product at the retailer's end is immediately available to the manufacturer. Further, stock-outs are backordered at the retailer's end. Thus the costs incurred at the retailer's end are the holding costs and the backorder costs. The manufacturer has only one processor, which is time-shared among the M products. Production changeover from one product to another entails a fixed setup cost and a fixed setup time. Each unit of a product has a production time. Considering the cost components, and assuming transportation time and cost to be negligible, the objective of the study is to minimize the expected total cost considering both the manufacturer and the retailer. In the process, two aspects must be defined. Firstly, every time a product is taken up for production, how much of it (the production batch size, q) should be produced? A large value of q favors the manufacturer, while a small value of q suits the retailer. Secondly, for a given batch size q, at what level S of the retailer's inventory (the production queuing point) should a product be taken up for production by the manufacturer? A higher value of S incurs more holding cost, whereas a lower value of S increases the chance of backorder. A tradeoff between the holding and backorder costs must be taken into consideration while choosing an optimal value of S. It may be noted that due to multiple products and a single processor, a product 'taken' up for production may not get the processor immediately, and may have to wait in a queue. The 'S

  5. Using data assimilation for systematic model improvement

    Science.gov (United States)

    Lang, Matthew S.; van Leeuwen, Peter Jan; Browne, Phil

    2016-04-01

    In Numerical Weather Prediction, parameterisations are used to simulate missing physics in the model. These can be due to a lack of scientific understanding or a lack of computing power available to address all the known physical processes. Parameterisations are sources of large uncertainty in a model, as parameter values used in these parameterisations cannot be measured directly and hence are often not well known, and the parameterisations themselves are approximations of the processes present in the true atmosphere. Whilst there are many efficient and effective methods for combined state/parameter estimation in data assimilation, such as state augmentation, these are not effective at estimating the structure of parameterisations. A new method of parameterisation estimation is proposed that uses sequential data assimilation methods to estimate errors in the numerical models at each space-time point for each model equation. These errors are then fitted to predetermined functional forms of missing physics or parameterisations that are based upon prior information. The method picks out the functional form, or the combination of functional forms, that best fits the error structure. The prior information typically takes the form of expert knowledge. We applied the method to a one-dimensional advection model with additive model error, and it is shown that the method can accurately estimate parameterisations, with consistent error estimates. It is also demonstrated that state augmentation is not successful. The results indicate that this new method is a powerful tool in systematic model improvement.
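The fitting step of the proposed method (regressing estimated model errors onto candidate functional forms and keeping the best fit) can be sketched on a toy state. The candidate forms and the "true" missing physics below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 2 * np.pi, 100)
u = np.sin(x) + 0.5 * np.sin(3 * x)        # toy model state

# 'true' missing physics: a diffusion-like term (invented for the demo)
true_err = 0.05 * np.gradient(np.gradient(u, x), x)
observed_err = true_err + rng.normal(scale=0.002, size=x.size)

# candidate functional forms proposed from expert knowledge
candidates = {
    "linear_drag": -u,
    "advection": np.gradient(u, x),
    "diffusion": np.gradient(np.gradient(u, x), x),
}

# fit each candidate to the estimated errors and keep the best
fits = {}
for name, basis in candidates.items():
    coef, *_ = np.linalg.lstsq(basis[:, None], observed_err, rcond=None)
    resid = float(np.sum((observed_err - basis * coef[0]) ** 2))
    fits[name] = (float(coef[0]), resid)

best = min(fits, key=lambda k: fits[k][1])
print(best, fits[best])   # the diffusion form, with coefficient near 0.05
```

The regression both selects the functional form with the smallest residual and returns its coefficient, mirroring the paper's idea of turning per-gridpoint error estimates into a structural parameterisation.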

  6. An improved damaging model for structured clays

    Institute of Scientific and Technical Information of China (English)

    姜岩; 雷华阳; 郑刚; 徐舜华

    2008-01-01

    An improved damaging model for structured clays, formulated within the framework of a bounding surface, was proposed. The model is intended to describe the effects of structure degradation due to geotechnical loading. The predictive capability of the model was compared with the results of triaxial compression tests on Tianjin soft clays. The results show that, by incorporating a new damage function into the model, the reduction of the elastic bulk and shear moduli with elastic deformation and the reduction of the plastic bulk and shear moduli with plastic deformation can be represented appreciably. Before the axial strain reaches 15%, the axial strain computed from the model is smaller than that from the test under the drained condition. Under the undrained condition, after the axial strain reaches 1%, the axial strain increases quickly because of the complete loss of structure and stiffness, and the result computed from the model is nearly equal to that from the model without the damage function, due to the smaller plastic strain in the undrained test.

  7. Similar Single-Difference Model and Its Algorithm for Solving GPS Monitoring Deformation Directly at Single Epoch

    Institute of Scientific and Technical Information of China (English)

    YU Xuexiang; XU Shaoquan; GAO Wei; LU Weicai

    2003-01-01

    A new similar single-difference mathematical model (SSDM) and its corresponding algorithm are advanced to solve the deformation of a monitoring point directly in a single epoch. The method for building the SSDM is introduced in detail, the main error sources affecting the accuracy of deformation measurement are analyzed briefly, and the basic algorithm and steps of solving the deformation are discussed. In order to validate the correctness and the accuracy of the similar single-difference model, a test with five dual-frequency receivers was carried out on a slideway which moved in a plane in Feb. 2001. In the test, five sessions were observed. The numerical results of the test data show that the advanced model is correct.

  8. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono

    2011-04-07

    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  9. An Improved Functional Hierarchy Frame Model for System Maintainability

    Institute of Scientific and Technical Information of China (English)

    CHEN Dai-Lin; CHEN Dong-lin; WANG Ru-gen; ZHU Xue-ping

    2003-01-01

    By means of analogy, this paper analyses the present functional hierarchy frame model for system maintainability and presents an improved model. Practical application indicates that the improved model is more intuitive, more convenient, and improved over previous models.

  10. [Establishment of the mathematic model of total quantum statistical moment standard similarity for application to medical theoretical research].

    Science.gov (United States)

    He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian

    2013-09-01

    The paper aims to elucidate and establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate the application of the model to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, then validated and illustrated using the pharmacokinetics of three ingredients in Buyanghuanwu decoction and three data-analytical methods for them, and by analysis of the chromatographic fingerprints of various extracts obtained by dissolving the Buyanghuanwu-decoction extract in solvents of different solubility parameters. The established model consists of the following main parameters: (1) the total quantum statistical moment similarity ST, the area overlapped by the two normal distribution probability density curves obtained by conversion of the two TQSM parameters; (2) the total variability DT, a confidence limit of the standard normal accumulation probability, equal to the absolute difference between the two normal accumulation probabilities within the integration of their curve intersection; (3) the total variable probability 1-ST, the standard normal distribution probability within the interval of DT; (4) the total variable probability (1-beta)alpha; and (5) the stable confident probability beta(1-alpha): the correct probabilities for making positive and negative conclusions under confidence coefficient alpha.
    With the model, we analyzed the TQSMSS similarities of the pharmacokinetics of three ingredients in Buyanghuanwu decoction and of the three data-analytical methods for them, which ranged from 0.3852 to 0.9875, illuminating their different pharmacokinetic behaviors; the TQSMSS similarities (ST) of the chromatographic fingerprints of various extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract ranged from 0.6842 to 0.9992, showing different constituents

  11. Scaling and interaction of self-similar modes in models of high-Reynolds number wall turbulence

    CERN Document Server

    Sharma, A S; McKeon, B J

    2016-01-01

    Previous work has established the usefulness of the resolvent operator that maps the terms nonlinear in the turbulent fluctuations to the fluctuations themselves. Further work has described the self-similarity of the resolvent arising from that of the mean velocity profile. The orthogonal modes provided by the resolvent analysis describe the wall-normal coherence of the motions and inherit that self-similarity. In this contribution, we present the implications of this similarity for the nonlinear interaction between modes with different scales and wall-normal locations. By considering the nonlinear interactions between modes, it is shown that much of the turbulence scaling behaviour in the logarithmic region can be determined from a single arbitrarily chosen reference plane. Thus, the geometric scaling of the modes is impressed upon the nonlinear interaction between modes. Implications of these observations on the self-sustaining mechanisms of wall turbulence, modelling and simulation are outlined.

  12. OPC model sampling evaluation and weakpoint "in-situ" improvement

    Science.gov (United States)

    Fu, Nan; Elshafie, Shady; Ning, Guoxiang; Roling, Stefan

    2016-10-01

    One of the major challenges of optical proximity correction (OPC) models is to maximize the coverage of real design features using the sampling pattern. Normally, OPC model building is based on 1-D and 2-D test patterns with systematically changing pitches in alignment with design rules. However, given the infinite variety of IC designs and the limited number of model test patterns, features with different optical and geometric properties will generate weak points where OPC simulation cannot precisely predict resist contours on the wafer. In this paper, optical-property data of real design features were collected from full chips and classified for comparison with the same kind of data from OPC test patterns. Sample coverage could therefore be visually mapped according to different optical properties. Design features which are beyond OPC capability were distinguished by their optical properties and marked as weak points. New patterns with similar optical properties would be added into the model-build site list. Further, an alternative and more efficient method was created in this paper to improve the treatment of issue features and remove weak points without rebuilding models. Since certain classes of optical properties will generate weak points, an OPC-integrated repair algorithm was developed and implemented to scan the full chip for optical properties, locate those features, and then optimize the OPC treatment or apply precise sizing on site. This is a so-called "in-situ" weak-point improvement flow, which includes issue-feature definition, location in the full chip, and real-time improvement.

  13. Improving surgeon utilization in an orthopedic department using simulation modeling

    Directory of Open Access Journals (Sweden)

    Simwita YW

    2016-10-01

    Full Text Available Yusta W Simwita, Berit I Helgheim Department of Logistics, Molde University College, Molde, Norway Purpose: Worldwide, more than two billion people lack appropriate access to surgical services due to the mismatch between the existing human resources and patient demand. Improving the utilization of the existing workforce capacity can reduce the gap between surgical demand and available workforce capacity. In this paper, the authors use discrete event simulation to explore the care process at an orthopedic department. Our main focus is improving the utilization of surgeons while minimizing patient wait time. Methods: The authors collaborated with orthopedic department personnel to map the current operations of the orthopedic care process in order to identify factors that cause poor surgeon utilization and high patient waiting times. The authors used an observational approach to collect data. The developed model was validated by comparing the simulation output with actual patient data collected from the studied orthopedic care process. The authors developed a proposal scenario to show how to improve surgeon utilization. Results: The simulation results showed that if ancillary services could be performed before the start of clinic examination services, the orthopedic care process could be greatly improved, that is, with improved surgeon utilization and reduced patient waiting time. The simulation results demonstrate that with improved surgeon utilization, up to a 55% increase in future demand can be accommodated without patients exceeding the current waiting time at this clinic, thus improving patient access to health care services. Conclusion: This study shows how simulation modeling can be used to improve health care processes. This study was limited to a single care process; however, the findings can be applied to improve other orthopedic care processes with similar operational characteristics. Keywords: waiting time, patient, health care process
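The core of such a discrete event simulation can be sketched with a toy single-surgeon model: moving a 5-minute ancillary step ahead of the surgeon slot shortens the service time the surgeon spends per patient, cutting queues and freeing capacity for extra demand. All timing parameters are invented, not the study's data.

```python
import random

def simulate(ancillary_first, n_patients=500, seed=3):
    """Toy single-surgeon clinic. Each patient needs a 5 min ancillary
    step and a 10 min exam. If ancillary_first, the ancillary step is
    completed before the surgeon slot, so the surgeon spends only the
    exam time per patient; otherwise it consumes surgeon time too."""
    rng = random.Random(seed)
    t = 0.0                # arrival clock
    surgeon_free = 0.0     # time the surgeon next becomes available
    busy = 0.0
    waits = []
    for _ in range(n_patients):
        t += rng.expovariate(1 / 20.0)   # mean 20 min between arrivals
        service = 10.0 if ancillary_first else 15.0
        start = max(t, surgeon_free)
        waits.append(start - t)
        surgeon_free = start + service
        busy += service
    return busy / surgeon_free, sum(waits) / len(waits)

base_util, base_wait = simulate(ancillary_first=False)
impr_util, impr_wait = simulate(ancillary_first=True)
print(base_wait, impr_wait)   # waits drop when ancillary precedes the exam
```

In this sketch the surgeon's busy fraction on the same demand also drops, which is the freed capacity that lets the clinic absorb additional future demand without longer waits.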

  14. Steps towards improvement of Latvian geoid model

    Science.gov (United States)

    Janpaule, Inese; Balodis, Janis

    2013-04-01

    A high-precision geoid model is essential for normal height determination when GNSS positioning methods are used. In Latvia, the gravimetric geoid model LV'98 has been broadly used by surveyors and scientists for more than 10 years. The computation of this model was performed with the GRAVSOFT software, using gravimetric measurements, digitised gravimetric data and satellite altimetry data over the Baltic Sea; the estimated accuracy of the LV'98 geoid model is 6-8 cm (J. Kaminskis, 2010). However, the accuracy of the Latvian geoid model should be improved. In order to accomplish this task, the evaluation of several methods and test computations have been made. The KTH method was developed at the Royal Institute of Technology (KTH) in Stockholm. This method utilizes the least-squares modification of the Stokes integral for the biased, unbiased, and optimum stochastic solutions. The modified Bruns-Stokes integral combines the regional terrestrial gravity data with a global geopotential model (GGM) (R. Kiamehr, 2006). The DFHRS (Digital Finite-Element Height Reference Surface) method has been developed at the Karlsruhe University of Applied Sciences, Faculty of Geomatics (R. Jäger, 1999). In the DFHRS concept the area is divided into smaller finite elements - meshes. The height reference surface N in each mesh is calculated by a polynomial in terms of the (x,y) coordinates. Groups of meshes form patches, which are related to sets of individual parameters introduced by the datum parametrizations. As input data, the European Gravimetric Geoid Model 1997 (EGG97) and 102 GNSS/levelling points were used. In order to improve the quality and accuracy of the Latvian geoid model, the development of a mobile digital zenith telescope for the determination of vertical deflections with an expected accuracy of 0.1" has commenced at the University of Latvia, Institute of Geodesy and Geoinformation.
    The project was started in 2010; its goal is to design a portable, cheap and robust instrument, using industrially
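The per-mesh polynomial at the heart of the DFHRS concept can be sketched as an ordinary least-squares fit of N(x, y) to GNSS/levelling points. The bilinear form and the synthetic points below are illustrative assumptions, not the method's prescribed parametrization.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic GNSS/levelling points in one mesh: (x, y) and geoid height N [m]
pts = rng.uniform(0.0, 1.0, (40, 2))
N_obs = (20.0 + 0.8 * pts[:, 0] - 0.3 * pts[:, 1]
         + 0.1 * pts[:, 0] * pts[:, 1] + rng.normal(0.0, 0.005, 40))

def fit_mesh_polynomial(xy, N):
    """DFHRS-style mesh surface N(x, y) = a0 + a1*x + a2*y + a3*x*y,
    fitted per mesh by least squares to GNSS/levelling points."""
    A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1],
                         xy[:, 0] * xy[:, 1]])
    coef, *_ = np.linalg.lstsq(A, N, rcond=None)
    return coef

coef = fit_mesh_polynomial(pts, N_obs)
print(coef)   # recovers roughly [20.0, 0.8, -0.3, 0.1]
```

In the full method, neighbouring meshes share datum parameters so the patchwork of local polynomials forms one continuous height reference surface.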

  15. Similarity-based multi-model ensemble approach for 1-15-day advance prediction of monsoon rainfall over India

    Science.gov (United States)

    Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati

    2017-04-01

    The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on a similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. The approach rests on the assumption that training on similar conditions may provide better forecasts than the sequential training used in conventional MME approaches. Here, the training dataset is selected by matching the present-day condition to the archived dataset: the days with the most similar conditions are identified and used to train the model, and the coefficients thus generated are used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. the European Centre for Medium-Range Weather Forecasts (ECMWF), the United Kingdom Meteorological Office (UKMO), the National Centre for Environment Prediction (NCEP) and the China Meteorological Administration (CMA), were used to develop the SMME forecasts. Forecasts for days 1-5, 6-10 and 11-15 were generated with the new approach for each pentad of June-September during the years 2008-2013, and the skill of the model was analysed using verification scores, viz. the equitable threat score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and the Nash-Sutcliffe model efficiency index. Statistical analysis shows that the SMME forecasts have superior skill compared with the conventional MME and the individual models for all the pentads, viz. 1-5, 6-10 and 11-15 days.
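    As an illustration of the verification scores named above, here is a minimal sketch of the MAE, Nash-Sutcliffe efficiency and equitable threat score; the function names and the rain/no-rain threshold are hypothetical choices, not taken from the paper:

```python
import numpy as np

def mae(obs, fc):
    """Mean absolute error between observed and forecast rainfall."""
    return np.mean(np.abs(np.asarray(fc, float) - np.asarray(obs, float)))

def nash_sutcliffe(obs, fc):
    """Nash-Sutcliffe model efficiency index (1 = perfect forecast)."""
    obs, fc = np.asarray(obs, float), np.asarray(fc, float)
    return 1.0 - np.sum((obs - fc) ** 2) / np.sum((obs - obs.mean()) ** 2)

def equitable_threat_score(obs, fc, threshold):
    """ETS for rain/no-rain events defined by a rainfall threshold."""
    o = np.asarray(obs, float) >= threshold
    f = np.asarray(fc, float) >= threshold
    hits = np.sum(o & f)
    misses = np.sum(o & ~f)
    false_alarms = np.sum(~o & f)
    # hits expected by chance for a random forecast with the same event counts
    hits_random = (hits + misses) * (hits + false_alarms) / o.size
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)
```

A perfect forecast gives MAE 0, efficiency 1 and ETS 1; ETS discounts hits obtained by chance, which matters for rare heavy-rain events.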

  16. Similarity Scaling

    Science.gov (United States)

    Schnack, Dalton D.

    In Lecture 10, we introduced a non-dimensional parameter called the Lundquist number, denoted by S. This is just one of many non-dimensional parameters that can appear in the formulations of both hydrodynamics and MHD. These generally express the ratio of the time scale associated with some dissipative process to the time scale associated with either wave propagation or transport by flow. These are important because they define regions in parameter space that separate flows with different physical characteristics. All flows that have the same non-dimensional parameters behave in the same way. This property is called similarity scaling.
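    The Lundquist number mentioned above can be written as the ratio of the resistive diffusion time to the Alfvén transit time, S = mu0 * L * v_A / eta, with eta the plasma resistivity. A minimal sketch in SI units (the helper names are illustrative, not from the lecture):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def alfven_speed(B, rho):
    """Alfven speed v_A = B / sqrt(mu0 * rho) for field B (T), density rho (kg/m^3)."""
    return B / math.sqrt(MU0 * rho)

def lundquist_number(B, rho, L, eta):
    """S = tau_resistive / tau_Alfven = mu0 * L * v_A / eta,
    with L a characteristic length (m) and eta the resistivity (Ohm*m)."""
    return MU0 * L * alfven_speed(B, rho) / eta
```

Because S scales linearly with L, laboratory plasmas and astrophysical plasmas with equal S should exhibit similar resistive MHD behaviour, which is the similarity-scaling idea of the lecture.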

  17. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B.; Fagan, J.R. Jr.

    1995-12-31

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction between various blade rows and by blade-to-blade variation of flow properties. This will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. The tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.

  18. An Improved MUSIC Model for Gibbsite Surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Scott C.; Bickmore, Barry R.; Tadanier, Christopher J.; Rosso, Kevin M.

    2004-06-01

    Here we use gibbsite as a model system with which to test a recently published bond-valence method for predicting intrinsic pKa values for surface functional groups on oxides. At issue is whether the method is adequate when valence parameters for the functional groups are derived from ab initio structure optimization of surfaces terminated by vacuum. If not, ab initio molecular dynamics (AIMD) simulations of solvated surfaces (which are much more computationally expensive) will have to be used. To do this, we had to evaluate extant gibbsite potentiometric titration data for which some estimate of edge and basal surface area was available. Applying BET and recently developed atomic force microscopy methods, we found that most of these data sets were flawed, in that their surface area estimates were probably wrong. Similarly, there may have been problems with many of the titration procedures. However, one data set was adequate on both counts, and we applied our method of intrinsic surface pKa prediction to fitting a MUSIC model to these data with considerable success: several features of the titration data were predicted well. However, the model fit was certainly not perfect, and we experienced some difficulties optimizing highly charged, vacuum-terminated surfaces. Therefore, we conclude that we probably need to perform AIMD simulations of solvated surfaces to adequately predict intrinsic pKa values for surface functional groups.

  19. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction between various blade rows and by blade-to-blade variation of flow properties. The tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in a high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from various blade rows, and non-deterministic (which includes random) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the turbomachinery design and development cycle.

  20. Modeling a Sensor to Improve its Efficacy

    CERN Document Server

    Malakar, N K; Knuth, K H

    2013-01-01

    Robots rely on sensors to provide them with information about their surroundings. However, high-quality sensors can be extremely expensive and cost-prohibitive, so many robotic systems must make do with lower-quality sensors. Here we demonstrate via a case study how modeling a sensor can improve its efficacy when employed within a Bayesian inferential framework. As a test bed we employ a robotic arm that is designed to autonomously take its own measurements using an inexpensive LEGO light sensor to estimate the position and radius of a white circle on a black field. The light sensor integrates the light arriving from a spatially distributed region within its field of view, weighted by its Spatial Sensitivity Function (SSF). We demonstrate that by incorporating an accurate model of the light sensor SSF into the likelihood function of a Bayesian inference engine, an autonomous system can make improved inferences about its surroundings. The method presented here is data-based, fairly general, and made with plu...

  1. Does model performance improve with complexity? A case study with three hydrological models

    Science.gov (United States)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models have become more sophisticated. At the same time, simple conceptual models have also advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) for runoff but not for soil moisture. Furthermore, the most sophisticated model, PREVAH, shows an added value compared to the HBV model only for soil moisture. Focusing on extreme events, we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for the prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance at lower altitudes as opposed to (pre-)alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).

  2. On Internet Topology Modeling and an Improved BA Model

    Directory of Open Access Journals (Sweden)

    Ye XU

    2011-03-01

    Full Text Available The modeling of Internet topology structure is studied in this paper. First, measurement results of Internet topology from CAIDA monitors are used to produce a complete topology sample. With this sample, the frequency-degree, degree-rank and CCDF(d)-degree power laws are studied to outline the network's power-law properties. The frequency-degree power law is found to have a power exponent of 2.1406. The degree-rank power law, however, is found to have two phases of power-law relationships, with power exponents of 0.29981 and 0.84639 respectively. We then improve the traditional BA model to construct an Internet topology model (the Improved BA model, or IBA model) and optimize the IBA model with a genetic algorithm using the power exponents obtained from the frequency-degree and degree-rank power-law analyses. A generation algorithm for the IBA model is given at the end.
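    A frequency-degree power-law exponent such as the 2.1406 reported above is commonly estimated by a least-squares line fit in log-log space. A minimal sketch of that estimation step (an illustration of the general technique, not the authors' code):

```python
import numpy as np

def powerlaw_exponent(degrees):
    """Estimate gamma in f(d) ~ d**(-gamma) from a list of node degrees
    by fitting a straight line to (log d, log frequency)."""
    d, f = np.unique(np.asarray(degrees), return_counts=True)
    slope, _ = np.polyfit(np.log(d), np.log(f), 1)
    return -slope
```

For a degree sample whose frequencies fall exactly as f = 16 * d**-2 (sixteen degree-1 nodes, four degree-2 nodes, one degree-4 node), the estimator recovers gamma = 2. Real degree distributions are noisy in the tail, so logarithmic binning or maximum-likelihood fitting is often preferred.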

  3. An analytically linearized helicopter model with improved modeling accuracy

    Science.gov (United States)

    Jensen, Patrick T.; Curtiss, H. C., Jr.; Mckillip, Robert M., Jr.

    1991-01-01

    An analytically linearized model for helicopter flight response including rotor blade dynamics and dynamic inflow, that was recently developed, was studied with the objective of increasing the understanding, the ease of use, and the accuracy of the model. The mathematical model is described along with a description of the UH-60A Black Hawk helicopter and flight test used to validate the model. To aid in utilization of the model for sensitivity analysis, a new, faster, and more efficient implementation of the model was developed. It is shown that several errors in the mathematical modeling of the system caused a reduction in accuracy. These errors in rotor force resolution, trim force and moment calculation, and rotor inertia terms were corrected along with improvements to the programming style and documentation. Use of a trim input file to drive the model is examined. Trim file errors in blade twist, control input phase angle, coning and lag angles, main and tail rotor pitch, and uniform induced velocity, were corrected. Finally, through direct comparison of the original and corrected model responses to flight test data, the effect of the corrections on overall model output is shown.

  4. A decision-making model of development intensity based on similarity relationship between land attributes intervened by urban design

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The paper presents a dynamic model, intervened by urban design, for decision-making on land development intensity; it expresses the inherent interaction mechanism between lands based on the evaluation of land attributes and their similarity relationships. Each land unit is described by several factors according to its condition and potential for development, such as land function, accessibility, historical site control, landscape control, and so on. A dynamic reference relationship between land units is then established according to the similarity between their factors: lands with similar conditions tend to have similar development intensities, which expresses the rule of spontaneous urban development. Therefore, the development intensities of pending lands can be calculated from the confirmed ones. Furthermore, the system can be actively intervened by adjusting the parameters according to urban design or planning intentions, and the reaction of the system offers effective support and reference for reasonable decisions. The system, with multiple intervention inputs, is not only a credible tool for deriving development intensities, but also a platform to activate urban design conceptions. Above all, the system, as a socio-technical tool, integrates the optimization of form, function and environment, and embodies the principles of impersonality, justice and flexibility in decisions on land development intensity.
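    The similarity-based inference described above can be sketched as a similarity-weighted average over confirmed parcels; the numeric attribute encoding and the 1/(1 + distance) similarity measure below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def infer_intensity(pending_attrs, confirmed_attrs, confirmed_intensity):
    """Development intensity of a pending parcel as the similarity-weighted
    average of confirmed parcels. Attributes are numeric vectors (e.g. scores
    for function, accessibility, landscape control); similarity is taken as
    1 / (1 + Euclidean distance) in attribute space."""
    p = np.asarray(pending_attrs, float)
    C = np.asarray(confirmed_attrs, float)
    w = 1.0 / (1.0 + np.linalg.norm(C - p, axis=1))  # similarity weights
    return float(np.dot(w, confirmed_intensity) / w.sum())
```

Planning intentions could then be injected by rescaling the weights or intensities of selected parcels, mirroring the "intervention" parameters of the model.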

  5. When the exception becomes the rule: the disappearance of limiting similarity in the Lotka-Volterra model.

    Science.gov (United States)

    Barabás, György; Meszéna, Géza

    2009-05-07

    We investigate the transition between limiting similarity and coexistence of a continuum in the competitive Lotka-Volterra model. It is known that there exist exceptional cases in which, contrary to the limiting similarity expectation, all phenotypes coexist along a trait axis. Earlier studies established that the distance between surviving phenotypes is of the order of the niche width 2σ provided that the carrying capacity curve differs significantly enough from the exceptional one. In this paper we studied the outcome of competition for small perturbations of the exceptional (Gaussian) carrying capacity. We found that the average distance between the surviving phenotypes goes to zero as the perturbation vanishes. The number of coexisting species in equilibrium is proportional to the negative logarithm of the perturbation. Nevertheless, the niche width provides a good order of magnitude for the distance between survivors if the perturbations are larger than 10%. Therefore, we conclude that limiting similarity is a good framework for biological thinking despite the lack of an absolute lower bound on similarity.
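    The setting described above can be explored numerically: phenotypes on a trait axis compete through a Gaussian kernel while growing toward a perturbed Gaussian carrying capacity. The sketch below is illustrative (the grid size, perturbation form, integration scheme and abundance threshold are arbitrary choices, not the authors' exact setup):

```python
import numpy as np

def lv_survivors(n=41, sigma=1.0, perturb=0.1, steps=20000, dt=0.01):
    """Integrate dN_i/dt = N_i * (K_i - sum_j a_ij N_j) / K_i for n phenotypes
    on a trait axis with Gaussian competition kernel of width sigma and a
    perturbed Gaussian carrying capacity. Returns the number of phenotypes
    whose abundance stays above a small threshold."""
    x = np.linspace(-3, 3, n)
    a = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
    K = np.exp(-x ** 2 / 8.0) * (1 + perturb * np.cos(5 * x))
    N = np.full(n, 0.01)
    for _ in range(steps):
        N += dt * N * (K - a @ N) / K  # forward-Euler step of the LV dynamics
        N = np.maximum(N, 0.0)
    return int(np.sum(N > 1e-3))
```

Rerunning with a smaller `perturb` should yield more tightly packed survivors, consistent with the paper's finding that the survivor spacing shrinks as the perturbation vanishes.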

  6. Using Dark Matter as a Guide to extend Standard Model: Dirac Similarity Principle and Minimum Higgs Hypothesis

    CERN Document Server

    Hwang, W-Y Pauchy

    2011-01-01

    We introduce the "Dirac similarity principle", which states that only those point-like Dirac particles that can interact with the Dirac electron can be observed, such as in the Standard Model. We emphasize that the existing world of the Standard Model is a Dirac world satisfying the Dirac similarity principle, and believe that the immediate extension of the Standard Model will remain so. On the other hand, we have been looking for Higgs particles for the last forty years but something is yet to be found. This leads naturally to the "minimum Higgs hypothesis". We now know firmly that neutrinos have tiny masses, but in the minimal Standard Model there are no natural sources for such tiny masses. If nothing else, this could be taken as a signature of the existence of the extra heavy $Z^{\\prime 0}$, since it requires an extra Higgs field that would help generate the tiny neutrino masses. Alternatively, we may have missed the right-hand sector for some reason. A simplified version of the left-righ...

  7. Modeling soil water content for vegetation modeling improvement

    Science.gov (United States)

    Cianfrani, Carmen; Buri, Aline; Zingg, Barbara; Vittoz, Pascal; Verrecchia, Eric; Guisan, Antoine

    2016-04-01

    Soil water content (SWC) is known to be important for plants as it affects the physiological processes regulating plant growth. Therefore, SWC controls plant distribution over the Earth's surface, ranging from deserts and grassland to rain forests. Unfortunately, only few data on SWC are available, as its measurement is very time consuming and costly and needs specific laboratory tools. The scarcity of SWC measurements in geographic space makes it difficult to model and spatially project SWC over larger areas. In particular, it prevents its inclusion as a predictor in plant species distribution models (SDMs). The aims of this study were, first, to test a new methodology for overcoming the scarcity of SWC measurements and, second, to model and spatially project SWC in order to improve plant SDMs through the inclusion of an SWC parameter. The study was developed in four steps. First, SWC was modeled by measuring it at 10 different pressures (expressed in pF and ranging from pF = 0 to pF = 4.2). The different pF values represent different degrees of soil water availability for plants. An ensemble of bivariate models was built to overcome the problem of having only a few SWC measurements (n = 24) but several candidate predictors. Soil texture (clay, silt, sand), organic matter (OM), topographic variables (elevation, aspect, convexity), climatic variables (precipitation) and hydrological variables (river distance, NDWI) were used as predictors. Weighted ensemble models were built using only the bivariate models with adjusted R2 > 0.5 for each SWC at different pF. The second step consisted in running plant SDMs including the modeled SWC jointly with the conventional topo-climatic variables used for plant SDMs. Third, SDMs were run using only the conventional topo-climatic variables. Finally, comparing the models obtained in the second and third steps allowed assessing the additional predictive power of SWC in plant SDMs. SWC ensemble models remained very good, with
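    The weighted ensemble of bivariate models described in this record can be sketched as follows; the least-squares fitting and the use of adjusted R2 both as the inclusion filter and as the prediction weight are a plausible reading of the procedure, not the authors' exact implementation:

```python
import numpy as np
from itertools import combinations

def bivariate_ensemble(X, y, r2_min=0.5):
    """Fit one linear model per pair of predictors; keep models with
    adjusted R^2 > r2_min. Returns (weight, predictor pair, coefficients)."""
    n, p = X.shape
    models = []
    for i, j in combinations(range(p), 2):
        A = np.column_stack([np.ones(n), X[:, i], X[:, j]])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        pred = A @ coef
        r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
        adj = 1 - (1 - r2) * (n - 1) / (n - 3)  # 2 predictors + intercept
        if adj > r2_min:
            models.append((adj, (i, j), coef))
    return models

def ensemble_predict(models, Xnew):
    """Average the retained models' predictions, weighted by adjusted R^2."""
    w = np.array([m[0] for m in models])
    preds = np.array([np.column_stack([np.ones(len(Xnew)), Xnew[:, i], Xnew[:, j]]) @ c
                      for _, (i, j), c in models])
    return (w[:, None] * preds).sum(axis=0) / w.sum()
```

With n = 24 samples, as in the study, restricting each model to two predictors keeps the per-model parameter count low and makes over-fitting less likely than a single model with all predictors.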

  8. A conceptual model for manufacturing performance improvement

    Directory of Open Access Journals (Sweden)

    M.A. Karim

    2009-07-01

    Full Text Available Purpose: Important performance objectives that manufacturers seek can be achieved through adopting the appropriate manufacturing practices. This paper presents a conceptual model proposing relationships between advanced quality practices, perceived manufacturing difficulties and manufacturing performance. Design/methodology/approach: A survey-based approach was adopted to test the hypotheses proposed in this study. The selection of research instruments for inclusion in this survey was based on a literature review, the pilot case studies and the relevant industrial experience of the author. A sample of 1000 manufacturers across Australia was randomly selected. Quality managers were requested to complete the questionnaire, as dealing with quality and reliability issues is a quality manager's major responsibility. Findings: Evidence indicates that product quality and reliability is the main competitive factor for manufacturers. Design and manufacturing capability and on-time delivery came second. Price is considered the least important factor by the Australian manufacturers. Results show that, collectively, the advanced quality practices proposed in this study neutralize the difficulties manufacturers face and contribute to most of the manufacturers' performance objectives. The companies that put more emphasis on the advanced quality practices have fewer problems in manufacturing and better performance on most manufacturing performance indices. The results validate the proposed conceptual model and lend credence to the hypothesized relationship between quality practices, manufacturing difficulties and manufacturing performance. Practical implications: The model shown in this paper provides a simple yet highly effective approach to achieving significant improvements in product quality and manufacturing performance. This study introduces a relationship-based 'proactive' quality management approach and provides great

  9. An improved nuclear mass model: FRDM (2012)

    Science.gov (United States)

    Moller, Peter

    2011-10-01

    We have developed an improved nuclear mass model, which we plan to finalize in 2012, so we designate it FRDM(2012). Relative to our previous mass table in 1995 we perform a full four-dimensional variation of the shape coordinates EPS2, EPS3, EPS4, and EPS6, consider axially asymmetric shape degrees of freedom, and vary the density symmetry parameter L. Other additional features are also implemented. With respect to the Audi 2003 database we now obtain an accuracy of 0.57 MeV. We have carefully tested the extrapolation properties of the new mass table by adjusting model parameters to limited data sets and testing on extended data sets, and find it highly reliable in new regions of nuclei. We discuss what the remaining differences between model calculations and experiment tell us about the limitations of the currently used effective single-particle potential and possible extensions. DOE No. DE-AC52-06NA25396.

  10. General relativistic self-similar waves that induce an anomalous acceleration into the standard model of cosmology

    CERN Document Server

    Smoller, Joel

    2012-01-01

    We prove that the Einstein equations in Standard Schwarzschild Coordinates close to form a system of three ordinary differential equations for a family of spherically symmetric, self-similar expansion waves, and the critical ($k=0$) Friedmann universe associated with the pure radiation phase of the Standard Model of Cosmology (FRW), is embedded as a single point in this family. Removing a scaling law and imposing regularity at the center, we prove that the family reduces to an implicitly defined one parameter family of distinct spacetimes determined by the value of a new {\\it acceleration parameter} $a$, such that $a=1$ corresponds to FRW. We prove that all self-similar spacetimes in the family are distinct from the non-critical $k\

  11. Results of Roux-en-Y gastric bypass in morbidly obese vs superobese patients: similar body weight loss, correction of comorbidities, and improvement of quality of life.

    Science.gov (United States)

    Suter, Michel; Calmes, Jean-Marie; Paroz, Alexandre; Romy, Sébastien; Giusti, Vittorio

    2009-04-01

    Gastric bypass corrects comorbidities and quality of life similarly in superobese (SO) and morbidly obese (MO) patients despite higher residual weight in SO patients. Prospective cohort study comparing results of primary laparoscopic gastric bypass in MO and SO patients. University hospital and community hospital with common bariatric programs. A total of 492 MO and 133 SO patients treated consecutively between January 1, 1999, and June 30, 2006. Primary laparoscopic Roux-en-Y gastric bypass. Operative morbidity, weight loss, residual body mass index (BMI) (calculated as weight in kilograms divided by height in meters squared), evolution of comorbidities, quality of life, and Bariatric Analysis and Reporting Outcome System score. Surgery was longer in SO patients, but operative morbidity was similar. The MO patients lost a maximum of 15 BMI units and maintained an average loss of 13 BMI units after 6 years, compared with 21 and 17 in SO patients, which corresponds to a 30.1% and 30.7% total body weight loss, respectively. After 6 years, the BMI was less than 35 in more than 90% of MO patients but in less than 50% of SO patients. Despite these differences, improvements in quality of life and comorbidities were impressive and similar in both groups. Although many SO patients remain in the severely obese or MO category, equivalent improvements in quality of life and obesity-related comorbidities indicate that weight loss is not all that matters after bariatric surgery.

  12. On hydrologic similarity: A dimensionless flood frequency model using a generalized geomorphologic unit hydrograph and partial area runoff generation

    Science.gov (United States)

    Sivapalan, Murugesu; Wood, Eric F.; Beven, Keith J.

    1993-01-01

    One of the shortcomings of the original theory of the geomorphologic unit hydrograph (GUH) is that it assumes that runoff is generated uniformly from the entire catchment area. It is now recognized that in many catchments much of the runoff during storm events is produced on partial areas which usually form on narrow bands along the stream network. A storm response model that includes runoff generation on partial areas by both Hortonian and Dunne mechanisms was recently developed by the authors. In this paper a methodology for integrating this partial area runoff generation model with the GUH-based runoff routing model is presented; this leads to a generalized GUH. The generalized GUH and the storm response model are then used to estimate physically based flood frequency distributions. In most previous work the initial moisture state of the catchment had been assumed to be constant for all the storms. In this paper we relax this assumption and allow the initial moisture conditions to vary between storms. The resulting flood frequency distributions are cast in a scaled dimensionless framework where issues such as catchment scale and similarity can be conveniently addressed. A number of experiments are performed to study the sensitivity of the flood frequency response to some of the 'similarity' parameters identified in this formulation. The results indicate that one of the most important components of the derived flood frequency model relates to the specification of processes within the runoff generation model; specifically the inclusion of both saturation excess and Horton infiltration excess runoff production mechanisms. The dominance of these mechanisms over different return periods of the flood frequency distribution can significantly affect the distributional shape and confidence limits about the distribution. Comparisons with observed flood distributions seem to indicate that such mixed runoff production mechanisms influence flood distribution shape. The

  13. Ivabradine and metoprolol differentially affect cardiac glucose metabolism despite similar heart rate reduction in a mouse model of dyslipidemia.

    Science.gov (United States)

    Vaillant, Fanny; Lauzier, Benjamin; Ruiz, Matthieu; Shi, Yanfen; Lachance, Dominic; Rivard, Marie-Eve; Bolduc, Virginie; Thorin, Eric; Tardif, Jean-Claude; Des Rosiers, Christine

    2016-10-01

    While heart rate reduction (HRR) is a target for the management of patients with heart disease, contradictory results were reported using ivabradine, which selectively inhibits the pacemaker If current, vs. β-blockers like metoprolol. This study aimed at testing whether similar HRR with ivabradine vs. metoprolol differentially modulates cardiac energy substrate metabolism, a factor determinant for cardiac function, in a mouse model of dyslipidemia (hApoB(+/+);LDLR(-/-)). Following a longitudinal study design, we used 3- and 6-mo-old mice, untreated or treated for 3 mo with ivabradine or metoprolol. Cardiac function was evaluated in vivo and ex vivo in working hearts perfused with (13)C-labeled substrates to assess substrate fluxes through energy metabolic pathways. Compared with 3-mo-old, 6-mo-old dyslipidemic mice had similar cardiac hemodynamics in vivo but impaired (P ivabradine-treated hearts displayed significantly higher stroke volume values and glycolysis vs. their metoprolol-treated counterparts ex vivo, values for the ivabradine group being often not significantly different from 3-mo-old mice. Further analyses highlighted additional significant cardiac alterations with disease progression, namely in the total tissue level of proteins modified by O-linked N-acetylglucosamine (O-GlcNAc), whose formation is governed by glucose metabolism via the hexosamine biosynthetic pathway, which showed a similar pattern with ivabradine vs. metoprolol treatment. Collectively, our results emphasize the implication of alterations in cardiac glucose metabolism and signaling linked to disease progression in our mouse model. Despite similar HRR, ivabradine, but not metoprolol, preserved cardiac function and glucose metabolism during disease progression.

  14. Improvements to type Ia supernova models

    Science.gov (United States)

    Saunders, Clare M.

    Type Ia Supernovae provided the first strong evidence of dark energy and are still an important tool for measuring the accelerated expansion of the universe. However, future improvements will be limited by systematic uncertainties in our use of Type Ia supernovae as standard candles. Using Type Ia supernovae for cosmology relies on our ability to standardize their absolute magnitudes, but this relies on imperfect models of supernova spectra time series. This thesis is focused on using data from the Nearby Supernova Factory both to understand current sources of uncertainty in standardizing Type Ia supernovae and to develop techniques that can be used to limit uncertainty in future analyses. (Abstract shortened by ProQuest.).

  15. A comprehensive track model for the improvement of corrugation models

    Science.gov (United States)

    Gómez, J.; Vadillo, E. G.; Santamaría, J.

    2006-06-01

    This paper presents a detailed model of the railway track based on wave propagation, suitable for corrugation studies. The model analyses both the vertical and the transverse dynamics of the track. Using the finite strip method (FSM), only the cross-section of the rail must be meshed, so it is not necessary to discretise a whole span in 3D. The model takes into account the discrete nature of the support, introducing concepts from the theory of periodic structures into the formulation. Wave superposition is enriched by taking into account the contribution of residual vectors. In this way, the model obtains accurate results when a finite section of railway track is considered. Results for the infinite track have been compared against those presented by Gry and Müller. Aside from the improvements provided by the model presented in this paper, which Gry's and Müller's models do not contemplate, the results of the comparison prove satisfactory. The calculated receptances are then compared against experimental values obtained by the authors, demonstrating a fair degree of agreement. Finally, these receptances are used within a linear model of corrugation developed by the authors.

  16. Predictive modeling by the cerebellum improves proprioception.

    Science.gov (United States)

    Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J

    2013-09-04

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.

  17. Robust Multiscale Modelling Of Two-Phase Steels On Heterogeneous Hardware Infrastructures By Using Statistically Similar Representative Volume Element

    Directory of Open Access Journals (Sweden)

    Rauch Ł.

    2015-09-01

    Full Text Available The coupled finite element multiscale simulations (FE2) require costly numerical procedures in both macro and micro scales. Attempts to improve numerical efficiency focus mainly on two areas of development, i.e. parallelization/distribution of numerical procedures and simplification of the virtual material representation. One idea that combines both areas is the Statistically Similar Representative Volume Element (SSRVE). It aims at reducing the number of finite elements in the micro scale as well as at parallelizing the micro-scale calculations, which can be performed without barriers. The simplification of the computational domain is realized by transforming sophisticated images of the material microstructure into artificially created simple objects characterized by features similar to those of their original equivalents. In existing solutions for two-phase steels the SSRVE is created on the basis of an analysis of the shape coefficients of the hard phase in the real microstructure, followed by a search for a representative simple structure with similar shape coefficients. Optimization techniques were used to solve this task. In the present paper local strains and stresses are added to the cost function in the optimization. Various forms of the objective function composed of different elements were investigated and used in the optimization procedure for the creation of the final SSRVE. The results are compared with respect to the efficiency of the procedure and the uniqueness of the solution. The best objective function, composed of shape coefficients as well as of strains and stresses, is proposed. Examples of SSRVEs determined for the investigated two-phase steel using that objective function are demonstrated in the paper. Each step of SSRVE creation is examined from the point of view of computational efficiency. The proposition of implementation of the whole computational procedure on modern High Performance Computing (HPC

  18. Impact of improved snowmelt modelling in a monthly hydrological model.

    Science.gov (United States)

    Folton, Nathalie; Garcia, Florine

    2016-04-01

    The quantification and the management of water resources at the regional scale require hydrological models that are both easy to implement and efficient. To be reliable and robust, these models must be calibrated and validated on a large number of catchments that are representative of various hydro-meteorological conditions, physiographic contexts, and specific hydrological behavior (e.g. mountainous catchments). The GRLoiEau monthly model, with its simple structure and its two free parameters, answers our need for such a simple model. It required the development of a snow routine to model catchments with temporarily snow-covered areas. The snow routine developed here does not claim to represent physical snowmelt processes but rather to simulate them globally over the catchment. The snowmelt equation is based on the degree-day method, which is widely used by the hydrological community, in particular in engineering studies (Etchevers 2000). A potential snowmelt (Schaefli et al. 2005) was computed, and the parameters of the snow routine were regionalized for each mountain area. The parsimonious GRLoiEau structure requires meteorological data, which come from the distributed mesoscale atmospheric analysis system SAFRAN, providing estimates of daily solid and liquid precipitation and temperature on a regular square grid at a spatial resolution of 8*8 km² throughout France. Potential evapotranspiration was estimated using the formula by Oudin et al. (2005). The aim of this study is to improve the quality of monthly simulations for ungauged basins, in particular for all types of mountain catchments, without increasing the number of free parameters of the model. By using daily SAFRAN data, the production store and snowmelt can be run at a daily time scale. The question then arises whether simulating the monthly flows using a production function at a finer time step would improve the results.
And by using the SAFRAN distributed climate series, a distributed approach
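The degree-day method mentioned above can be sketched in a few lines. The following is a minimal illustration, not the GRLoiEau snow routine itself; the degree-day factor, threshold temperatures and test data are assumed values chosen for clarity.

```python
# Minimal degree-day snowmelt sketch (illustrative; not the GRLoiEau routine).

def degree_day_melt(temps_c, precip_mm, ddf=3.0, t_melt=0.0, t_snow=0.0):
    """Track a snowpack and return daily melt (mm/day).

    temps_c   : daily mean air temperatures (deg C)
    precip_mm : daily precipitation (mm); accumulates as snow when T <= t_snow
    ddf       : degree-day factor (mm / deg C / day), an assumed value
    """
    snowpack = 0.0
    melt_series = []
    for t, p in zip(temps_c, precip_mm):
        if t <= t_snow:                              # precipitation falls as snow
            snowpack += p
        potential = ddf * max(t - t_melt, 0.0)       # potential melt above threshold
        melt = min(potential, snowpack)              # limited by available snow storage
        snowpack -= melt
        melt_series.append(melt)
    return melt_series

# cold snowy day, then two warming days
melt = degree_day_melt([-2.0, 1.0, 5.0], [10.0, 0.0, 0.0])
```

Running the snow routine at the daily step and aggregating melt to monthly totals is one way to feed a monthly production function, as discussed above.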

  19. How can model comparison help improving species distribution models?

    Science.gov (United States)

    Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle

    2013-01-01

    Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be assessed if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could however be improved by integrating a more realistic representation of the species' resistance to water stress, for instance, arguing for continued efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes.

  20. Improving models for urban soundscape systems

    Directory of Open Access Journals (Sweden)

    Lawrence Harvey

    2013-12-01

    Full Text Available Large-scale urban soundscape systems offer novel environments for electroacoustic composers, sound artists and sound designers to extend their practice beyond concert halls, art galleries and screen-based digital media. One such system with 156 loudspeakers was installed in 1991 on the Southgate Arts and Leisure Precinct in central Melbourne. Over the next 15 years another three large multichannel soundscape systems were installed on other sites close to the first. A fifth system was established for a single work of art in 2006. Despite this private and public investment in sound art estimated at over one million Australian dollars, several systems are no longer in operation while some remaining systems require technical and curatorial development to ensure their continued cultural presence. To investigate why some systems had failed, interviews were conducted with key players in the development and operation of the five systems. A report from the interviews was produced and is the basis of this paper framing critical issues for improving models of urban soundscape practice. Following a brief overview of related studies in urban sound practices, and descriptions of the system and original study, key themes that emerged from the interviews are examined.

  1. Simplification and shift in cognition of political difference: applying the geometric modeling to the analysis of semantic similarity judgment.

    Science.gov (United States)

    Kato, Junko; Okada, Kensuke

    2011-01-01

    Perceiving differences by means of spatial analogies is intrinsic to human cognition. Multi-dimensional scaling (MDS) analysis based on Minkowski geometry has been used primarily on data on sensory similarity judgments, leaving judgments on abstractive differences unanalyzed. Indeed, analysts have failed to find appropriate experimental or real-life data in this regard. Our MDS analysis used survey data on political scientists' judgments of the similarities and differences between political positions expressed in terms of distance. Both distance smoothing and majorization techniques were applied to a three-way dataset of similarity judgments provided by at least seven experts on at least five parties' positions on at least seven policies (i.e., originally yielding 245 dimensions) to substantially reduce the risk of local minima. The analysis found two dimensions, which were sufficient for mapping differences, and fit the city-block dimensions better than the Euclidean metric in all datasets obtained from 13 countries. Most city-block dimensions were highly correlated with the simplified criterion (i.e., the left-right ideology) for differences that is actually used in real politics. The isometry of the city-block and dominance metrics in two-dimensional space carries further implications. More specifically, individuals may pay attention to two dimensions (if represented in the city-block metric) or focus on a single dimension (if represented in the dominance metric) when judging differences between the same objects. Switching between metrics may be expected to occur during cognitive processing as frequently as the apparent discontinuities and shifts in human attention that may underlie changing judgments in real situations occur. Consequently, the results lend strong support to the validity of geometric models for representing an important social cognition, namely that of political differences, which is deeply rooted in human nature.
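The isometry between the city-block and dominance (Chebyshev) metrics in two dimensions, which underpins the attention-switching interpretation above, can be verified numerically: rotating and scaling the plane via (x, y) → (x + y, x − y) turns city-block distances into Chebyshev distances. The points below are arbitrary illustrative data, not the survey data of the study.

```python
import numpy as np
from scipy.spatial.distance import pdist

# arbitrary points in a 2-D judgment space
pts = np.array([[0.0, 0.0], [1.0, 2.0], [-1.5, 0.5], [2.0, -1.0]])
d_city = pdist(pts, metric="cityblock")

# the 45-degree rotation/scaling (x, y) -> (x + y, x - y)
rot = pts @ np.array([[1.0, 1.0], [1.0, -1.0]])
d_dom = pdist(rot, metric="chebyshev")

# city-block distances of the originals equal dominance distances of the images
assert np.allclose(d_city, d_dom)
```

This identity holds only in two dimensions, which is consistent with the paper's focus on two-dimensional configurations.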

  2. Hematological changes as prognostic indicators of survival: similarities between Gottingen minipigs, humans, and other large animal models.

    Directory of Open Access Journals (Sweden)

    Maria Moroni

    Full Text Available BACKGROUND: The animal efficacy rule addressing development of drugs for selected disease categories has pointed out the need to develop alternative large animal models. Based on this rule, the pathophysiology of the disease in the animal model must be well characterized and must reflect that in humans. So far, manifestations of the acute radiation syndrome (ARS have been extensively studied only in two large animal models, the non-human primate (NHP and the canine. We are evaluating the suitability of the minipig as an additional large animal model for development of radiation countermeasures. We have previously shown that the Gottingen minipig manifests hematopoietic ARS phases and symptoms similar to those observed in canines, NHPs, and humans. PRINCIPAL FINDINGS: We establish here the LD50/30 dose (radiation dose at which 50% of the animals succumb within 30 days, and show that at this dose the time of nadir and the duration of cytopenia resemble those observed for NHP and canines, and mimic closely the kinetics of blood cell depletion and recovery in human patients with reversible hematopoietic damage (H3 category, METREPOL approach. No signs of GI damage in terms of diarrhea or shortening of villi were observed at doses up to 1.9 Gy. Platelet counts at days 10 and 14, number of days to reach critical platelet values, duration of thrombocytopenia, neutrophil stress response at 3 hours and count at 14 days, and CRP-to-platelet ratio were correlated with survival. The ratios between neutrophils, lymphocytes and platelets were significantly correlated with exposure to irradiation at different time intervals. SIGNIFICANCE: As a non-rodent animal model, the minipig offers a useful alternative to NHP and canines, with attractive features including ARS resembling human ARS, cost, and regulatory acceptability. Use of the minipig may allow accelerated development of radiation countermeasures.

  3. Voxel inversion of airborne electromagnetic data for improved model integration

    Science.gov (United States)

    Fiandaca, Gianluca; Auken, Esben; Kirkegaard, Casper; Vest Christiansen, Anders

    2014-05-01

    spatially constrained 1D models with 29 layers. For comparison, the SCI inversion models have been gridded on the same grid as the voxel inversion. The new voxel inversion and the classic SCI give similar data fits and inversion models. The voxel inversion decouples the geophysical model from the positions of the acquired data, and at the same time fits the data as well as the classic SCI inversion. Compared to the classic approach, the voxel inversion is better suited for directly informing (hydro)geological models and for sequential/joint/coupled (hydro)geological inversion. We believe that this new approach will facilitate the integration of geophysics, geology and hydrology for improved groundwater and environmental management.

  4. Dynamic Service Negotiation Model Based on Interval Similarity

    Institute of Scientific and Technical Information of China (English)

    冯秀珍; 武高峰

    2011-01-01

    To deal with asymmetric information, dynamic environments, and the vagueness and uncertainty of QoS in service negotiation, this paper proposes a dynamic service negotiation model based on interval similarity. This model can be used to forecast the counterpart's negotiation strategies by using interval similarity and interval estimation, and on this basis to formulate optimal counter-strategies. A numerical example shows that the dynamic service negotiation model reflects real negotiation behavior more closely than a static model, and that it can effectively improve negotiation efficiency in a dynamic service negotiation environment.

  5. Comparative Structural Models of Similarities and Differences between Vehicle and Target in Order to Teach Darwinian Evolution

    Science.gov (United States)

    Marcelos, Maria Fátima; Nagem, Ronaldo L.

    2010-06-01

    Our objective is to contribute to the teaching of Classical Darwinian Evolution by means of a study of analogies and metaphors. Throughout the history of knowledge about Evolution and in Science teaching, tree structures have been used as analogs to refer to Evolution, such as by Darwin in the Tree of Life passage contained in On The Origin of Species (1859). We analyze the analogies and metaphors found in the Darwinian text the Tree of Life and propose Comparative Structural Models of Similarities and Differences between the vehicle and target, considering the viability of their use in teaching Sciences. Our foundation is the Theory of Conceptual Metaphor by Lakoff and Johnson (1980) and the Methodology of Teaching with Analogies (MECA) by Nagem et al. (2001). The analogies and metaphors were classified and analyzed and the similarities and differences were highlighted. We found conceptual metaphors in the text. The analogies and metaphors in the Tree of Life are complex and appropriate for didactic use, but require an adequate methodological approach.

  6. Similar pattern of peripheral neuropathy in mouse models of type 1 diabetes and Alzheimer’s disease

    Science.gov (United States)

    Jolivalt, Corinne G.; Calcutt, Nigel A.; Masliah, Eliezer

    2011-01-01

    There is an increasing awareness that diabetes has an impact on the CNS and that diabetes is a risk factor for Alzheimer's disease (AD). Links between AD and diabetes point to impaired insulin signaling as a common mechanism leading to defects in the brain. However, diabetes is predominantly characterized by peripheral, rather than central, neuropathy and despite the common central mechanisms linking AD and diabetes, little is known about the effect of AD on the PNS. In this study, we compared indices of peripheral neuropathy and investigated insulin signaling in the sciatic nerve of insulin-deficient mice and APP overexpressing transgenic mice. Insulin-deficient and APP transgenic mice displayed similar patterns of peripheral neuropathy, with decreased motor nerve conduction velocity, thermal hypoalgesia and loss of tactile sensitivity. Phosphorylation of the insulin receptor and GSK3β were similarly affected in insulin-deficient and APP transgenic mice despite significantly different blood glucose and plasma insulin levels, and nerves from both models showed accumulation of Aβ-immunoreactive protein. Although diabetes and AD have different primary etiologies, both diseases share many abnormalities in both the brain and the PNS. Our data point to common deficits in the insulin-signaling pathway in both neurodegenerative diseases and support the idea that AD may cause disorders outside the higher CNS. PMID:22178988

  7. Clustering by Pattern Similarity

    Institute of Scientific and Technical Information of China (English)

    Hai-xun Wang; Jian Pei

    2008-01-01

    The task of clustering is to identify classes of similar objects among a set of objects. The definition of similarity varies from one clustering model to another. However, in most of these models the concept of similarity is often based on such metrics as Manhattan distance, Euclidean distance or other Lp distances. In other words, similar objects must have close values in at least a set of dimensions. In this paper, we explore a more general type of similarity. Under the pCluster model we proposed, two objects are similar if they exhibit a coherent pattern on a subset of dimensions. The new similarity concept models a wide range of applications. For instance, in DNA microarray analysis, the expression levels of two genes may rise and fall synchronously in response to a set of environmental stimuli. Although the magnitude of their expression levels may not be close, the patterns they exhibit can be very much alike. Discovery of such clusters of genes is essential in revealing significant connections in gene regulatory networks. E-commerce applications, such as collaborative filtering, can also benefit from the new model, because it is able to capture not only the closeness of values of certain leading indicators but also the closeness of (purchasing, browsing, etc.) patterns exhibited by the customers. In addition to the novel similarity model, this paper also introduces an effective and efficient algorithm to detect such clusters, and we perform tests on several real and synthetic data sets to show its performance.
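The pCluster criterion described above can be stated compactly: two objects form a pCluster on a set of columns if every 2x2 submatrix [[a, b], [c, d]] drawn from their rows satisfies |(a − b) − (c − d)| ≤ δ. The brute-force check below is only an illustration of the similarity concept, not the paper's efficient detection algorithm; the gene-expression values and threshold are made up.

```python
from itertools import combinations

def p_score(block_2x2):
    """pScore of a 2x2 block [[a, b], [c, d]] = |(a - b) - (c - d)|."""
    (a, b), (c, d) = block_2x2
    return abs((a - b) - (c - d))

def is_pcluster(rows, delta):
    """True if every 2x2 submatrix formed from `rows` has pScore <= delta."""
    for r1, r2 in combinations(rows, 2):
        for c1, c2 in combinations(range(len(r1)), 2):
            if p_score([[r1[c1], r1[c2]], [r2[c1], r2[c2]]]) > delta:
                return False
    return True

# two "genes" whose expression rises and falls synchronously:
g1 = [10, 50, 30, 70]
g2 = [110, 150, 130, 170]   # same pattern as g1, offset by 100
assert is_pcluster([g1, g2], delta=0)
```

Note that the Euclidean distance between g1 and g2 is large, yet their pScore is zero on every column pair, which is exactly the coherent-pattern notion of similarity the model captures.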

  8. Sprint interval and traditional endurance training induce similar improvements in peripheral arterial stiffness and flow-mediated dilation in healthy humans.

    Science.gov (United States)

    Rakobowchuk, Mark; Tanguay, Sophie; Burgomaster, Kirsten A; Howarth, Krista R; Gibala, Martin J; MacDonald, Maureen J

    2008-07-01

    Low-volume sprint interval training (SIT), or repeated sessions of brief, intense intermittent exercise, elicits metabolic adaptations that resemble traditional high-volume endurance training (ET). The effects of these different forms of exercise training on vascular structure and function remain largely unexplored. To test the hypothesis that SIT and ET would similarly improve peripheral artery distensibility and endothelial function and central artery distensibility, we recruited 20 healthy untrained subjects (age: 23.3 +/- 2.8 yr) and had them perform 6 wk of SIT or ET (n = 5 men and 5 women per group). The SIT group completed four to six 30-s "all-out" Wingate tests separated by 4.5 min of recovery 3 days/wk. The ET group completed 40-60 min of cycling at 65% of their peak oxygen uptake (Vo2peak) 5 days/wk. Popliteal endothelial function, both relative and normalized to shear stimulus, was improved after training in both groups (main effect for time, P < 0.05). Carotid artery distensibility was not statistically altered by training (P = 0.29) in either group; however, popliteal artery distensibility was improved in both groups to the same degree (main effect, P < 0.05). We conclude that SIT is a time-efficient strategy to elicit improvements in peripheral vascular structure and function that are comparable to ET. However, alterations in central artery distensibility may require a longer training stimulus and/or greater initial vascular stiffness than observed in this group of healthy subjects.

  9. Improved discriminative training for generative model

    Institute of Scientific and Technical Information of China (English)

    WU Ya-hui; GUO Jun; LIU Gang

    2009-01-01

    This article proposes a model combination method to enhance the discriminability of the generative model. Generative and discriminative models have different optimization objectives and have their own advantages and drawbacks. The method proposed in this article intends to strike a balance between the two models mentioned above. It extracts the discriminative parameter from the generative model and generates a new model based on a multi-model combination. The weight for combining is determined by the ratio of the inter-variance to the intra-variance of the classes. The higher the ratio is, the greater the weight is, and the more discriminative the model will be. Experiments on speech recognition demonstrate that the performance of the new model outperforms the model trained with the traditional generative method.
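The combining weight described above depends on the ratio of inter-class to intra-class variance. A Fisher-style formulation of that ratio for a one-dimensional feature can be computed directly; this is an assumed formulation for illustration, not necessarily the exact definition used by the authors, and the feature values are made-up test data.

```python
import numpy as np

def variance_ratio(features_by_class):
    """Inter-class variance of class means over mean intra-class variance.

    An assumed Fisher-style discriminability measure: the higher the ratio,
    the better the feature separates the classes, and the larger the weight
    such a feature would receive in the model combination.
    """
    means = np.array([np.mean(c) for c in features_by_class])
    inter = np.var(means)                         # variance of the class means
    intra = np.mean([np.var(c) for c in features_by_class])
    return inter / intra

well_separated = [[1.0, 1.1, 0.9], [5.0, 5.1, 4.9]]   # classes far apart, tight
overlapping    = [[1.0, 2.0, 3.0], [1.5, 2.5, 3.5]]   # classes largely overlap
assert variance_ratio(well_separated) > variance_ratio(overlapping)
```

A bounded combining weight could then be derived, e.g. as ratio / (1 + ratio), so that highly discriminative parameters dominate the combined model.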

  10. Modeling and simulation of self-similar traffic based on the FBM model

    Institute of Scientific and Technical Information of China (English)

    卢颖; 裴承艳; 陈子辰; 康凤举

    2011-01-01

    Network traffic models are an important basis of network planning and performance evaluation. Conventional models are mostly based on the Poisson model and Markov queueing models, which exhibit only short-range dependence. With the continuing study of network services, it has been found that actual network traffic exhibits long-range dependence (LRD) over very long time scales, i.e., a kind of self-similarity. In this paper, the RMD algorithm and a Fourier-transform method are adopted to simulate and analyze the FBM model, a self-similar traffic model, generating the required self-similar traffic sequences. The R/S method and the variance-time method are then used to estimate the Hurst value of the generated traffic sequences in order to verify their self-similarity. The existence of self-similarity is confirmed by experiment, and the advantages and disadvantages of the RMD algorithm and the Fourier-transform method are analyzed.
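The R/S (rescaled range) check mentioned above estimates the Hurst exponent H as the slope of log(R/S) against log(n) over a range of window sizes n; self-similar LRD traffic gives H > 0.5, while uncorrelated traffic gives H ≈ 0.5. The sketch below is a minimal naive implementation (window sizes and the white-noise test signal are illustrative; small-sample R/S estimates are known to be biased slightly upward).

```python
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent of `series` by the naive R/S method."""
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        # non-overlapping windows of length n
        for start in range(0, len(series) - n + 1, n):
            seg = series[start:start + n]
            z = np.cumsum(seg - seg.mean())   # cumulative deviation from the mean
            r = z.max() - z.min()             # range of the cumulative deviation
            s = seg.std()                     # standard deviation of the window
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)   # slope ~ Hurst exponent
    return slope

rng = np.random.default_rng(0)
# white noise has no long-range dependence: theory says H = 0.5
h_white = hurst_rs(rng.standard_normal(4096))
```

A sequence generated by RMD or the Fourier method with a target H of, say, 0.8 should yield an estimate noticeably above the white-noise value.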

  11. An Application of the Stereoscopic Self-Similar-Expansion Model to the Determination of CME-Driven Shock Parameters

    CERN Document Server

    Volpes, L

    2015-01-01

    We present an application of the stereoscopic self-similar-expansion model (SSSEM) to Solar Terrestrial Relations Observatory (STEREO)/Sun-Earth Connection Coronal and Heliospheric Investigation (SECCHI) observations of the 03 April 2010 CME and its associated shock. The aim is to verify whether CME-driven shock parameters can be inferred from the analysis of j-maps. For this purpose we use the SSSEM to derive the CME and the shock kinematics. Arrival times and speeds, inferred assuming either propagation at constant speed or with uniform deceleration, show good agreement with Advanced Composition Explorer (ACE) measurements. The shock standoff distance Δ, the density compression ρd/ρu, and the Mach number M are calculated by combining the results obtained for the CME and shock kinematics with models for the shock location. Their values are extrapolated to L1 and compared to in-situ data. The in-situ standoff distance is obtained from ACE solar-wind measurements, and t...

  12. IMPROVED GENETIC ALGORITHM BASED ON INDIVIDUAL SIMILARITY EVALUATION STRATEGY

    Institute of Scientific and Technical Information of China (English)

    汤可宗; 张彤; 罗立民

    2016-01-01

    Genetic algorithms search for optimal solutions by simulating the process of natural evolution, but finding the optimum typically comes at the cost of long computation times. This paper presents an improved genetic algorithm based on an individual similarity evaluation strategy, incorporating a new rotation crossover operator. Each offspring is assigned a fitness value according to its similarity to, and the reliability of, its parents; the true fitness of an individual is evaluated only when the reliability value falls below a threshold. Experimental results show that the fitness values estimated by the similarity evaluation strategy are close to the true ones, and that the number of evaluations required to find the optimal solution with the improved genetic algorithm is significantly smaller than with the traditional genetic algorithm. Benchmark results further show that both the performance of the proposed algorithm and the quality of the solutions it finds compare favorably with the traditional genetic algorithm.

  13. Improved engine wall models for Large Eddy Simulation (LES)

    Science.gov (United States)

    Plengsaard, Chalearmpol

    Improved wall models for Large Eddy Simulation (LES) are presented in this research. The classical Werner-Wengle (WW) wall shear stress model is used along with near-wall sub-grid scale viscosity. A sub-grid scale turbulent kinetic energy is employed in a model for the eddy viscosity. To gain better heat flux results, a modified classical variable-density wall heat transfer model is also used. Because no experimental wall shear stress results are available in engines, fully developed turbulent flow in a square duct is chosen to validate the new wall models. The model constants in the new wall models are set to 0.01 and 0.8, respectively, and are kept constant throughout the investigation. The resulting time- and spatially-averaged velocity and temperature wall functions from the new wall models match well with the law-of-the-wall experimental data at Re = 50,000. In order to study the effect of hot air impinging on walls, jet impingement on a flat plate is also tested with the new wall models. The jet Reynolds number is 21,000, with a fixed jet-to-plate spacing of H/D = 2.0. As predicted by the new wall models, the time-averaged skin friction coefficient agrees well with experimental data, while the computed Nusselt number agrees fairly well when r/D > 2.0. Additionally, the model is validated using experimental data from a Caterpillar engine operated with conventional diesel combustion. Sixteen different operating engine conditions are simulated. The majority of the predicted heat flux results from each thermocouple location follow similar trends when compared with experimental data. The magnitude of peak heat fluxes as predicted by the new wall models is in the range of typical measured values in diesel combustion, while most heat flux results from previous LES wall models are over-predicted. The new wall models generate more accurate predictions and agree better with experimental data.

  14. An Improved Scalar Costa Scheme Based on Watson Perceptual Model

    Institute of Scientific and Technical Information of China (English)

    QI Kai-yue; CHEN Jian-bo; ZHOU Yi

    2008-01-01

    An improved scalar Costa scheme (SCS) was proposed that uses an improved Watson perceptual model to adaptively determine the quantization step size and scaling factor. The improved scheme is equivalent to embedding the hidden data based on the actual image. In order to withstand amplitude scaling attacks, the Watson perceptual model was redefined, and with the new definition the improved scheme ensures that the quantization step size at the decoder is proportional to the amplitude scaling attack factor. The performance of the improved scheme outperforms that of SCS with a fixed quantization step size. The improved scheme combines information theory and a visual model.
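The core SCS operation is a dithered quantization of each host sample toward the coset of the bit being embedded, followed by a partial move (the scaling factor) toward the quantized value. The sketch below shows that operation with a fixed step size rather than one derived from the Watson model; the step size, scaling factor, key and sample values are illustrative assumptions.

```python
import numpy as np

def scs_embed(x, bit, step, alpha=0.7, key=0.0):
    """Embed one bit into sample x by SCS-style dithered quantization.

    alpha is the scaling (distortion-compensation) factor; key is a
    secret dither offset. All values here are illustrative.
    """
    dither = step * (bit / 2.0 + key)
    q = step * np.round((x - dither) / step) + dither   # nearest coset point
    return x + alpha * (q - x)                          # move partway toward it

def scs_detect(y, step, key=0.0):
    """Decode the bit whose quantizer coset lies closest to y."""
    errs = []
    for bit in (0, 1):
        dither = step * (bit / 2.0 + key)
        q = step * np.round((y - dither) / step) + dither
        errs.append(abs(y - q))
    return int(np.argmin(errs))

step = 8.0
for bit in (0, 1):
    y = scs_embed(127.3, bit, step)
    assert scs_detect(y, step) == bit   # recoverable in the attack-free case
```

The adaptive scheme described in the abstract would replace the fixed `step` with a per-coefficient value computed from the (redefined) Watson perceptual model.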

  15. Integrated geological-engineering model of Patrick Draw field and examples of similarities and differences among various shoreline barrier systems

    Energy Technology Data Exchange (ETDEWEB)

    Schatzinger, R.A.; Szpakiewicz, M.J.; Jackson, S.R.; Chang, M.M.; Sharma, B.; Tham, M.K.; Cheng, A.M.

    1992-04-01

    The Reservoir Assessment and Characterization Research Program at NIPER employs an interdisciplinary approach that focuses on the high priority reservoir class of shoreline barrier deposits to: (1) determine the problems specific to this class of reservoirs by identifying the reservoir heterogeneities that influence the movement and trapping of fluids; and (2) develop methods to characterize effectively this class of reservoirs to predict residual oil saturation (ROS) on interwell scales and improve prediction of the flow patterns of injected and produced fluids. Accurate descriptions of the spatial distribution of critical reservoir parameters (e.g., permeability, porosity, pore geometry, mineralogy, and oil saturation) are essential for designing and implementing processes to improve sweep efficiency and thereby increase oil recovery. The methodologies and models developed in this program will, in the near- to mid-term, assist producers in the implementation of effective reservoir management strategies such as location of infill wells and selection of optimum enhanced oil recovery methods to maximize oil production from their reservoirs.

  16. The Quest for Rheological Similarity in Analogue Models: new Data on the Rheology of Highly Filled Silicone Polymers

    Science.gov (United States)

    Boutelier, D. A.; Schrank, C.; Cruden, A. R.

    2006-12-01

    The selection of appropriate analogue materials is a central consideration in the design of realistic physical models. Hence, information on the rheology of materials and potential materials is essential to evaluate their suitability as rock analogues. Silicone polymers have long been used to model ductile rocks that deform by diffusion or dislocation creep. Temperature and compositional variations that control the effective viscosity and density of rocks in the crust and mantle are simulated in the laboratory by multiple layers of various silicone polymers mixed with granular fillers, plasticines or bouncing putties. Since dislocation creep is a power-law, strain-rate-softening flow mechanism, we have been investigating the rheology of highly filled silicone polymers as potential new analogue materials with similar deformation behavior. These materials do exhibit strain-rate-softening behavior, but with increasing amounts of filler the mixtures also become non-linear. We report the rheological properties of the analogue materials as functions of the filler content. For the linear viscous materials the flow laws are presented (viscosity coefficient and power-law exponent). For non-linear materials the relative importance of strain and strain-rate softening/hardening has been investigated by performing multiple creep tests that allow mapping of the effective viscosity in stress-strain space. Our study reveals that most of the currently used silicone-based analogue materials have a linear or quasi-linear rheology, behaving as Newtonian or nearly Newtonian viscous fluids, which makes them more appropriate for simulating natural rocks deforming by diffusion creep.

  17. Using sparse LU factorisation to precondition GMRES for a family of similarly structured matrices arising from process modelling

    Energy Technology Data Exchange (ETDEWEB)

    Brooking, C. [Univ. of Bath (United Kingdom)]

    1996-12-31

    Process engineering software is used to simulate the operation of large chemical plants. Such simulations are used for a variety of tasks, including operator training, and for the software to be of practical use here, dynamic simulations need to run in real time. The models that the simulation is based upon are written in terms of Differential Algebraic Equations (DAEs). In the numerical time-integration of systems of DAEs using an implicit method such as backward Euler, the solution of nonlinear systems is required at each integration point. When these are solved using Newton's method, this leads to the repeated solution of nonsymmetric sparse linear systems. These systems range in size from 500 to 20,000 variables. A typical integration may require around 3,000 time steps, and if four Newton iterations are needed on each time step, approximately 12,000 linear systems must be solved. The matrices produced by the simulations have a similar sparsity pattern throughout the integration. They are also severely ill-conditioned and have widely scattered spectra.
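
    The idea of reusing one factorisation across a family of similarly structured matrices can be sketched as follows; a dense Doolittle LU and a plain Richardson iteration stand in for the sparse LU and GMRES of the paper, and the 3×3 matrices are invented for illustration:

```python
def lu_factor(A):
    """Doolittle LU factorization without pivoting (assumes a
    well-conditioned matrix with nonzero leading minors)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[i][i] = 1.0
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                 # forward substitution
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):       # back substitution
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

def preconditioned_solve(A, b, L, U, iters=50):
    """Richardson iteration x += M^{-1}(b - A x), where M is the LU
    factorization of an *earlier* matrix in the sequence."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        dx = lu_solve(L, U, r)
        x = [x[i] + dx[i] for i in range(n)]
    return x

A0 = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
L, U = lu_factor(A0)                   # factor the first matrix once
A1 = [[4.1, 1.0, 0.0], [1.0, 3.9, 1.0], [0.0, 1.1, 4.0]]  # a nearby matrix
b = [1.0, 2.0, 3.0]
x = preconditioned_solve(A1, b, L, U)  # reuse the old factorization
```

Because A1 is close to A0, the old factorization remains an effective preconditioner; a production code would use sparse LU with GMRES instead of Richardson iteration.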

  18. Device modeling of perovskite solar cells based on structural similarity with thin film inorganic semiconductor solar cells

    Science.gov (United States)

    Minemoto, Takashi; Murata, Masashi

    2014-08-01

    Device modeling of CH3NH3PbI3-xClx perovskite-based solar cells was performed. Perovskite solar cells employ a structure similar to that of inorganic thin-film semiconductor solar cells such as Cu(In,Ga)Se2, and the exciton in the perovskite is of the Wannier type. We therefore applied a one-dimensional device simulator widely used for Cu(In,Ga)Se2 solar cells. The high open-circuit voltage of 1.0 V reported experimentally was successfully reproduced in the simulation, and the other solar cell parameters obtained were consistent with real devices. In addition, the effects of the carrier diffusion length in the absorber and of the defect densities at the front and back interfaces, as well as the optimum absorber thickness, were analyzed. The results revealed that the experimentally reported diffusion length is long enough for high efficiency, and that the defect density at the front interface is critical for high efficiency. The optimum absorber thickness derived was also consistent with the thickness range of real devices.
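
    The reported open-circuit voltage can be related to device parameters through the standard ideal single-diode relation Voc = (nkT/q)·ln(Jsc/J0 + 1); the sketch below uses illustrative parameter values, not the simulator inputs of the paper:

```python
import math

def open_circuit_voltage(jsc, j0, n=1.0, kT_over_q=0.02585):
    """Ideal single-diode solar cell: J = Jsc - J0*(exp(qV/nkT) - 1),
    so Voc = (n*kT/q) * ln(Jsc/J0 + 1) at room temperature."""
    return n * kT_over_q * math.log(jsc / j0 + 1.0)

jsc = 0.022   # A/cm^2, assumed short-circuit current density
j0 = 4e-19    # A/cm^2, assumed saturation current density
voc = open_circuit_voltage(jsc, j0)   # close to the ~1.0 V regime
```

A low saturation current density J0 (low recombination, e.g. a clean front interface) is what pushes Voc toward 1.0 V in this simple picture.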

  19. Simple improvements to classical bubble nucleation models

    CERN Document Server

    Tanaka, Kyoko K; Angélil, Raymond; Diemand, Jürg

    2015-01-01

    We revisit classical nucleation theory (CNT) for the homogeneous bubble nucleation rate and improve the classical formula using a new prefactor in the nucleation rate. Most of the previous theoretical studies have used the constant prefactor determined by the bubble growth due to the evaporation process from the bubble surface. However, the growth of bubbles is also regulated by the thermal conduction, the viscosity, and the inertia of liquid motion. These effects can decrease the prefactor significantly, especially when the liquid pressure is much smaller than the equilibrium one. The deviation in the nucleation rate between the improved formula and the CNT can be as large as several orders of magnitude. Our improved, accurate prefactor and recent advances in molecular dynamics simulations and laboratory experiments for argon bubble nucleation enable us to precisely constrain the free energy barrier for bubble nucleation. Assuming the correction to the CNT free energy is of the functional form suggested by T...

  20. Twelve Weeks of Sprint Interval Training Improves Indices of Cardiometabolic Health Similar to Traditional Endurance Training despite a Five-Fold Lower Exercise Volume and Time Commitment.

    Directory of Open Access Journals (Sweden)

    Jenna B Gillen

    Full Text Available We investigated whether sprint interval training (SIT) is a time-efficient exercise strategy to improve insulin sensitivity and other indices of cardiometabolic health to the same extent as traditional moderate-intensity continuous training (MICT). SIT involved 1 minute of intense exercise within a 10-minute time commitment, whereas MICT involved 50 minutes of continuous exercise per session. Sedentary men (27 ± 8 y; BMI = 26 ± 6 kg/m2) performed three weekly sessions of SIT (n = 9) or MICT (n = 10) for 12 weeks or served as non-training controls (n = 6). SIT involved 3 × 20-second 'all-out' cycle sprints (~500 W) interspersed with 2 minutes of cycling at 50 W, whereas MICT involved 45 minutes of continuous cycling at ~70% maximal heart rate (~110 W). Both protocols involved a 2-minute warm-up and 3-minute cool-down at 50 W. Peak oxygen uptake increased after training by 19% in both groups (SIT: 32 ± 7 to 38 ± 8; MICT: 34 ± 6 to 40 ± 8 ml/kg/min; p < 0.001 for both). The insulin sensitivity index (CSI), determined by intravenous glucose tolerance tests performed before and 72 hours after training, increased similarly after SIT (4.9 ± 2.5 to 7.5 ± 4.7, p = 0.002) and MICT (5.0 ± 3.3 to 6.7 ± 5.0 × 10-4 min-1 [μU/mL]-1, p = 0.013). Skeletal muscle mitochondrial content also increased similarly after SIT and MICT, as primarily reflected by the maximal activity of citrate synthase (CS; p < 0.001). The corresponding changes in the control group were small for VO2peak (p = 0.99), CSI (p = 0.63) and CS (p = 0.97). Twelve weeks of brief intense interval exercise improved indices of cardiometabolic health to the same extent as traditional endurance training in sedentary men, despite a five-fold lower exercise volume and time commitment.

  1. Improving the physiological realism of experimental models

    NARCIS (Netherlands)

    Vinnakota, Kalyan C.; Cha, Chae Y.; Rorsman, Patrik; Balaban, Robert S.; La Gerche, Andre; Wade-Martins, Richard; Beard, Daniel A.; Jeneson, Jeroen A. L.

    The Virtual Physiological Human (VPH) project aims to develop integrative, explanatory and predictive computational models (C-Models) as numerical investigational tools to study disease, identify and design effective therapies and provide an in silico platform for drug screening. Ultimately, these

  2. Improving Environmental Model Calibration and Prediction

    Science.gov (United States)

    2011-01-18


  3. Improving the physiological realism of experimental models

    NARCIS (Netherlands)

    Vinnakota, Kalyan C.; Cha, Chae Y.; Rorsman, Patrik; Balaban, Robert S.; La Gerche, Andre; Wade-Martins, Richard; Beard, Daniel A.; Jeneson, Jeroen A. L.

    2016-01-01

    The Virtual Physiological Human (VPH) project aims to develop integrative, explanatory and predictive computational models (C-Models) as numerical investigational tools to study disease, identify and design effective therapies and provide an in silico platform for drug screening. Ultimately, these m

  4. Running performance in the heat is improved by similar magnitude with pre-exercise cold-water immersion and mid-exercise facial water spray.

    Science.gov (United States)

    Stevens, Christopher J; Kittel, Aden; Sculley, Dean V; Callister, Robin; Taylor, Lee; Dascombe, Ben J

    2017-04-01

    This investigation compared the effects of external pre-cooling and mid-exercise cooling methods on running time trial performance and associated physiological responses. Nine trained male runners completed familiarisation and three randomised 5 km running time trials on a non-motorised treadmill in the heat (33°C). The trials included pre-cooling by cold-water immersion (CWI), mid-exercise cooling by intermittent facial water spray (SPRAY), and a control of no cooling (CON). Temperature, cardiorespiratory, muscular activation, and perceptual responses were measured as well as blood concentrations of lactate and prolactin. Performance time was significantly faster with CWI (24.5 ± 2.8 min; P = 0.01) and SPRAY (24.6 ± 3.3 min; P = 0.01) compared to CON (25.2 ± 3.2 min). Both cooling strategies significantly (P < 0.05) reduced forehead temperatures and thermal sensation, and increased muscle activation. Only pre-cooling significantly lowered rectal temperature both pre-exercise (by 0.5 ± 0.3°C; P < 0.01) and throughout exercise, and reduced sweat rate (P < 0.05). Both cooling strategies improved performance by a similar magnitude, and are ergogenic for athletes. The observed physiological changes suggest some involvement of central and psychophysiological mechanisms of performance improvement.

  5. Improvements on Mean Free Wave Surface Modeling

    Institute of Scientific and Technical Information of China (English)

    董国海; 滕斌; 程亮

    2002-01-01

    Some new results on the modeling of the mean free surface of waves, or wave set-up, are presented. Stream function wave theory is applied to the incident short waves. The limiting wave steepness is adopted as the wave breaker index in the calculation of wave-breaking dissipation. The model is based on Roelvink (1993), but the numerical techniques used in the solution are based on the Weighted-Average Flux (WAF) method (Watson et al., 1992), with Time-Operator-Splitting (TOS) used for the treatment of the source terms. This method allows a small number of computational points to be used, and is particularly efficient in modeling wave set-up. The short wave (or incident primary wave) energy equation is solved by a traditional Lax-Wendroff technique. The present model agrees satisfactorily with the measurements of Stive (1983).
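
    The Time-Operator-Splitting treatment of source terms can be illustrated on a scalar model equation du/dt = c − λu, advancing the forcing and the relaxation source in separate sub-steps; this shows only the splitting idea, not the WAF scheme itself, and all coefficients are invented:

```python
import math

def split_step(u, dt, c, lam):
    """One first-order (Lie) splitting step for du/dt = c - lam*u:
    (1) explicit update of the forcing ('flux') term,
    (2) exact integration of the relaxation source term."""
    u = u + dt * c               # forcing sub-step
    u = u * math.exp(-lam * dt)  # source sub-step, solved exactly
    return u

c, lam, u0, T, steps = 1.0, 5.0, 0.0, 2.0, 2000
dt = T / steps
u = u0
for _ in range(steps):
    u = split_step(u, dt, c, lam)

# exact solution of the coupled equation, for comparison
u_exact = c / lam + (u0 - c / lam) * math.exp(-lam * T)
```

Splitting lets the stiff source be integrated exactly (or implicitly) while the transport part keeps its own explicit scheme, which is the practical appeal of TOS.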

  6. Improvement of Similarity Measure: Pearson Product-Moment Correlation Coefficient

    Institute of Scientific and Technical Information of China (English)

    刘永锁; 孟庆华; 陈蓉; 王健松; 蒋淑敏; 胡育筑

    2004-01-01

    Aim: To study the reason for the insensitivity of the Pearson product-moment correlation coefficient as a similarity measure, and methods to improve its sensitivity. Methods: Experimental and simulated data sets were used. Results: The distribution range of the data sets influences the sensitivity of the Pearson product-moment correlation coefficient; a weighted Pearson product-moment correlation coefficient is more sensitive when the range of the data set is large. Conclusion: The weighted Pearson product-moment correlation coefficient is necessary when the range of the data set is large.
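
    A weighted Pearson coefficient of the kind advocated here can be sketched as follows; the two fingerprint-like vectors are invented to show how one dominant value masks disagreement among the small values until it is strongly down-weighted:

```python
import math

def weighted_pearson(x, y, w):
    """Weighted Pearson product-moment correlation coefficient."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / sw
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / sw
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / sw
    return cov / math.sqrt(vx * vy)

# One dominant peak agrees; the four small peaks are reversed:
x = [100.0, 1.0, 2.0, 3.0, 4.0]
y = [100.0, 4.0, 3.0, 2.0, 1.0]
r_plain = weighted_pearson(x, y, [1.0] * 5)          # uniform weights
r_down = weighted_pearson(x, y, [1e-4, 1, 1, 1, 1])  # down-weight the big peak
```

With uniform weights the dominant peak drives r toward 1; after down-weighting it, the reversed small peaks are exposed and the coefficient turns negative.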

  7. High-Intensity Interval Training and Isocaloric Moderate-Intensity Continuous Training Result in Similar Improvements in Body Composition and Fitness in Obese Individuals.

    Science.gov (United States)

    Martins, Catia; Kazakova, Irina; Ludviksen, Marit; Mehus, Ingar; Wisloff, Ulrik; Kulseng, Bard; Morgan, Linda; King, Neil

    2016-06-01

    This study aimed to determine the effects of 12 weeks of isocaloric programs of high-intensity intermittent training (HIIT), moderate-intensity continuous training (MICT), or a short-duration HIIT (1/2HIIT) inducing only half the energy deficit, performed on a cycle ergometer, on body weight and composition, cardiovascular fitness, resting metabolic rate (RMR), respiratory exchange ratio (RER), non-exercise physical activity (PA) levels, and fasting and postprandial insulin response in sedentary obese individuals. Forty-six sedentary obese individuals (30 women), with a mean BMI of 33.3 ± 2.9 kg/m2 and a mean age of 34.4 ± 8.8 years, were randomly assigned to one of three training groups: HIIT (n = 16), MICT (n = 14) or 1/2HIIT (n = 16), and exercise was performed 3 times/week for 12 weeks. Overall, there was a significant reduction in body weight and waist circumference … The three training protocols of HIIT or MICT (or 1/2HIIT inducing only half the energy deficit) exerted similar metabolic and cardiovascular improvements in sedentary obese individuals.

  8. IMPROVEMENT OF FLUID PIPE LUMPED PARAMETER MODEL

    Institute of Scientific and Technical Information of China (English)

    Kong Xiaowu; Wei Jianhua; Qiu Minxiu; Wu Genmao

    2004-01-01

    The traditional lumped parameter model of a fluid pipe is introduced and its drawbacks are pointed out. Two suggestions are then put forward to remove these drawbacks. First, the structure of the equivalent circuit is modified; second, the evaluation of the equivalent fluid resistance is changed to take frequency-dependent friction into account. Both simulation and experiment prove that this model precisely characterizes the dynamic behavior of fluid in a pipe.
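
    The lumped-parameter idea can be sketched as a single momentum balance L·dQ/dt = Δp − R·Q integrated with explicit Euler; the constant resistance R below is what the improved model would replace with a frequency-dependent friction term, and all parameter values are invented:

```python
def simulate_pipe(p_in, p_out, R, L, T=5.0, dt=1e-3):
    """Explicit-Euler integration of the lumped momentum equation
    L*dQ/dt = (p_in - p_out) - R*Q for one pipe segment, from rest."""
    q = 0.0
    steps = int(T / dt)
    for _ in range(steps):
        dq = ((p_in - p_out) - R * q) / L
        q += dt * dq
    return q

# illustrative SI-like values: 1 bar pressure drop across the segment
q = simulate_pipe(p_in=2.0e5, p_out=1.0e5, R=1.0e3, L=50.0)
# flow settles to the steady state (p_in - p_out)/R
```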

  9. A Model to Improve the Quality Products

    Directory of Open Access Journals (Sweden)

    Hasan GOKKAYA

    2010-08-01

    Full Text Available The topic of this paper is to present a solution that can improve product quality, following the idea: “Unlike people, who have verbal skills, machines use ‘sign language’ to communicate what hurts or what has invaded their system.” Recognizing the “signs” or symptoms that a machine conveys is a required skill for those who work with machines and are responsible for their care and feeding. The acoustic behavior of technical products is predominantly defined in the design stage, and the acoustic characteristics of machine structures can be analyzed to provide solutions for current products and to create a new generation of products. The paper describes the steps in the technological process for a product and a solution that will reduce the costs of product non-quality and improve quality management.

  10. A Model to Improve the Quality Products

    OpenAIRE

    2010-01-01

    The topic of this paper is to present a solution that can improve product quality following the idea: “Unlike people who have verbal skills, machines use "sign language" to communicate what hurts or what has invaded their system’. Recognizing the "signs" or symptoms that the machine conveys is a required skill for those who work with machines and are responsible for their care and feeding. The acoustic behavior of technical products is predominantly defined in the design stage, although the ac...

  11. Simple improvements to classical bubble nucleation models

    Science.gov (United States)

    Tanaka, Kyoko K.; Tanaka, Hidekazu; Angélil, Raymond; Diemand, Jürg

    2015-08-01

    We revisit classical nucleation theory (CNT) for the homogeneous bubble nucleation rate and improve the classical formula using a correct prefactor in the nucleation rate. Most of the previous theoretical studies have used the constant prefactor determined by the bubble growth due to the evaporation process from the bubble surface. However, the growth of bubbles is also regulated by the thermal conduction, the viscosity, and the inertia of liquid motion. These effects can decrease the prefactor significantly, especially when the liquid pressure is much smaller than the equilibrium one. The deviation in the nucleation rate between the improved formula and the CNT can be as large as several orders of magnitude. Our improved, accurate prefactor and recent advances in molecular dynamics simulations and laboratory experiments for argon bubble nucleation enable us to precisely constrain the free energy barrier for bubble nucleation. Assuming the correction to the CNT free energy is of the functional form suggested by Tolman, the precise evaluations of the free energy barriers suggest the Tolman length is ≃0.3 σ independently of the temperature for argon bubble nucleation, where σ is the unit length of the Lennard-Jones potential. With this Tolman correction and our prefactor one gets accurate bubble nucleation rate predictions in the parameter range probed by current experiments and molecular dynamics simulations.
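
    The effect of a Tolman-corrected surface tension on the CNT barrier 16πσ³/(3Δp²) can be sketched as below; the fixed-point iteration and the reduced-unit numbers are illustrative assumptions, and the improved prefactor itself (thermal conduction, viscosity, inertia of liquid motion) is not modelled here:

```python
import math

def nucleation_barrier(sigma_flat, delta_p, tolman_length=0.0):
    """Classical free-energy barrier 16*pi*sigma^3 / (3*dp^2), with
    the surface tension optionally curvature-corrected via Tolman:
    sigma(r*) ~ sigma_flat / (1 + 2*delta/r*), solved by fixed-point
    iteration together with the critical radius r* = 2*sigma/dp."""
    r = 2.0 * sigma_flat / delta_p        # critical radius, flat sigma
    sigma = sigma_flat
    for _ in range(100):                  # fixed-point iteration
        sigma = sigma_flat / (1.0 + 2.0 * tolman_length / r)
        r = 2.0 * sigma / delta_p
    return 16.0 * math.pi * sigma ** 3 / (3.0 * delta_p ** 2)

# Illustrative Lennard-Jones-like numbers in reduced units (assumed):
sigma0, dp, delta = 0.8, 0.2, 0.3
barrier_cnt = nucleation_barrier(sigma0, dp)
barrier_tolman = nucleation_barrier(sigma0, dp, tolman_length=delta)
```

A positive Tolman length lowers the effective surface tension of small critical bubbles and hence the barrier, which exponentially raises the predicted nucleation rate relative to plain CNT.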

  12. Is having similar eye movement patterns during face learning and recognition beneficial for recognition performance? Evidence from hidden Markov modeling.

    Science.gov (United States)

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2017-05-04

    The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
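
    Classifying a fixation sequence as holistic or analytic by HMM likelihood can be sketched with a scaled forward algorithm; the two toy models and the region coding (0 = face center, 1 = eyes) are invented for illustration, not the paper's fitted HMMs:

```python
import math

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm with per-step scaling to avoid underflow)."""
    n_states = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n_states)]
    scale = sum(alpha)
    loglik = math.log(scale)
    alpha = [a / scale for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[sp] * trans[sp][s] for sp in range(n_states)) * emit[s][o]
                 for s in range(n_states)]
        scale = sum(alpha)
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
    return loglik

# Two toy 'strategies': holistic dwells on the center, analytic mixes
# center and eyes. States/parameters are assumptions for the sketch.
holistic = dict(start=[0.9, 0.1],
                trans=[[0.9, 0.1], [0.5, 0.5]],
                emit=[[0.95, 0.05], [0.2, 0.8]])
analytic = dict(start=[0.5, 0.5],
                trans=[[0.5, 0.5], [0.5, 0.5]],
                emit=[[0.6, 0.4], [0.3, 0.7]])

seq = [0, 0, 1, 0, 1, 1, 0]  # one fixation-region sequence
ll_h = forward_loglik(seq, **holistic)
ll_a = forward_loglik(seq, **analytic)
# classify the sequence by whichever model gives higher likelihood
```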

  13. Clarithromycin and dexamethasone show similar anti-inflammatory effects on distinct phenotypic chronic rhinosinusitis: an explant model study.

    Science.gov (United States)

    Zeng, Ming; Li, Zhi-Yong; Ma, Jin; Cao, Ping-Ping; Wang, Heng; Cui, Yong-Hua; Liu, Zheng

    2015-06-06

    , epidermal growth factor, basic fibroblast growth factor, platelet-derived growth factor, vascular endothelial growth factor, and matrix metalloproteinase 9) was suppressed in distinct phenotypic CRS by dexamethasone and clarithromycin to a comparable extent. Contrary to our expectations, this explant model study discovered that glucocorticoids and macrolides likely exert similar regulatory actions on CRS, and that most of their effects do not vary with the phenotype of CRS.

  14. An Improved Data Model for Uncertain Data

    Directory of Open Access Journals (Sweden)

    Umar Hayat

    2016-01-01

    Full Text Available Uncertain data can be categorized as imprecise data and probabilistic data. In each of these categories, the uncertainty can be found at different granularity levels. Conventional data models are developed for the purpose of storing, manipulating and retrieving certain data, and do not extend their support to the management of uncertain data. Thus, a standalone data model is required, aimed at storing, manipulating and retrieving uncertain as well as certain data. In this paper we introduce UDM-relations, an uncertain data model for the management of uncertain data along with certain data. A vertical partitioning approach is used to translate an uncertain relation into UDM-relations. Our data model supports ALU (Attribute-Level Uncertainty) as well as TLU (Tuple-Level Uncertainty) for finite sets of possible worlds. It follows the concepts of standard relational database technology. With slight modifications to the standard relational algebra operators, we have introduced four relational operators that are used to evaluate a query on UDM-relations.
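
    The vertical-partitioning idea can be sketched by splitting a relation with one uncertain attribute into a certain part and an uncertain part; the schema, tuples, and the `prob_select` operator below are invented for illustration and are not the paper's UDM operators:

```python
# A relation with attribute-level uncertainty: 'city' is a discrete
# distribution per tuple (value, probability). Data is hypothetical.
people = [
    {"id": 1, "name": "Ann", "city": [("Leeds", 0.7), ("York", 0.3)]},
    {"id": 2, "name": "Bob", "city": [("Leeds", 1.0)]},
]

# Vertical partitioning: certain attributes in one table, the
# uncertain attribute flattened into (id, value, probability) rows.
certain_part = [{"id": t["id"], "name": t["name"]} for t in people]
uncertain_part = [
    {"id": t["id"], "city": v, "p": p} for t in people for (v, p) in t["city"]
]

def prob_select(uncertain, attr, value):
    """P(attr = value) per tuple id: a possible-worlds-style selection
    over the uncertain partition."""
    out = {}
    for row in uncertain:
        if row[attr] == value:
            out[row["id"]] = out.get(row["id"], 0.0) + row["p"]
    return out

leeds = prob_select(uncertain_part, "city", "Leeds")
```

A join on `id` recombines the two partitions, which is what lets certain and uncertain data live in one relational scheme.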

  15. An improved likelihood model for eye tracking

    DEFF Research Database (Denmark)

    Hammoud, Riad I.; Hansen, Dan Witzner

    2007-01-01

    approach in such cases is to abandon the tracking routine and re-initialize eye detection. Of course this may be a difficult process due to missed data problem. Accordingly, what is needed is an efficient method of reliably tracking a person's eyes between successively produced video image frames, even...... are challenging. It proposes a log likelihood-ratio function of foreground and background models in a particle filter-based eye tracking framework. It fuses key information from even, odd infrared fields (dark and bright-pupil) and their corresponding subtractive image into one single observation model...

  16. General Equilibrium Models: Improving the Microeconomics Classroom

    Science.gov (United States)

    Nicholson, Walter; Westhoff, Frank

    2009-01-01

    General equilibrium models now play important roles in many fields of economics including tax policy, environmental regulation, international trade, and economic development. The intermediate microeconomics classroom has not kept pace with these trends, however. Microeconomics textbooks primarily focus on the insights that can be drawn from the…

  17. Improving modeling with layered UML diagrams

    DEFF Research Database (Denmark)

    Störrle, Harald

    2013-01-01

    Layered diagrams are diagrams whose elements are organized into sets of layers. Layered diagrams are routinely used in many branches of engineering, except Software Engineering. In this paper, we propose to add layered diagrams to UML modeling tools, and elaborate the concept by exploring usage...

  18. Models for Evaluating and Improving Architecture Competence

    Science.gov (United States)

    2008-03-01

    to report on the second. We propose to use the duties from the DSK model to isolate the various aspects of the architect’s job. If we are auditing ...for Scenario Analysis.” Proceedings of the Fifth International Workshop on Product Family Engineering (PFE-5). Siena, Italy, 2003, Springer

  19. Hybrid Modeling Improves Health and Performance Monitoring

    Science.gov (United States)

    2007-01-01

    Scientific Monitoring Inc. was awarded a Phase I Small Business Innovation Research (SBIR) project by NASA's Dryden Flight Research Center to create a new, simplified health-monitoring approach for flight vehicles and flight equipment. The project developed a hybrid physical model concept that provided a structured approach to simplifying complex design models for use in health monitoring, allowing the output or performance of the equipment to be compared to what the design models predicted, so that deterioration or impending failure could be detected before there would be an impact on the equipment's operational capability. Based on the original modeling technology, Scientific Monitoring released I-Trend, a commercial health- and performance-monitoring software product named for its intelligent trending, diagnostics, and prognostics capabilities, as part of the company's complete ICEMS (Intelligent Condition-based Equipment Management System) suite of monitoring and advanced alerting software. I-Trend uses the hybrid physical model to better characterize the nature of health or performance alarms that result in "no fault found" false alarms. Additionally, the use of physical principles helps I-Trend identify problems sooner. I-Trend technology is currently in use in several commercial aviation programs, and the U.S. Air Force recently tapped Scientific Monitoring to develop next-generation engine health-management software for monitoring its fleet of jet engines. Scientific Monitoring has continued the original NASA work, this time under a Phase III SBIR contract with a joint NASA-Pratt & Whitney aviation security program on propulsion-controlled aircraft under missile-damaged aircraft conditions.

  20. Bayesian Proteoform Modeling Improves Protein Quantification of Global Proteomic Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Datta, Susmita; Payne, Samuel H.; Kang, Jiyun; Bramer, Lisa M.; Nicora, Carrie D.; Shukla, Anil K.; Metz, Thomas O.; Rodland, Karin D.; Smith, Richard D.; Tardiff, Mark F.; McDermott, Jason E.; Pounds, Joel G.; Waters, Katrina M.

    2014-12-01

    As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that with an increase in throughput, protein quantification estimation from the native measured peptides has become a computational task. A limitation to existing computationally-driven protein quantification methods is that most ignore protein variation, such as alternate splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which will affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses, such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptide signatures to identify peptides that are outside the dominant pattern, or the existence of multiple over-expressed patterns, to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study from mouse plasma samples and demonstrate that BP-Quant achieves similar accuracy as the current state-of-the-art methods at proteoform identification with significantly better specificity. BP-Quant is available as MATLAB® and R packages at https://github.com/PNNL-Comp-Mass-Spec/BP-Quant.

  1. Improvement of core degradation model in ISAAC

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dong Ha; Kim, See Darl; Park, Soo Yong

    2004-02-01

    If the water inventory in the fuel channels depletes and the fuel rods are exposed to steam after uncovery of the pressure tube, the decay heat generated in the fuel rods is transferred to the pressure tube and to the calandria tube by radiation, and finally to the moderator in the calandria tank by conduction. During this process the cladding is heated first, and it balloons when the fuel-gap internal pressure exceeds the primary system pressure. The pressure tube also balloons and touches the calandria tube, increasing the heat transfer rate to the moderator. Although this situation is not desirable, the fuel channel is expected to maintain its integrity as long as the calandria tube is submerged in the moderator, because the decay heat can be removed to the moderator through radiation and conduction. Loss of coolant and moderator inside and outside the channel may therefore cause severe core damage, including horizontal fuel channel sagging and finally loss of channel integrity. The sagged channels contact the channels located below and lose their heat transfer area to the moderator. As the accident progresses, the disintegrated fuel channels are heated up and relocated onto the bottom of the calandria tank. If the temperature of these relocated materials is high enough to attack the calandria tank, the calandria tank would fail and molten material would contact the calandria vault water. Steam explosion and/or rapid steam generation from this interaction may threaten containment integrity. Though a detailed model is required to simulate severe accidents at CANDU plants, the complexity of the phenomena and of the inner structures, as well as the lack of experimental data, forces the choice of a simple but reasonable model as a first step. ISAAC 1.0 was developed to model the basic physicochemical phenomena during severe accident progression. At present, ISAAC 2.0 is being developed for accident management guide development and strategy evaluation.

  2. Improved Approximations for Some Polymer Extension Models

    CERN Document Server

    Petrosyan, Rafayel

    2016-01-01

    We propose approximations for the force-extension dependencies of the freely jointed chain (FJC) and worm-like chain (WLC) models, as well as for the extension-force dependence of the WLC model. The proposed expressions show less than 1% relative error in the useful range of the corresponding variables. These results can be applied to fitting force-extension curves obtained in molecular force spectroscopy experiments. In particular, they can be useful for geometries with springs in series and/or in parallel, where a particular combination of expressions should be used for fitting the data. All approximations have been obtained following the same procedure: determining the asymptotes and then reducing the relative error of the expression by adding an appropriate term obtained from fitting its absolute error.
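
    For context, the classical Marko-Siggia interpolation for the WLC force-extension curve (accurate only to roughly 10%, which is what motivates improved approximations like those above; the paper's own expressions are not reproduced here) can be sketched as:

```python
def wlc_force(z, L, P, kT=4.114):
    """Marko-Siggia interpolation for the worm-like chain:
    F(z) = (kT/P) * (1/(4*(1 - z/L)**2) - 1/4 + z/L),
    with contour length L, persistence length P, and kT in pN*nm
    (4.114 pN*nm at room temperature)."""
    x = z / L
    return (kT / P) * (0.25 / (1.0 - x) ** 2 - 0.25 + x)

# Example with DNA-like parameters (assumed): P = 50 nm, L = 1000 nm
forces = [wlc_force(z, 1000.0, 50.0) for z in (100.0, 500.0, 900.0)]
```

The force vanishes at zero extension and diverges as z approaches the contour length, the two asymptotes any improved approximation must also respect.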

  3. Improving lognormal models for cosmological fields

    CERN Document Server

    Xavier, Henrique S; Joachimi, Benjamin

    2016-01-01

    It is common practice in cosmology to model large-scale structure observables as lognormal random fields, and this approach has been successfully applied in the past to the matter density and weak lensing convergence fields separately. We argue that this approach has fundamental limitations which prevent its use for jointly modelling these two fields, since the shape of the lognormal distribution can prevent certain correlations from being attainable. Given the need of ongoing and future large-scale structure surveys for fast joint simulations of clustering and weak lensing, we propose two ways of overcoming these limitations. The first approach slightly distorts the power spectra of the fields using one of two algorithms that minimises either the absolute or the fractional distortions. The second is to obtain more accurate convergence marginal distributions, for which we provide a fitting function, by integrating the lognormal density along the line of sight. The latter approach also provides a way to determine ...
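
    The lognormal construction underlying such fields can be sketched for a single variable: draw a Gaussian g with mean −σ²/2 so that δ = exp(g) − 1 has zero mean and is bounded below by −1, mimicking a density contrast; the variance value and sample size are arbitrary:

```python
import math
import random

def lognormal_delta(sigma_g, n, seed=42):
    """Zero-mean lognormal 'density contrast': delta = exp(g) - 1 with
    g ~ N(-sigma_g**2/2, sigma_g**2), so E[delta] = 0 and delta > -1."""
    rng = random.Random(seed)
    mu = -0.5 * sigma_g ** 2
    return [math.exp(rng.gauss(mu, sigma_g)) - 1.0 for _ in range(n)]

deltas = lognormal_delta(0.5, 200000)
mean = sum(deltas) / len(deltas)     # close to 0 by construction
minimum = min(deltas)                # bounded below by -1
```

The hard lower bound at −1 and the fixed skewness of exp-of-Gaussian are exactly the distributional constraints that limit which joint correlations two lognormal fields can realise.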

  4. Improved testing inference in mixed linear models

    CERN Document Server

    Melo, Tatiane F N; Cribari-Neto, Francisco; 10.1016/j.csda.2008.12.007

    2011-01-01

    Mixed linear models are commonly used in repeated measures studies. They account for the dependence amongst observations obtained from the same experimental unit. Oftentimes, the number of observations is small, and it is thus important to use inference strategies that incorporate small sample corrections. In this paper, we develop modified versions of the likelihood ratio test for fixed effects inference in mixed linear models. In particular, we derive a Bartlett correction to such a test and also to a test obtained from a modified profile likelihood function. Our results generalize those in Zucker et al. (Journal of the Royal Statistical Society B, 2000, 62, 827-838) by allowing the parameter of interest to be vector-valued. Additionally, our Bartlett corrections allow for random effects nonlinear covariance matrix structure. We report numerical evidence which shows that the proposed tests display superior finite sample behavior relative to the standard likelihood ratio test. An application is also presente...

  5. Improving randomness characterization through Bayesian model selection

    CERN Document Server

    R., Rafael Díaz-H; Martínez, Alí M Angulo; U'Ren, Alfred B; Hirsch, Jorge G; Marsili, Matteo; Castillo, Isaac Pérez

    2016-01-01

    Nowadays random number generation plays an essential role in technology with important applications in areas ranging from cryptography, which lies at the core of current communication protocols, to Monte Carlo methods, and other probabilistic algorithms. In this context, a crucial scientific endeavour is to develop effective methods that allow the characterization of random number generators. However, commonly employed methods either lack formality (e.g. the NIST test suite), or are inapplicable in principle (e.g. the characterization derived from the Algorithmic Theory of Information (ATI)). In this letter we present a novel method based on Bayesian model selection, which is both rigorous and effective, for characterizing randomness in a bit sequence. We derive analytic expressions for a model's likelihood which is then used to compute its posterior probability distribution. Our method proves to be more rigorous than NIST's suite and the Borel-Normality criterion and its implementation is straightforward. We...

  6. An improved network model for railway traffic

    Science.gov (United States)

    Li, Keping; Ma, Xin; Shao, Fubo

    In railway traffic, safety analysis is a key issue for controlling train operation. Here, the identification and ordering of key factors are very important. In this paper, a new network model is constructed for analyzing railway safety, in which nodes are regarded as causation factors and links represent possible relationships among those factors. Our aim is to give all these nodes an importance order, and to find the in-depth relationships among them, including how failures spread. Based on the constructed network model, we propose a control method that ensures the safe state by assigning each node a threshold. As a result, by protecting the hub node of the constructed network, the spreading of railway accidents can be controlled well. The efficiency of this method is further tested with the help of a numerical example.
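
    The abstract's recipe (rank causation-factor nodes, assign each a threshold, protect the hub) can be sketched in a few lines. The network, node names, thresholds, and cascade rule below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch: rank safety-factor nodes by degree, then compare how a
# failure cascade spreads with and without the hub node protected.

def degree_order(adj):
    """Return nodes sorted by degree, most connected (the hub) first."""
    return sorted(adj, key=lambda n: len(adj[n]), reverse=True)

def spread(adj, seed, protected=(), threshold=0.5):
    """Simple cascade: a node fails once the failed fraction of its
    neighbours reaches its threshold; protected nodes never fail."""
    failed = {seed} - set(protected)
    changed = True
    while changed:
        changed = False
        for node in adj:
            if node in failed or node in protected:
                continue
            nbrs = adj[node]
            if nbrs and sum(n in failed for n in nbrs) / len(nbrs) >= threshold:
                failed.add(node)
                changed = True
    return failed

# Star-like causation network: "signal" is the most connected factor.
adj = {
    "signal": ["track", "driver", "weather", "speed"],
    "track": ["signal", "speed"],
    "driver": ["signal"],
    "weather": ["signal"],
    "speed": ["signal", "track"],
}
hub = degree_order(adj)[0]
unprotected = spread(adj, seed="track")                 # cascade reaches all 5 nodes
protected = spread(adj, seed="track", protected=[hub])  # cascade stops at 2 nodes
```

    With the hub protected, the failure that starts at "track" only reaches "speed"; unprotected, it propagates through the hub to the whole network, which is the behaviour the paper's control method exploits.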

  7. Improving Pulsar Distances by Modelling Interstellar Scattering

    CERN Document Server

    Deshpande, A A

    1998-01-01

    We present here a method to study the distribution of electron density fluctuations in pulsar directions as well as to estimate pulsar distances. The method, based on a simple two-component model of the scattering medium discussed by Gwinn et al. (1993), uses scintillation and proper motion data in addition to measurements of angular broadening and temporal broadening to solve for the model parameters, namely, the fractional distance to a discrete scatterer and the associated relative scattering strength. We show how this method can be used to estimate pulsar distances reliably when the location of a discrete scatterer (e.g. an HII region), if any, is known. Considering the specific example of PSR B0736-40, we illustrate how a simple characterization of the Gum nebula region (using the data on the Vela pulsar) is possible and can be used along with the temporal broadening measurements to estimate pulsar distances.

  8. Similarity Theory of Withdrawn Water Temperature Experiment

    Directory of Open Access Journals (Sweden)

    Yunpeng Han

    2015-01-01

    Full Text Available Selective withdrawal from a thermally stratified reservoir has been widely utilized in managing reservoir water withdrawal. Besides theoretical analysis and numerical simulation, model tests are also necessary in studying the temperature of withdrawn water. However, information on the similarity theory of the withdrawn water temperature model remains lacking. Considering the flow features of selective withdrawal, the similarity theory of the withdrawn water temperature model was analyzed theoretically based on modification of the governing equations, the Boussinesq approximation, and some simplifications. The similarity conditions between the model and the prototype were suggested, and the conversion of withdrawn water temperature between the model and the prototype was proposed. Meanwhile, the fundamental theory of temperature distribution conversion was proposed for the first time; it can significantly improve experiment efficiency when the basic temperature of the model differs from that of the prototype. Based on the similarity theory, an experiment on the withdrawn water temperature was performed and verified by a numerical method.

  9. Performance Improvement/HPT Model: Guiding the Process

    Science.gov (United States)

    Dessinger, Joan Conway; Moseley, James L.; Van Tiem, Darlene M.

    2012-01-01

    This commentary is part of an ongoing dialogue that began in the October 2011 special issue of "Performance Improvement"--Exploring a Universal Performance Model for HPT: Notes From the Field. The performance improvement/HPT (human performance technology) model represents a unifying process that helps accomplish successful change, create…

  10. Development of Improved Dynamic Failure Models.

    Science.gov (United States)

    1985-02-15

    anisotropic yielding, and Bauschinger effect observed in experiments of Naghdi et al. [8]. Bazant has recently begun formulation of a ... 8. "Plasticity," J. Appl. Mech. 25, 201-209 (1958). 9. Z. P. Bazant and B. R. Oh, "Microplane Model for Fracture Analysis of Concrete Structures," Proceedings of ... "Yield Criteria and Flow Rules for Porous Ductile Media," J. Engin. Materials and Tech., Trans. of ASME, Jan. 1977, pp. 2-15.

  11. QUALITY IMPROVEMENT MODEL AT THE MANUFACTURING PROCESS PREPARATION LEVEL

    Directory of Open Access Journals (Sweden)

    Dusko Pavletic

    2009-12-01

    Full Text Available The paper establishes the basis for an operational quality improvement model at the manufacturing process preparation level. Numerous appropriate related quality assurance and improvement methods and tools are identified. Main manufacturing process principles are investigated in order to scrutinize one general model of the manufacturing process and to define a manufacturing process preparation level. Development and introduction of the operational quality improvement model are based on research conducted into the application possibilities of methods and tools in real manufacturing processes in the shipbuilding and automotive industries. The basic model structure is described and presented by an appropriate general algorithm. The operational quality improvement model developed lays down main guidelines for practical and systematic application of quality improvement methods and tools.

  12. Improved continuity of care in a community teaching hospital model.

    Science.gov (United States)

    Mittal, V; David, W; Young, S; McKendrick, A; Gentile, T; Casalou, R

    1999-05-01

    In addition to fulfilling the Surgery Residency Review Committee requirements, we believe our model facilitates broader education of surgical residents and improves risk management. We recommend further similar studies, greater involvement of primary care specialties in recruiting staff surgical referrals, and implementation of a specialized computer program to continue to improve continuity of care in surgery residency programs.

  13. Improving a Mars photochemical model with thermodynamics

    Science.gov (United States)

    Delgado-Bonal, A.; Martin-Torres, F. J.; Simoncini, E.

    2012-12-01

    Different conditions of temperature and pressure drive the chemistry of a planetary atmosphere to different steady states. However, these thermodynamic conditions are not considered in many studies of the abundance of liquid water, methane, or other important compounds sometimes called biomarkers, leading to wrong calculations. We have adapted a photochemical model of the Mars atmosphere [1] to the proper thermodynamic conditions and coupled it with realistic profiles of temperature and pressure previously calculated with the PRAMS GCM. As we have shown previously [2], the dependence of the Gibbs free energy calculations, and therefore of the kinetics of the gases in a planetary atmosphere, on T, P, and molar fraction has a huge influence on the final steady state and concentrations. The study is applied to different compounds to determine their abundance under real Mars conditions. The existence and reactivity of liquid water on Mars is highly linked with the presence of other compounds in the atmosphere such as ozone, OH, or CH4, whose determination also requires these thermodynamic studies. [1] Franck Lefèvre and François Forget. Observed variations of methane on Mars unexplained by known atmospheric chemistry and physics. Nature 460, 720-723 (6 August 2009). [2] Simoncini E., Delgado-Bonal A., Martin-Torres F.J., Accounting thermodynamic conditions in chemical models of planetary atmospheres. Submitted to Astrophysical Journal.

  14. An Improved Walk Model for Train Movement on Railway Network

    Institute of Scientific and Technical Information of China (English)

    LI Ke-Ping; MAO Bo-Hua; GAO Zi-You

    2009-01-01

    In this paper, we propose an improved walk model for simulating train movement on a railway network. In the proposed method, walkers represent trains. The improved walk model is a kind of network-based simulation analysis model. Using some management rules for walker movement, a walker can dynamically determine its departure and arrival times at stations. In order to test the proposed method, we simulate train movement on part of a railway network. The numerical simulation and analytical results demonstrate that the improved model is an effective tool for simulating train movement on a railway network. Moreover, it captures well the characteristic behaviors of train scheduling in railway traffic.

  15. Advanced Model for Extreme Lift and Improved Aeroacoustics (AMELIA)

    Science.gov (United States)

    Lichtwardt, Jonathan; Paciano, Eric; Jameson, Tina; Fong, Robert; Marshall, David

    2012-01-01

    With the very recent advent of NASA's Environmentally Responsible Aviation Project (ERA), which is dedicated to designing aircraft that will reduce the impact of aviation on the environment, there is a need for research and development of methodologies to minimize fuel burn and emissions and to reduce community noise produced by regional airliners. ERA tackles airframe technology, propulsion technology, and vehicle systems integration to meet performance objectives in the time frame for the aircraft to be at a Technology Readiness Level (TRL) of 4-6 by the year 2020 (deemed N+2). The preceding project that investigated goals similar to ERA's was NASA's Subsonic Fixed Wing (SFW). SFW focused on conducting research to improve prediction methods and technologies that will produce lower noise, lower emissions, and higher performing subsonic aircraft for the Next Generation Air Transportation System. The work provided in this investigation was a NASA Research Announcement (NRA) contract #NNL07AA55C funded by Subsonic Fixed Wing. The project started in 2007 with a specific goal of conducting a large-scale wind tunnel test along with the development of new and improved predictive codes for advanced powered-lift concepts. Many of the predictive codes were incorporated to refine the wind tunnel model outer mold line design. The goal of the large-scale wind tunnel test was to investigate powered-lift technologies and provide an experimental database to validate current and future modeling techniques. The powered-lift concept investigated was a Circulation Control (CC) wing in conjunction with over-the-wing mounted engines to entrain the exhaust and further increase the lift generated by CC technologies alone. The NRA was a five-year effort; during the first year the objective was to select and refine CESTOL concepts and then to complete a preliminary design of a large-scale wind tunnel model for the large scale test. During the second, third, and fourth years the large-scale wind

  16. Can satellite land surface temperature data be used similarly to ground discharge measurements for distributed hydrological model calibration?

    NARCIS (Netherlands)

    Corbari, C.; Mancini, M.; Li, J.; Su, Zhongbo

    2015-01-01

    This study proposes a new methodology for the calibration of distributed hydrological models at basin scale by constraining an internal model variable using satellite data of land surface temperature. The model algorithm solves the system of energy and mass balances in terms of a representative equi

  17. Segmentation Similarity and Agreement

    CERN Document Server

    Fournier, Chris

    2012-01-01

    We propose a new segmentation evaluation metric, called segmentation similarity (S), that quantifies the similarity between two segmentations as the proportion of boundaries that are not transformed when comparing them using edit distance, essentially using edit distance as a penalty function and scaling penalties by segmentation size. We propose several adapted inter-annotator agreement coefficients which use S that are suitable for segmentation. We show that S is configurable enough to suit a wide variety of segmentation evaluations, and is an improvement upon the state of the art. We also propose using inter-annotator agreement coefficients to evaluate automatic segmenters in terms of human performance.
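
    As a rough illustration of the idea (a boundary edit distance scaled by segmentation size), here is a simplified sketch of S. The near-miss window and the 0.5 weight for transpositions are our simplifying assumptions, not Fournier's exact formulation:

```python
def boundaries(masses):
    """Convert segment masses (unit counts) to a set of boundary positions."""
    out, pos = set(), 0
    for m in masses[:-1]:
        pos += m
        out.add(pos)
    return out

def seg_similarity(a, b, n=2):
    """Simplified S: boundaries present in both segmentations cost nothing,
    near misses within n units count as half an edit (transposition), and
    remaining mismatches count as full additions/deletions; the total
    penalty is scaled by the number of potential boundary positions."""
    ba, bb = boundaries(a), boundaries(b)
    only_a, only_b = ba - bb, bb - ba
    transpositions = 0
    for x in sorted(only_a):
        near = [y for y in only_b if abs(x - y) < n]
        if near:
            only_b.discard(min(near, key=lambda y: abs(x - y)))
            transpositions += 1
    full_edits = (len(only_a) - transpositions) + len(only_b)
    penalty = full_edits + 0.5 * transpositions
    potential = sum(a) - 1  # potential boundary positions
    return 1 - penalty / potential

identical = seg_similarity([3, 4, 3], [3, 4, 3])  # no edits needed
near_miss = seg_similarity([3, 4, 3], [3, 5, 2])  # one off-by-one boundary
```

    An off-by-one boundary is penalised only half as much as a missing one, which is the property that distinguishes S-style metrics from strict boundary matching.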

  18. improved mathematical models for particle-size distribution data ...

    African Journals Online (AJOL)

    BirukEdimon

    bimodal curve fitting models are shown to give an extremely ... property for assessing the likely behavior of granular ... a need to investigate alternative better models for fitting the .... other in a similar way to the shape parameters used for the ...

  19. Introduction of the conditional correlated Bernoulli model of similarity value distributions and its application to the prospective prediction of fingerprint search performance.

    Science.gov (United States)

    Vogt, Martin; Bajorath, Jürgen

    2011-10-24

    A statistical approach named the conditional correlated Bernoulli model is introduced for modeling of similarity scores and predicting the potential of fingerprint search calculations to identify active compounds. Fingerprint features are rationalized as dependent Bernoulli variables and conditional distributions of Tanimoto similarity values of database compounds given a reference molecule are assessed. The conditional correlated Bernoulli model is utilized in the context of virtual screening to estimate the position of a compound obtaining a certain similarity value in a database ranking. Through the generation of receiver operating characteristic curves from cumulative distribution functions of conditional similarity values for known active and random database compounds, one can predict how successful a fingerprint search might be. The comparison of curves for different fingerprints makes it possible to identify fingerprints that are most likely to identify new active molecules in a database search given a set of known reference molecules.
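
    The Tanimoto scores at the heart of the model are straightforward to compute. A minimal sketch on fingerprints represented as sets of "on" bit positions; the reference and database bit patterns are made up for illustration:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity of two binary fingerprints,
    each given as the set of its "on" bit positions."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 1.0

reference = {1, 4, 9, 16, 25}        # fingerprint of the reference molecule
database = {
    "cpd_a": {1, 4, 9, 36},
    "cpd_b": {1, 4, 9, 16, 25, 36},
    "cpd_c": {2, 3, 5},
}
# Rank database compounds by similarity to the reference, as in a
# fingerprint-based virtual screen.
ranking = sorted(database, key=lambda c: tanimoto(reference, database[c]),
                 reverse=True)
```

    The model described in the record goes further by treating the distribution of such scores statistically, but the ranking step itself is exactly this sort.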

  20. Improvement of energy model based on cubic interpolation curve

    Institute of Scientific and Technical Information of China (English)

    Li Peipei; Li Xuemei; Wei Yu

    2012-01-01

    In CAGD and CG, energy model is often used to control the curves and surfaces shape. In curve/surface modeling, we can get fair curve/surface by minimizing the energy of curve/surface. However, our research indicates that in some cases we can't get fair curves/surface using the current energy model. So an improved energy model is presented in this paper. Examples are also included to show that fair curves can be obtained using the improved energy model.

  1. Motivation to Improve Work through Learning: A Conceptual Model

    Directory of Open Access Journals (Sweden)

    Kueh Hua Ng

    2014-12-01

    Full Text Available This study aims to enhance our current understanding of the transfer of training by proposing a conceptual model that supports the mediating role of motivation to improve work through learning in the relationship between social support and the transfer of training. The examination of the motivation to improve work through learning construct offers a holistic view pertaining to a learner's profile in a workplace setting, which emphasizes learning for the improvement of work performance. The proposed conceptual model is expected to benefit human resource development theory building, as well as field practitioners, by emphasizing the motivational aspects crucial for a successful transfer of training.

  2. Improved Solar-Radiation-Pressure Models for GPS Satellites

    Science.gov (United States)

    Bar-Sever, Yoaz; Kuang, Da

    2006-01-01

    A report describes a series of computational models conceived as an improvement over prior models for determining effects of solar-radiation pressure on orbits of Global Positioning System (GPS) satellites. These models are based on fitting coefficients of Fourier functions of Sun-spacecraft- Earth angles to observed spacecraft orbital motions.
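
    The core of such models is a least-squares fit of Fourier coefficients to observed perturbations as a function of the Sun-spacecraft-Earth angle. A generic sketch with a first-harmonic basis and synthetic noise-free data; this is an illustration of the fitting step, not JPL's actual parameterisation:

```python
import math

def fit_fourier(angles, values):
    """Least-squares fit of a0 + a1*cos(t) + b1*sin(t) to samples (t, y),
    solving the 3x3 normal equations by Gaussian elimination."""
    basis = [[1.0, math.cos(t), math.sin(t)] for t in angles]
    # Normal equations: (A^T A) x = A^T y
    ata = [[sum(r[i] * r[j] for r in basis) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(basis, values)) for i in range(3)]
    m = [row + [rhs] for row, rhs in zip(ata, aty)]
    for col in range(3):                       # elimination with pivoting
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0] * 3                              # back substitution
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

# Recover known coefficients (0.5, 2.0, -1.0) from evenly spaced samples.
angles = [i * 2 * math.pi / 12 for i in range(12)]
values = [0.5 + 2.0 * math.cos(t) - 1.0 * math.sin(t) for t in angles]
a0, a1, b1 = fit_fourier(angles, values)
```

    Real solar-pressure models add higher harmonics and fit against orbit residuals rather than direct samples, but the estimation machinery is the same.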

  3. Bayesian Data Assimilation for Improved Modeling of Road Traffic

    NARCIS (Netherlands)

    Van Hinsbergen, C.P.Y.

    2010-01-01

    This thesis deals with the optimal use of existing models that predict certain phenomena of the road traffic system. Such models are extensively used in Advanced Traffic Information Systems (ATIS), Dynamic Traffic Management (DTM) or Model Predictive Control (MPC) approaches in order to improve the

  4. Matter of Similarity and Dissimilarity in Multi-Ethnic Society: A Model of Dyadic Cultural Norms Congruence

    OpenAIRE

    Abu Bakar Hassan; Mohamad Bahtiar

    2017-01-01

    Taking into consideration the diverse cultural norms in the Malaysian workplace, the proposed model explores Malaysian culture and identity against a backdrop of the pushes and pulls of ethnic diversity in Malaysia. The model seeks to understand relational norm congruence among multiethnic groups in Malaysia, which will enable us to identify Malaysian culture and identity. This is in line with recent calls by various interest groups in Malaysia to focus more on model designs that capture contextual an...

  5. Improvement of Jet Breakup Model in Fuel Coolant Interactions

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Kwang Hyun; Kim, Kyung Kyu; Nam, Yang Ho [Korea Maritime Univ., Jinhae (Korea, Republic of)

    2007-02-15

    The objective of this work is to improve the TRACER-II code in conjunction with the OECD SERENA project for validation of vapor explosion analysis codes. The FCI breakup model is improved by building a four-fluid multiphase flow model, and existing models and experimental data are examined for the validation of the model. A four-fluid multiphase flow model has been built into the TRACER-II code and a jet breakup model has been included. Kelvin-Helmholtz instability is modelled for the jet side and boundary layer stripping is modelled for the jet leading edge. This work can contribute to the reduction of uncertainty in the FCI models for reactor safety analysis.

  6. Similar support for three different life course socioeconomic models on predicting premature cardiovascular mortality and all-cause mortality

    Directory of Open Access Journals (Sweden)

    Lynch John

    2006-08-01

    Full Text Available Abstract Background There are at least three broad conceptual models for the impact of the social environment on adult disease: the critical period, social mobility, and cumulative life course models. Several studies have shown an association between each of these models and mortality. However, few studies have investigated the importance of the different models within the same setting, and none has been performed in samples of the whole population. The purpose of the present study was to study the relation between socioeconomic position (SEP) and mortality using different conceptual models in the whole population of Scania. Methods In the present investigation we use socioeconomic information on all men (N = 48,909) and women (N = 47,688) born between 1945 and 1950, alive on January 1st, 1990, and living in the Region of Scania, in Sweden. Focusing on three specific life periods (i.e., ages 10–15, 30–35 and 40–45), we examined the association between SEP and the 12-year risk of premature cardiovascular mortality and all-cause mortality. Results There was a strong relation between SEP and mortality among those inside the workforce, irrespective of the conceptual model used. There was a clear upward trend in the mortality hazard rate ratios (HRR) with accumulated exposure to manual SEP in both men (p for trend Conclusion There was a strong relation between SEP and cardiovascular and all-cause mortality, irrespective of the conceptual model used. The critical period, social mobility, and cumulative life course models showed the same fit to the data. That is, no single model could be pointed out as "the best" model, and even in this large unselected sample it was not possible to adjudicate which theories best describe the links between life course SEP and mortality risk.

  7. Improved Computational Model of Grid Cells Based on Column Structure

    Institute of Scientific and Technical Information of China (English)

    Yang Zhou; Dewei Wu; Weilong Li; Jia Du

    2016-01-01

    To simulate the firing pattern of biological grid cells, this paper presents an improved computational model of grid cells based on column structure. In this model, the displacement along different directions is processed by a modulus operation, and the obtained remainder is associated with the firing rate of the grid cell. Compared with the original model, the improved parts are that the base of the modulus operation is changed and the firing rate in the firing field is encoded by a Gaussian-like function. Simulation validates that the firing pattern generated by the improved computational model is more consistent with biological characteristics than the original model. Besides, although the firing pattern is badly influenced by cumulative positioning error, the computational model can still generate the regularly hexagonal firing pattern when the real-time positioning results are modified.
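
    One way to read the modulus-plus-Gaussian recipe is the following sketch: project position onto axes 60° apart, take the remainder modulo the grid spacing, and pass the distance to the nearest field line through a Gaussian. The three projection axes, spacing, and width are our illustrative assumptions, not the paper's exact construction:

```python
import math

def grid_firing(x, y, spacing=1.0, sigma=0.2):
    """Hypothetical grid-cell rate: for three axes 60 degrees apart,
    compute the projected displacement modulo the spacing (the model's
    modulus step) and encode it with a Gaussian-like bump; the product
    of the three band responses yields a hexagonal-like pattern."""
    rate = 1.0
    for k in range(3):
        theta = k * math.pi / 3
        proj = x * math.cos(theta) + y * math.sin(theta)
        rem = proj % spacing
        d = min(rem, spacing - rem)  # distance to nearest field line
        rate *= math.exp(-d * d / (2 * sigma * sigma))
    return rate

peak = grid_firing(0.0, 0.0)      # on a field centre: maximal rate
trough = grid_firing(0.5, 0.0)    # between field lines: strongly suppressed
```

    Sampling `grid_firing` over a 2-D area reproduces the periodic multi-peaked pattern characteristic of grid cells, which is the behaviour the paper's model refines.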

  8. Estimating similarity of XML Schemas using path similarity measure

    Directory of Open Access Journals (Sweden)

    Veena Trivedi

    2012-07-01

    Full Text Available In this paper, an attempt has been made to develop an algorithm which estimates the similarity of XML Schemas using multiple similarity measures. For this task, the XML Schema element information has been represented in the form of strings, and four different similarity measure approaches have been employed. To further improve the similarity measure, an overall similarity measure has also been calculated. The approach used in this paper is a distinctive one, as it calculates the similarity between two XML schemas using four approaches and gives an integrated value for the similarity measure.
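
    The general pattern (score schema-element path strings with several string-similarity measures, then combine them into one value) can be sketched as follows. The two measures, the plain average, and the example paths are our assumptions; the paper uses four measures of its own:

```python
from difflib import SequenceMatcher

def trigram_jaccard(a, b):
    """Jaccard overlap of character trigrams."""
    ta = {a[i:i + 3] for i in range(len(a) - 2)}
    tb = {b[i:i + 3] for i in range(len(b) - 2)}
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def path_similarity(p, q):
    """Hypothetical sketch: represent schema elements as root-to-leaf path
    strings, score them with several measures, and average the scores into
    a single integrated similarity value."""
    measures = [
        SequenceMatcher(None, p, q).ratio(),  # edit-style similarity
        trigram_jaccard(p, q),                # n-gram overlap
    ]
    return sum(measures) / len(measures)

same = path_similarity("/order/customer/name", "/order/customer/name")
diff = path_similarity("/order/customer/name", "/invoice/client/id")
```

    Averaging is the simplest integration rule; a weighted combination tuned on labelled schema pairs would be the natural refinement.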

  9. SOME IMPROVEMENTS IN VISCO-PLASTIC MODEL CONSIDERING DYNAMIC RECRYSTALLIZATION

    Institute of Scientific and Technical Information of China (English)

    QU Jie; JIN Quanlin; XU Bingye

    2004-01-01

    Some improvements in Jin's thermal visco-plastic constitutive model considering dynamic recrystallization are presented in this paper. By introducing the influence of the strain rate on the mobility of dynamic recovery, the improved model can be more smoothly applied to numerical simulation of material flow behaviour and microstructure prediction during hot working. Another improvement is to consider the accumulated dislocation energy in the newly recrystallized grains as a resistance to the driving force of dynamic recrystallization. This improvement makes the predicted progress of dynamic recrystallization agree better with the actual physical process. Finally, some numerical examples are given to show the advantages of the improved model and its ability to predict dynamic recrystallization.

  10. Matter of Similarity and Dissimilarity in Multi-Ethnic Society: A Model of Dyadic Cultural Norms Congruence

    Directory of Open Access Journals (Sweden)

    Abu Bakar Hassan

    2017-01-01

    Full Text Available Taking into consideration the diverse cultural norms in the Malaysian workplace, the proposed model explores Malaysian culture and identity against a backdrop of the pushes and pulls of ethnic diversity in Malaysia. The model seeks to understand relational norm congruence among multiethnic groups in Malaysia, which will enable us to identify Malaysian culture and identity. This is in line with recent calls by various interest groups in Malaysia to focus more on model designs that capture contextual and cultural factors that influence Malaysian culture and identity.

  11. Systematic improvement of molecular representations for machine learning models

    CERN Document Server

    Huang, Bing

    2016-01-01

    The predictive accuracy of Machine Learning (ML) models of molecular properties depends on the choice of the molecular representation. We introduce a hierarchy of representations based on uniqueness and target similarity criteria. To systematically control target similarity, we rely on interatomic many-body expansions including Bonding, Angular, and higher order terms (BA). Addition of higher order contributions systematically increases similarity to the potential energy function as well as predictive accuracy of the resulting ML models. Numerical evidence is presented for the performance of BAML models trained on molecular properties pre-calculated at electron-correlated and density functional theory levels of theory for thousands of small organic molecules. Properties studied include enthalpies and free energies of atomization, heat capacity, zero-point vibrational energies, dipole moment, polarizability, HOMO/LUMO energies and gap, ionization potential, electron affinity, and electronic excitations. After tr...

  12. Motivation to Improve Work through Learning: A Conceptual Model

    OpenAIRE

    Kueh Hua Ng; Rusli Ahmad

    2014-01-01

    This study aims to enhance our current understanding of the transfer of training by proposing a conceptual model that supports the mediating role of motivation to improve work through learning in the relationship between social support and the transfer of training. The examination of the motivation to improve work through learning construct offers a holistic view pertaining to a learner's profile in a workplace setting, which emphasizes learning for the imp...

  13. Modelling expertise at different levels of granularity using semantic similarity measures in the context of collaborative knowledge-curation platforms.

    Science.gov (United States)

    Ziaimatin, Hasti; Groza, Tudor; Tudorache, Tania; Hunter, Jane

    2016-12-01

    Collaboration platforms provide a dynamic environment where the content is subject to ongoing evolution through expert contributions. The knowledge embedded in such platforms is not static as it evolves through incremental refinements - or micro-contributions. Such refinements provide vast resources of tacit knowledge and experience. In our previous work, we proposed and evaluated a Semantic and Time-dependent Expertise Profiling (STEP) approach for capturing expertise from micro-contributions. In this paper we extend our investigation to structured micro-contributions that emerge from an ontology engineering environment, such as the one built for developing the International Classification of Diseases (ICD) revision 11. We take advantage of the semantically related nature of these structured micro-contributions to showcase two major aspects: (i) a novel semantic similarity metric, in addition to an approach for creating bottom-up baseline expertise profiles using expertise centroids; and (ii) the application of STEP in this new environment combined with the use of the same semantic similarity measure to both compare STEP against baseline profiles, as well as to investigate the coverage of these baseline profiles by STEP.

  14. Improved actions for the two-dimensional sigma-model

    OpenAIRE

    Caracciolo, Sergio; Montanari, Andrea; Pelissetto, Andrea

    1997-01-01

    For the O(N) sigma-model we studied the improvement program for actions with two- and four-spin interactions. An interesting example is an action which is reflection-positive, on-shell improved, and has all the couplings defined on an elementary plaquette. We show the large-N solution and preliminary Monte Carlo results for N=3.

  15. Hypersonic Vehicle Tracking Based on Improved Current Statistical Model

    Directory of Open Access Journals (Sweden)

    He Guangjun

    2013-11-01

    Full Text Available A new method of tracking near space hypersonic vehicles is put forward. According to hypersonic vehicles' characteristics, we improved the current statistical model through online identification of the maneuvering frequency. A Monte Carlo simulation is used to analyze the performance of the method. The results show that the improved method exhibits very good tracking performance in comparison with the old method.

  16. Application of Improved Grey Prediction Model to Petroleum Cost Forecasting

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The grey theory is a multidisciplinary and generic theory that deals with systems that lack adequate information and/or have only poor information. In this paper, an improved grey model using a step function was proposed. Petroleum cost forecasting for the Henan oil field was used as the case study to test the efficiency and accuracy of the proposed method. According to the experimental results, the proposed method could obviously improve the prediction accuracy of the original grey model.
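
    For reference, the baseline GM(1,1) grey model that such papers refine can be sketched as follows: accumulate the series, fit the whitened equation dx/dt + a·x = b by least squares on background values, and forecast from the exponential time response. The step-function improvement itself is not reproduced here, and the cost series is made up:

```python
import math

def gm11_forecast(series, steps=1):
    """Classical GM(1,1): fit (a, b) from x0(k) + a*z(k) = b and forecast."""
    n = len(series)
    x1, s = [], 0.0
    for v in series:            # 1-AGO: accumulated generating operation
        s += v
        x1.append(s)
    z = [(x1[i] + x1[i + 1]) / 2 for i in range(n - 1)]  # background values
    y = series[1:]
    m = n - 1
    sz, szz = sum(z), sum(v * v for v in z)
    sy, szy = sum(y), sum(zi * yi for zi, yi in zip(z, y))
    det = m * szz - sz * sz     # 2x2 normal equations for rows [-z_k, 1]
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    c = series[0] - b / a
    def x1_hat(k):              # time response of the accumulated series
        return c * math.exp(-a * k) + b / a
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]

# A cost series growing ~10% per period; GM(1,1) recovers the trend.
forecast = gm11_forecast([100.0, 110.0, 121.0, 133.1], steps=2)
```

    GM(1,1) fits an exponential trend from as few as four observations, which is why grey models suit cost series with scarce data; the paper's step-function variant targets cases where that single exponential is too rigid.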

  17. Self-similar decay to the marginally stable ground state in a model for film flow over inclined wavy bottoms

    Directory of Open Access Journals (Sweden)

    Tobias Hacker

    2012-04-01

    Full Text Available The integral boundary layer system (IBL with spatially periodic coefficients arises as a long wave approximation for the flow of a viscous incompressible fluid down a wavy inclined plane. The Nusselt-like stationary solution of the IBL is linearly at best marginally stable; i.e., it has essential spectrum at least up to the imaginary axis. Nevertheless, in this stable case we show that localized perturbations of the ground state decay in a self-similar way. The proof uses the renormalization group method in Bloch variables and the fact that in the stable case the Burgers equation is the amplitude equation for long waves of small amplitude in the IBL. It is the first time that such a proof is given for a quasilinear PDE with spatially periodic coefficients.

  18. Large mass self-similar solutions of the parabolic-parabolic Keller-Segel model of chemotaxis.

    Science.gov (United States)

    Biler, Piotr; Corrias, Lucilla; Dolbeault, Jean

    2011-07-01

    In two space dimensions, the parabolic-parabolic Keller-Segel system shares many properties with the parabolic-elliptic Keller-Segel system. In particular, solutions globally exist in both cases as long as their mass is less than a critical threshold M(c). However, this threshold is not as clear in the parabolic-parabolic case as it is in the parabolic-elliptic case, in which solutions with mass above M(c) always blow up. Here we study forward self-similar solutions of the parabolic-parabolic Keller-Segel system and prove that, in some cases, such solutions globally exist even if their total mass is above M(c), which is forbidden in the parabolic-elliptic case.
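
    For orientation, forward self-similar solutions of the two-dimensional system are sought under the standard parabolic scaling; a sketch of the ansatz in our notation (the consumption term in the second equation is dropped, a common normalisation that makes the scaling exact):

```latex
% Parabolic-parabolic Keller-Segel system in R^2 (tau > 0 a relaxation parameter):
%   \partial_t u = \Delta u - \nabla\cdot(u\,\nabla v), \qquad
%   \tau\,\partial_t v = \Delta v + u.
% Forward self-similar ansatz under parabolic scaling (mass is scale-invariant in 2D):
u(x,t) = \frac{1}{t}\,U\!\left(\frac{x}{\sqrt{t}}\right), \qquad
v(x,t) = V\!\left(\frac{x}{\sqrt{t}}\right).
```

    Substituting this ansatz reduces the PDE system to a coupled elliptic system for the profiles (U, V), whose solvability as a function of the total mass is the question the record's result addresses.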

  19. A participatory model for improving occupational health and safety: improving informal sector working conditions in Thailand.

    Science.gov (United States)

    Manothum, Aniruth; Rukijkanpanich, Jittra; Thawesaengskulthai, Damrong; Thampitakkul, Boonwa; Chaikittiporn, Chalermchai; Arphorn, Sara

    2009-01-01

    The purpose of this study was to evaluate the implementation of an Occupational Health and Safety Management Model for informal sector workers in Thailand. The studied model was characterized by participatory approaches to preliminary assessment, observation of informal business practices, group discussion and participation, and the use of environmental measurements and samples. This model consisted of four processes: capacity building, risk analysis, problem solving, and monitoring and control. The participants consisted of four local labor groups from different regions, including wood carving, hand-weaving, artificial flower making, and batik processing workers. The results demonstrated that, as a result of applying the model, the working conditions of the informal sector workers had improved to meet necessary standards. This model encouraged the use of local networks, which led to cooperation within the groups to create appropriate technologies to solve their problems. The authors suggest that this model could effectively be applied elsewhere to improve informal sector working conditions on a broader scale.

  20. Improving Catastrophe Modeling for Business Interruption Insurance Needs.

    Science.gov (United States)

    Rose, Adam; Huyck, Charles K

    2016-10-01

    While catastrophe (CAT) modeling of property damage is well developed, modeling of business interruption (BI) lags far behind. One reason is the crude nature of functional relationships in CAT models that translate property damage into BI. Another is that estimating BI losses is more complicated because it depends greatly on public and private decisions during recovery with respect to resilience tactics that dampen losses by using remaining resources more efficiently to maintain business function and to recover more quickly. This article proposes a framework for improving hazard loss estimation for BI insurance needs. Improved data collection that allows for analysis at the level of individual facilities within a company can improve matching the facilities with the effectiveness of individual forms of resilience, such as accessing inventories, relocating operations, and accelerating repair, and can therefore improve estimation accuracy. We then illustrate the difference this can make in a case study example of losses from a hurricane.

  1. Improvement and Validation of Weld Residual Stress Modelling Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Weilin; Gunnars, Jens (Inspecta Technology AB, Stockholm (Sweden)); Dong, Pingsha; Hong, Jeong K. (Center for Welded Structures Research, Battelle, Columbus, OH (United States))

    2009-06-15

    The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, taking advantage of recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions by validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available, a mixed hardening model should be used.

  2. Using airborne geophysical surveys to improve groundwater resource management models

    Science.gov (United States)

    Abraham, Jared D.; Cannia, James C.; Peterson, Steven M.; Smith, Bruce D.; Minsley, Burke J.; Bedrosian, Paul A.

    2010-01-01

    Increasingly, groundwater management requires more accurate hydrogeologic frameworks for groundwater models. These complex issues have created the demand for innovative approaches to data collection. In complicated terrains, groundwater modelers benefit from continuous high‐resolution geologic maps and their related hydrogeologic‐parameter estimates. The USGS and its partners have collaborated to use airborne geophysical surveys for near‐continuous coverage of areas of the North Platte River valley in western Nebraska. The survey objectives were to map the aquifers and bedrock topography of the area to help improve the understanding of groundwater‐surface‐water relationships, leading to improved water management decisions. Frequency‐domain heliborne electromagnetic surveys were completed, using a unique survey design to collect resistivity data that can be related to lithologic information to refine groundwater model inputs. To render the geophysical data useful to multidimensional groundwater models, numerical inversion is necessary to convert the measured data into a depth‐dependent subsurface resistivity model. This inverted model, in conjunction with sensitivity analysis, geological ground truth (boreholes and surface geology maps), and geological interpretation, is used to characterize hydrogeologic features. Interpreted two‐ and three‐dimensional data coverage provides the groundwater modeler with a high‐resolution hydrogeologic framework and a quantitative estimate of framework uncertainty. This method of creating hydrogeologic frameworks improved the understanding of flow path orientation by redefining the location of the paleochannels and associated bedrock highs. The improved models reflect actual hydrogeology at a level of accuracy not achievable using previous data sets.

  3. Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS

    Science.gov (United States)

    2015-09-30

    High-resolution simulations using nonhydrostatic models like SUNTANS are crucial for understanding multiscale processes that are unresolved … (Report: Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS; Oliver B. Fringer, Dept. of Civil and Environmental Engineering, Stanford University.)

  4. Improved Modeling of Intelligent Tutoring Systems Using Ant Colony Optimization

    Science.gov (United States)

    Rastegarmoghadam, Mahin; Ziarati, Koorush

    2017-01-01

    Swarm intelligence approaches, such as ant colony optimization (ACO), are used in adaptive e-learning systems and provide an effective method for finding optimal learning paths based on self-organization. The aim of this paper is to develop an improved modeling of adaptive tutoring systems using ACO. In this model, the learning object is…

  5. The growth and the fluid dynamics of protein crystals and soft organic tissues: models and simulations, similarities and differences.

    Science.gov (United States)

    Lappa, Marcello

    2003-09-21

    The fluid-dynamic environment within typical growth reactors as well as the interaction of such flow with the intrinsic kinetics of the growth process are investigated in the frame of the new fields of protein crystal and tissue engineering. The paper uses available data to introduce a set of novel growth models. The surface conditions are coupled to the exchange mass flux at the specimen/culture-medium interface and lead to the introduction of a group of differential equations for the nutrient concentration around the sample and for the evolution of the construct mass displacement. These models take into account the sensitivity of the construct/liquid interface to the level of supersaturation in the case of macromolecular crystal growth and to the "direct" effect of the fluid-dynamic shear stress in the case of biological tissue growth. They then are used to show how the proposed surface kinetic laws can predict (through sophisticated numerical simulations) many of the known characteristics of protein crystals and biological tissues produced using well-known and widely used reactors. This procedure provides validation of the models and associated numerical method and at the same time gives insights into the mechanisms of the phenomena. The onset of morphological instabilities is discussed and investigated in detail. The interplay between the increasing size of the sample and the structure of the convective field established inside the reactor is analysed. It is shown that this interaction is essential in determining the time evolution of the specimen shape. Analogies about growing macromolecular crystals and growing biological tissues are pointed out in terms of behaviours and cause-and-effect relationships. 
These aspects lead to a common source (in terms of original mathematical models, ideas and results) made available for the scientific community under the optimistic idea that the contacts established between the "two fields of engineering" will develop into an

  6. Improving of local ozone forecasting by integrated models.

    Science.gov (United States)

    Gradišar, Dejan; Grašič, Boštjan; Božnar, Marija Zlata; Mlakar, Primož; Kocijan, Juš

    2016-09-01

    This paper discusses the problem of forecasting maximum ozone concentrations in urban microlocations, where reliable alerting of the local population when thresholds have been surpassed is necessary. To improve the forecast, a methodology of integrated models is proposed. The model is based on multilayer perceptron neural networks that use as inputs all available information from the QualeAria air-quality model, the WRF numerical weather prediction model, and on-site measurements of meteorology and air pollution. While air-quality and meteorological models cover large geographical 3-dimensional spaces, their local resolution is often not satisfactory. On the other hand, empirical methods have the advantage of good local forecasts. In this paper, integrated models are used for improved 1-day-ahead forecasting of the maximum hourly value of ozone within each day for representative locations in Slovenia. The WRF meteorological model is used for forecasting meteorological variables and the QualeAria air-quality model for gas concentrations. Their predictions, together with measurements from ground stations, are used as inputs to a neural network. The model validation results show that integrated models noticeably improve ozone forecasts and provide better alert systems.
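
  The integration idea — feeding QualeAria and WRF forecasts together with on-site measurements into an empirical model — can be sketched with a linear least-squares stand-in for the paper's multilayer perceptron. All feature values are hypothetical, and the target is a synthetic exact linear blend so that the fit recovers the known weights:

```python
def fit_least_squares(X, y):
    """Solve the normal equations (A^T A) w = A^T y by Gaussian
    elimination; A is X with a bias column appended (pure stdlib)."""
    A = [row + [1.0] for row in X]
    n, d = len(A), len(A[0])
    G = [[sum(A[i][p] * A[i][q] for i in range(n)) for q in range(d)]
         for p in range(d)]
    h = [sum(A[i][p] * y[i] for i in range(n)) for p in range(d)]
    for c in range(d):                      # forward elimination
        piv = max(range(c, d), key=lambda r: abs(G[r][c]))
        G[c], G[piv] = G[piv], G[c]
        h[c], h[piv] = h[piv], h[c]
        for r in range(c + 1, d):
            f = G[r][c] / G[c][c]
            for k in range(c, d):
                G[r][k] -= f * G[c][k]
            h[r] -= f * h[c]
    w = [0.0] * d
    for r in range(d - 1, -1, -1):          # back substitution
        w[r] = (h[r] - sum(G[r][k] * w[k] for k in range(r + 1, d))) / G[r][r]
    return w                                # last entry is the bias

# Hypothetical feature rows: [WRF temperature forecast, QualeAria ozone
# forecast, yesterday's measured max ozone]. The target is a synthetic
# exact linear blend 0.5*t + 0.6*qa + 0.2*prev + 5, so the fit is exact.
X = [[24.0, 95.0, 100.0], [28.0, 120.0, 110.0], [31.0, 140.0, 135.0],
     [22.0, 80.0, 90.0], [29.0, 130.0, 125.0]]
y = [94.0, 113.0, 131.5, 82.0, 122.5]
w = fit_least_squares(X, y)
pred = w[0] * 27.0 + w[1] * 115.0 + w[2] * 112.0 + w[3]
```

  The real integrated model replaces this linear blend with a trained neural network, but the data flow — model forecasts plus ground-station measurements in, next-day maximum out — is the same.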

  7. Phenotypic similarity of transmissible mink encephalopathy in cattle and L-type bovine spongiform encephalopathy in a mouse model.

    Science.gov (United States)

    Baron, Thierry; Bencsik, Anna; Biacabe, Anne-Gaëlle; Morignat, Eric; Bessen, Richard A

    2007-12-01

    Transmissible mink encephalopathy (TME) is a foodborne transmissible spongiform encephalopathy (TSE) of ranch-raised mink; infection with a ruminant TSE has been proposed as the cause, but the precise origin of TME is unknown. To compare the phenotypes of each TSE, a bovine-passaged TME isolate and 3 distinct natural bovine spongiform encephalopathy (BSE) agents (typical BSE, H-type BSE, and L-type BSE) were inoculated into an ovine transgenic mouse line (TgOvPrP4). Transgenic mice were susceptible to infection with bovine-passaged TME, typical BSE, and L-type BSE but not with H-type BSE. Based on survival periods, brain lesion profiles, disease-associated prion protein brain distribution, and biochemical properties of protease-resistant prion protein, typical BSE had a distinct phenotype in ovine transgenic mice compared to L-type BSE and bovine TME. The similar phenotypic properties of L-type BSE and bovine TME in TgOvPrP4 mice suggest that L-type BSE is a much more likely candidate for the origin of TME than is typical BSE.

  8. A procedure for Applying a Maturity Model to Process Improvement

    Directory of Open Access Journals (Sweden)

    Elizabeth Pérez Mergarejo

    2014-09-01

    Full Text Available A maturity model is an evolutionary roadmap for implementing the vital practices from one or more domains of organizational process. The use of maturity models is poor in the Latin-American context. This paper presents a procedure for applying the Process and Enterprise Maturity Model developed by Michael Hammer [1]. The procedure is divided into three steps: Preparation, Evaluation and Improvement plan. Hammer's maturity model, together with the proposed procedure, can be used by organizations to improve their processes, involving managers and employees.

  9. Process Correlation Analysis Model for Process Improvement Identification

    Directory of Open Access Journals (Sweden)

    Su-jin Choi

    2014-01-01

    software development process. However, in the current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to significant efforts and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  10. Improvement of a near wake model for trailing vorticity

    DEFF Research Database (Denmark)

    Pirrung, Georg; Hansen, Morten Hartvig; Aagaard Madsen, Helge

    2014-01-01

    A near wake model, originally proposed by Beddoes, is further developed. The purpose of the model is to account for the radially dependent time constants of the fast aerodynamic response and to provide a tip loss correction. It is based on lifting line theory and models the downwash due to roughly the first 90 degrees of rotation. This restriction of the model to the near wake allows for using a computationally efficient indicial function algorithm. The aim of this study is to improve the accuracy of the downwash close to the root and tip of the blade and to decrease the sensitivity of the model to temporal discretization, both regarding numerical stability and quality of the results. The modified near wake model is coupled to an aerodynamics model, which consists of a blade element momentum model with dynamic inflow for the far wake and a 2D shed vorticity model that simulates the unsteady buildup …

  11. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model of this paper considers four-stage cycle productivity and the productivity is assumed to be a linear function of fifty four improvement techniques. The proposed model of this paper is implemented for a real-world case study of manufacturing plant. The resulted problem is formulated as a mixed integer programming which can be solved for optimality using traditional methods. The preliminary results of the implementation of the proposed model of this paper indicate that the productivity can be improved through a change on equipments and it can be easily applied for both manufacturing and service industries.
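
  The underlying selection problem — pick a subset of improvement techniques that maximizes productivity gain within a budget — can be illustrated with an exhaustive search, a small stand-in for the paper's mixed integer program. Technique names, costs, and gains below are invented for illustration:

```python
from itertools import combinations

# Hypothetical improvement techniques: (name, cost, productivity gain).
techniques = [
    ("preventive maintenance", 40, 0.12),
    ("operator training",      25, 0.08),
    ("equipment upgrade",      70, 0.20),
    ("layout redesign",        55, 0.10),
    ("quality circles",        15, 0.05),
]
budget = 100

def best_selection(techniques, budget):
    """Exhaustive search over all technique subsets: a stand-in for the
    mixed integer program, fine for a handful of candidates."""
    best, best_gain = (), 0.0
    for r in range(1, len(techniques) + 1):
        for combo in combinations(techniques, r):
            cost = sum(t[1] for t in combo)
            gain = sum(t[2] for t in combo)
            if cost <= budget and gain > best_gain:
                best, best_gain = combo, gain
    return [t[0] for t in best], best_gain

chosen, gain = best_selection(techniques, budget)
```

  The paper's formulation scales this to fifty-four techniques and a four-stage productivity cycle, where exhaustive search is infeasible and an integer-programming solver is required.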

  12. Improvements of the Analytical Model of Monte Carlo

    Institute of Scientific and Technical Information of China (English)

    HE Qing-Fang; XU Zheng; TENG Feng; LIU De-Ang; XU Xu-Rong

    2006-01-01

    By extending the conduction band structure, we set up a new analytical model in ZnS. Comparing the results with both the old analytical model and the full band model, we find them to be in reasonable agreement with the full band method, with improved calculation precision. Another important improvement is reduced computation time, achieved by fitting the scattering rate curves to data.

  13. An improved market penetration model for wind energy technology forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Lund, P.D. [Helsinki Univ. of Technology, Espoo (Finland). Advanced Energy Systems

    1995-12-31

    An improved market penetration model with application to wind energy forecasting is presented. In the model, a technology diffusion model and a manufacturing learning curve are combined. Based on an 85% progress ratio found for European wind manufacturers and on wind market statistics, an additional wind power capacity of ca. 4 GW is needed in Europe to reach a 30% price reduction. A full breakthrough to low-cost utility bulk power markets could be achieved at a 24 GW level. (author)
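
  The capacity figure can be reproduced approximately from the stated progress ratio with a standard learning-curve calculation. The installed-base value below is an assumption chosen to match the quoted ~4 GW, not a number from the abstract:

```python
import math

def extra_capacity_for_price_cut(progress_ratio, price_cut, installed_gw):
    """Additional capacity (GW) needed for a given price reduction under a
    learning curve: cost falls to `progress_ratio` of its value with each
    doubling of cumulative installed capacity."""
    doublings = math.log(1.0 - price_cut) / math.log(progress_ratio)
    multiplier = 2.0 ** doublings          # cumulative-capacity multiplier
    return installed_gw * (multiplier - 1.0)

# Assumed European installed base of ~1.1 GW (illustrative). An 85%
# progress ratio and a 30% price cut then imply roughly the ~4 GW of
# additional capacity quoted in the abstract.
extra = extra_capacity_for_price_cut(0.85, 0.30, 1.1)
```

  A 30% cut needs about 2.2 doublings of cumulative capacity at an 85% progress ratio, i.e. roughly a 4.6-fold increase over the installed base.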

  14. An improved equivalent circuit model of radial mode piezoelectric transformer.

    Science.gov (United States)

    Huang, Yihua; Huang, Wei

    2011-05-01

    In this paper, both the equivalent circuit models of the radial mode and the coupled thickness vibration mode of the radial mode piezoelectric transformer are deduced, and then with the Y-parameter matrix method and the dual-port network theory, an improved equivalent circuit model for the multilayer radial mode piezoelectric transformer is established. A radial mode transformer sample is tested to verify the equivalent circuit model. The experimental results show that the model proposed in this paper is more precise than the typical model.
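
  Two-port combination of the kind used in the Y-parameter matrix method can be sketched as follows: a common route is converting each stage's Y-parameters to ABCD (chain) form, matrix-multiplying the cascade, and converting back. The element values here are arbitrary and not from the paper:

```python
def y_to_abcd(y11, y12, y21, y22):
    """Standard two-port conversion from Y-parameters to ABCD parameters."""
    A = -y22 / y21
    B = -1.0 / y21
    C = (y12 * y21 - y11 * y22) / y21   # -det(Y) / y21
    D = -y11 / y21
    return A, B, C, D

def cascade(p, q):
    """ABCD parameters of two stages in cascade (2x2 matrix product)."""
    A1, B1, C1, D1 = p
    A2, B2, C2, D2 = q
    return (A1 * A2 + B1 * C2, A1 * B2 + B1 * D2,
            C1 * A2 + D1 * C2, C1 * B2 + D1 * D2)

# Sanity check: a series impedance Z has Y = [[1/Z, -1/Z], [-1/Z, 1/Z]];
# cascading two of them should give ABCD = (1, 2Z, 0, 1).
Z = 5.0
stage = y_to_abcd(1.0 / Z, -1.0 / Z, -1.0 / Z, 1.0 / Z)
abcd = cascade(stage, stage)
```

  In the transformer model, the two stages would be the radial-mode and coupled thickness-mode equivalent circuits, with complex (frequency-dependent) elements in place of the real values used here.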

  15. INTEGRATED COST MODEL FOR IMPROVING THE PRODUCTION IN COMPANIES

    Directory of Open Access Journals (Sweden)

    Zuzana Hajduova

    2014-12-01

    Full Text Available Purpose: All processes in the company play an important role in ensuring a functional integrated management system. We point out the need for a systematic approach to the use of quantitative, and especially statistical, methods for modelling the cost of the improvement activities that are part of an integrated management system. The development of integrated management systems worldwide leads towards building systematic procedures for the implementation, maintenance and improvement of all systems according to the requirements of all sides involved. Methodology: Statistical evaluation of the economic indicators of improvement costs, and the need for a systematic approach to their management within integrated management systems, plays a key role in the management of processes in the company Cu Drôt, a.s. The aim of this publication is to highlight the importance of proper implementation of statistical methods in the management of improvement costs under current market conditions, and to document the legitimacy of a systematic approach to monitoring and analysing improvement indicators with the aim of efficient process management. We provide a specific example of the implementation of appropriate statistical methods in the production of copper wire in the company Cu Drôt, a.s. This publication also aims to create a model for the estimation of integrated improvement costs, which, through the use of statistical methods in the company Cu Drôt, a.s., is used to support decision-making on improving efficiency. Findings: In the present publication, a method for modelling the improvement process in an integrated manner is proposed, in which the basic attributes of improvement in quality, safety and environment are considered and synergistically combined in the same improvement project. The work examines the use of sophisticated quantitative, especially

  16. Crop model improvement reduces the uncertainty of the response to temperature of multi-model ensembles

    DEFF Research Database (Denmark)

    Maiorano, Andrea; Martre, Pierre; Asseng, Senthold

    2017-01-01

    To improve climate change impact estimates and to quantify their uncertainty, multi-model ensembles (MMEs) have been suggested. Model improvements can improve the accuracy of simulations and reduce the uncertainty of climate change impact assessments. Furthermore, they can reduce the number of mo...

  17. The role of intergenerational similarity and parenting in adolescent self-criticism: An actor-partner interdependence model.

    Science.gov (United States)

    Bleys, Dries; Soenens, Bart; Boone, Liesbet; Claes, Stephan; Vliegen, Nicole; Luyten, Patrick

    2016-06-01

    Research investigating the development of adolescent self-criticism has typically focused on the role of either parental self-criticism or parenting. This study used an actor-partner interdependence model to examine an integrated theoretical model in which achievement-oriented psychological control has an intervening role in the relation between parental and adolescent self-criticism. Additionally, the relative contribution of both parents and the moderating role of adolescent gender were examined. Participants were 284 adolescents (M = 14 years, range = 12-16 years) and their parents (M = 46 years, range = 32-63 years). Results showed that only maternal self-criticism was directly related to adolescent self-criticism. However, both parents' achievement-oriented psychological control had an intervening role in the relation between parent and adolescent self-criticism in both boys and girls. Moreover, one parent's achievement-oriented psychological control was not predicted by the self-criticism of the other parent.

  18. "EII META-MODEL" ON INTEGRATION FRAMEWORK FOR VIABLE ENTERPRISE SYSTEMS - CITY PLANNING METAPHOR BASED ON STRUCTURAL SIMILARITY

    Institute of Scientific and Technical Information of China (English)

    Yukio NAMBA; Junichi IIJIMA

    2003-01-01

    Enterprise systems must have the structure to adapt to changes in the business environment. When rebuilding enterprise systems to meet extended operational boundaries, the concept of IT city planning is applicable and effective. The aim of this paper is to describe the architectural approach from the integrated information infrastructure (In3) standpoint and to propose applying the "City Planning" concept to rebuilding "inter-application spaghetti" enterprise systems. This is mainly because the portion of infrastructure has increased with the change of information systems from centralized systems to distributed and open systems. As enterprise systems have involved heterogeneity or architectural black boxes, an integration framework (meta-architecture) may be required as a discipline based on heterogeneity that can provide a comprehensive view of the enterprise systems. This paper proposes the "EII Meta-model" as an integration framework that can optimize the overall enterprise systems from the IT city planning point of view. The EII Meta-model consists of the "Integrated Information Infrastructure Map (In3-Map)", "Service Framework" and "IT Scenario". It would be applicable and effective for the viable enterprise, because it has the mechanism to adapt to change. Finally, we illustrate a case of an information system in an online securities company and demonstrate the applicability and effectiveness of the EII Meta-model in meeting their business goals.

  19. Unbound position II in MXCXXC metallochaperone model peptides impacts metal binding mode and reactivity: Distinct similarities to whole proteins.

    Science.gov (United States)

    Shoshan, Michal S; Dekel, Noa; Goch, Wojciech; Shalev, Deborah E; Danieli, Tsafi; Lebendiker, Mario; Bal, Wojciech; Tshuva, Edit Y

    2016-06-01

    The effect of position II in the binding sequence of copper metallochaperones, which varies between Thr and His, was investigated through structural analysis and affinity and oxidation kinetic studies of model peptides. A first Cys-Cu(I)-Cys model obtained for the His peptide at acidic and neutral pH, correlated with higher affinity and more rapid oxidation of its complex; in contrast, the Thr peptide with the Cys-Cu(I)-Met coordination under neutral conditions demonstrated weaker and pH dependent binding. Studies with human antioxidant protein 1 (Atox1) and three of its mutants where S residues were replaced with Ala suggested that (a) the binding affinity is influenced more by the binding sequence than by the protein fold (b) pH may play a role in binding reactivity, and (c) mutating the Met impacted the affinity and oxidation rate more drastically than did mutating one of the Cys, supporting its important role in protein function. Position II thus plays a dominant role in metal binding and transport.

  20. Improved Systematic Pointing Error Model for the DSN Antennas

    Science.gov (United States)

    Rochblatt, David J.; Withington, Philip M.; Richter, Paul H.

    2011-01-01

    New pointing models have been developed for large reflector antennas whose construction is founded on an elevation-over-azimuth mount. At JPL, the new models were applied to the Deep Space Network (DSN) 34-meter antenna subnet to correct their systematic pointing errors; this achieved significant improvement in performance at Ka-band (32-GHz) and X-band (8.4-GHz). The new models provide pointing improvements relative to the traditional models by a factor of two to three, which translates to approximately 3-dB performance improvement at Ka-band. For radio science experiments where blind pointing performance is critical, the new models provide an enabling technology. The model extends the traditional physical models with higher-order mathematical terms, thereby increasing the resolution of the model for a better fit to the underlying systematic imperfections that are the cause of antenna pointing errors. The philosophy of the traditional model was that all mathematical terms in the model must be traced to a physical phenomenon causing antenna pointing errors. The traditional physical terms are: antenna axis tilts, gravitational flexure, azimuth collimation, azimuth encoder fixed offset, azimuth and elevation skew, elevation encoder fixed offset, residual refraction, azimuth encoder scale error, and antenna pointing de-rotation terms for beam waveguide (BWG) antennas. Besides the addition of spherical harmonics terms, the new models differ from the traditional ones in that the coefficients for the cross-elevation and elevation corrections are completely independent and may be different, while in the traditional model, some of the terms are identical. In addition, the new software allows for all-sky or mission-specific model development, and can utilize the previously used model as an a priori estimate for the development of the updated models.

  1. Process correlation analysis model for process improvement identification.

    Science.gov (United States)

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by process assessment identifying strengths and weaknesses and based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the base. CMMI defines a set of process areas involved in software development and what to be carried out in process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of software development process. However, in the current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to significant efforts and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  2. Theoretical Calculations and Modeling for the Molecular Polarization of Furan and Thiophene under the Action of an Electric Field Using Quantum Similarity

    Directory of Open Access Journals (Sweden)

    Alejandro Morales-Bayuelo

    2014-01-01

    Full Text Available A theoretical study of the molecular polarization of thiophene and furan under the action of an electric field using Local Quantum Similarity Indexes (LQSI) was performed. This model is based on Hirshfeld partitioning of the electron density within the framework of Density Functional Theory (DFT). Six local similarity indexes were used: overlap, overlap-interaction, Coulomb, Coulomb-interaction, Euclidean distances of overlap, and Euclidean distances of Coulomb. In addition, the Topo-Geometrical Superposition Algorithm (TGSA) was used as the alignment method. This method provides a straightforward procedure for solving the problem of relative molecular orientation and a tool to evaluate molecular quantum similarity, enabling the study of structural systems which differ in only one atom, such as thiophene and furan (point group C2v) and the cyclopentadienyl molecule (point group D5h). Additionally, this model can contribute to the interpretation of chemical bonds and molecular interactions in the framework of solvent effect theory.

  3. Election turnout statistics in many countries: similarities, differences, and a diffusive field model for decision-making.

    Science.gov (United States)

    Borghesi, Christian; Raynal, Jean-Claude; Bouchaud, Jean-Philippe

    2012-01-01

    We study in detail the turnout rate statistics for 77 elections in 11 different countries. We show that the empirical results established in a previous paper for French elections appear to hold much more generally. We find in particular that the spatial correlation of turnout rates decays logarithmically with distance in all cases. This result is quantitatively reproduced by a decision model that assumes that each voter makes up his mind as a result of three influence terms: one totally idiosyncratic component, one city-specific term with short-ranged fluctuations in space, and one long-ranged correlated field which propagates diffusively in space. A detailed analysis reveals several interesting features: for example, different countries have different degrees of local heterogeneities and seem to be characterized by a different propensity for individuals to conform to the cultural norm. We furthermore find clear signs of herding (i.e., strongly correlated decisions at the individual level) in some countries, but not in others.

  4. Election turnout statistics in many countries: similarities, differences, and a diffusive field model for decision-making

    CERN Document Server

    Borghesi, Christian; Bouchaud, Jean-Philippe

    2012-01-01

    We study in detail the turnout rate statistics for 77 elections in 11 different countries. We show that the empirical results established in a previous paper for French elections appear to hold much more generally. We find in particular that the spatial correlation of turnout rates decays logarithmically with distance in all cases. This result is quantitatively reproduced by a decision model that assumes that each voter makes up his mind as a result of three influence terms: one totally idiosyncratic component, one city-specific term with short-ranged fluctuations in space, and one long-ranged correlated field which propagates diffusively in space. A detailed analysis reveals several interesting features: for example, different countries have different degrees of local heterogeneities and seem to be characterized by a different propensity for individuals to conform to the cultural norm. We furthermore find clear signs of herding (i.e. strongly correlated decisions at the individual level) in some countries, but ...

  5. Kinetic models in industrial biotechnology - Improving cell factory performance.

    Science.gov (United States)

    Almquist, Joachim; Cvijovic, Marija; Hatzimanikatis, Vassily; Nielsen, Jens; Jirstrand, Mats

    2014-07-01

    An increasing number of industrial bioprocesses capitalize on living cells by using them as cell factories that convert sugars into chemicals. These processes range from the production of bulk chemicals in yeasts and bacteria to the synthesis of therapeutic proteins in mammalian cell lines. One of the tools in the continuous search for improved performance of such production systems is the development and application of mathematical models. To be of value for industrial biotechnology, mathematical models should be able to assist in the rational design of cell factory properties or of the production processes in which they are utilized. Kinetic models are particularly suitable towards this end because they are capable of representing the complex biochemistry of cells in a more complete way than most other types of models. They can, at least in principle, be used to understand, predict, and evaluate in detail the effects of adding, removing, or modifying molecular components of a cell factory, and to support the design of the bioreactor or fermentation process. However, several challenges still remain before kinetic modeling will reach the degree of maturity required for routine application in industry. Here we review the current status of kinetic cell factory modeling. Emphasis is on modeling methodology concepts, including model network structure, kinetic rate expressions, parameter estimation, optimization methods, identifiability analysis, model reduction, and model validation, but several applications of kinetic models for the improvement of cell factories are also discussed.
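A minimal sketch of the kind of kinetic rate expression the review discusses is a single enzyme-catalysed conversion with Michaelis-Menten kinetics, integrated with forward Euler. All parameter values here (`vmax`, `km`, `s0`) are illustrative assumptions, not taken from the review.

```python
# Toy kinetic model of a cell factory: substrate S is converted to product P
# by one enzyme-catalysed step with Michaelis-Menten kinetics.
# All parameter values (vmax, km, s0) are hypothetical, for illustration only.

def simulate(s0=10.0, vmax=1.0, km=0.5, dt=0.01, t_end=30.0):
    """Forward-Euler integration of dS/dt = -v, dP/dt = +v, v = vmax*S/(km+S)."""
    s, p, t = s0, 0.0, 0.0
    trajectory = [(t, s, p)]
    while t < t_end:
        v = vmax * s / (km + s)      # Michaelis-Menten rate law
        s -= v * dt
        p += v * dt
        t += dt
        trajectory.append((t, s, p))
    return trajectory

traj = simulate()
t_final, s_final, p_final = traj[-1]
```

At these settings the substrate is essentially exhausted by the end of the run, and mass is conserved between `S` and `P`; real cell factory models chain many such rate laws into an ODE network.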

  6. Improvement of TNO type trailing edge noise models

    DEFF Research Database (Denmark)

    Fischer, Andreas; Bertagnolio, Franck; Aagaard Madsen, Helge

    2016-01-01

    The paper describes an improvement of the so-called TNO model to predict the noise emission from aerofoil sections due to the interaction of the boundary layer turbulence with the trailing edge. The surface pressure field close to the trailing edge acts as the source of sound in the TNO model. It is computed by solving a Poisson equation which includes flow turbulence cross-correlation terms. Previously published TNO-type models used the assumption of Blake to simplify the Poisson equation. This paper shows that the simplification should not be used. We present a new model which fully models the turbulence cross-correlation terms. The predictions of the new model are in better agreement with measurements of the surface pressure and far-field sound spectra. The computational cost of the new model is only slightly higher than that of the TNO model, because we derived an analytical solution...

  7. Inspiration or deflation? Feeling similar or dissimilar to slim and plus-size models affects self-evaluation of restrained eaters.

    Science.gov (United States)

    Papies, Esther K; Nicolaije, Kim A H

    2012-01-01

    The present studies examined the effect of perceiving images of slim and plus-size models on restrained eaters' self-evaluation. While previous research has found that such images can lead to either inspiration or deflation, we argue that these inconsistencies can be explained by differences in perceived similarity with the presented model. The results of two studies (ns=52 and 99) confirmed this and revealed that restrained eaters with high (low) perceived similarity to the model showed more positive (negative) self-evaluations when they viewed a slim model, compared to a plus-size model. In addition, Study 2 showed that inducing in participants a similarities mindset led to more positive self-evaluations after viewing a slim compared to a plus-size model, but only among restrained eaters with a relatively high BMI. These results are discussed in the context of research on social comparison processes and with regard to interventions for protection against the possible detrimental effects of media images.

  8. Linear and undulating periodized strength plus aerobic training promote similar benefits and lead to improvement of insulin resistance on obese adolescents.

    Science.gov (United States)

    Inoue, Daniela Sayuri; De Mello, Marco Túlio; Foschini, Denis; Lira, Fabio Santos; De Piano Ganen, Aline; Da Silveira Campos, Raquel Munhoz; De Lima Sanches, Priscila; Silva, Patrícia Leão; Corgosinho, Flávia Campos; Rossi, Fabrício Eduardo; Tufik, Sergio; Dâmaso, Ana R

    2015-03-01

    The present study compares the effectiveness of three types of physical training for obesity control in adolescents submitted to a long-term interdisciplinary therapy. Forty-five post-puberty obese adolescents (15-18 y) were randomly placed in three different physical training groups: aerobic training (AT, n=20), aerobic plus strength training with linear periodization (LP, n=13), and aerobic plus strength training with daily undulating periodization (DUP, n=12). Body composition was evaluated by air-displacement plethysmography; the resting metabolic rate was measured by indirect calorimetry; serum was collected after an overnight fast. The most important finding of this study was that both the LP and DUP groups improved lipid profile, insulin sensitivity, and adiponectin concentration. Both types of aerobic plus strength training were more effective than aerobic training alone in improving the lipid profile and insulin sensitivity, as well as the inflammatory state, by increasing adiponectin. An improvement in anthropometric profile was observed in all groups. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. A Mathematical Model to Improve the Performance of Logistics Network

    Directory of Open Access Journals (Sweden)

    Muhammad Izman Herdiansyah

    2012-01-01

    Full Text Available The role of logistics nowadays is expanding from just providing transportation and warehousing to offering total integrated logistics. To remain competitive in the global market environment, business enterprises need to improve their logistics operations performance. The improvement will be achieved when we can provide a comprehensive analysis and optimize network performance. In this paper, a mixed integer linear model for optimizing logistics network performance is developed. It provides a single-product multi-period multi-facility model, as well as the multi-product concept. The problem is modeled in the form of a network flow problem with the main objective of minimizing total logistics cost. The problem can be solved using a commercial linear programming package like CPLEX or LINDO. In small cases, the solver in Excel may also be used to solve such a model.
    Keywords: logistics network, integrated model, mathematical programming, network optimization
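The network-flow view of such a logistics problem can be sketched without a commercial solver. Below is a minimal min-cost flow solver (successive shortest paths with Bellman-Ford) applied to a hypothetical network: a factory shipping to a customer through two candidate warehouses. The network, capacities, and unit costs are invented for illustration and are not from the paper.

```python
# Minimal min-cost flow via successive shortest paths (Bellman-Ford),
# illustrating the "minimize total logistics cost" network-flow formulation.
# The toy network below (factory -> warehouses -> customer) is hypothetical.

def min_cost_flow(n, edges, source, sink, demand):
    """edges: list of [u, v, capacity, unit_cost]. Returns minimal total cost."""
    # Residual graph: each entry is [to, residual_cap, cost, index_of_reverse].
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    total_cost = 0
    while demand > 0:
        # Bellman-Ford shortest path by unit cost in the residual graph.
        dist = [float("inf")] * n
        dist[source] = 0
        parent = [None] * n                  # (node, edge) used to reach vertex
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for e in graph[u]:
                    v, cap, cost, _ = e
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, e)
        if dist[sink] == float("inf"):
            raise ValueError("demand exceeds network capacity")
        # Send as much flow as possible along the cheapest path.
        flow, v = demand, sink
        while v != source:
            u, e = parent[v]
            flow = min(flow, e[1])
            v = u
        v = sink
        while v != source:
            u, e = parent[v]
            e[1] -= flow
            graph[e[0]][e[3]][1] += flow     # open up the reverse edge
            v = u
        total_cost += flow * dist[sink]
        demand -= flow
    return total_cost

# Factory (0) -> warehouses (1, 2) -> customer (3); ship 8 units.
edges = [
    [0, 1, 5, 2],   # factory -> warehouse 1: capacity 5, cost 2/unit
    [0, 2, 5, 3],   # factory -> warehouse 2: capacity 5, cost 3/unit
    [1, 3, 5, 1],   # warehouse 1 -> customer
    [2, 3, 5, 1],   # warehouse 2 -> customer
]
cost = min_cost_flow(4, edges, source=0, sink=3, demand=8)
```

The solver routes 5 units through the cheap warehouse and the remaining 3 through the expensive one, for a total cost of 27; a MILP formulation as in the paper would add binary facility-opening variables on top of this flow structure.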

  11. An integrated model for continuous quality improvement and productivity improvement in health services organizations.

    Science.gov (United States)

    Rakich, J S; Darr, K; Longest, B B

    1993-01-01

    The health services paradigm with respect to quality has shifted to that of conformance to requirements (the absence of defects) and fitness for use (meeting customer expectations and needs). This article presents an integrated model of continuous quality improvement (CQI) (often referred to as total quality management) and productivity improvement for health services organizations. It incorporates input-output theory and focuses on the CQI challenge--"How can we be certain that we do the right things right the first time, every time?" The twin pillars of CQI are presented. Achievement of both will result in productivity improvement and enhancement of the health services organization's competitive position.

  12. Improving the representation of hydrologic processes in Earth System Models

    Energy Technology Data Exchange (ETDEWEB)

    Clark, Martyn P. [National Center for Atmospheric Research, Boulder Colorado USA; Fan, Ying [Department of Earth and Planetary Sciences, Rutgers University, New Brunswick New Jersey USA; Lawrence, David M. [National Center for Atmospheric Research, Boulder Colorado USA; Adam, Jennifer C. [Department of Civil and Environmental Engineering, Washington State University, Pullman Washington USA; Bolster, Diogo [Department of Civil & Environmental Engineering and Earth Sciences, University of Notre Dame, South Bend Indiana USA; Gochis, David J. [National Center for Atmospheric Research, Boulder Colorado USA; Hooper, Richard P. [The Consortium of Universities for the Advancement of Hydrologic Science, Inc.; Kumar, Mukesh [Nichols Schools of Environment, Duke University, Durham North Carolina USA; Leung, L. Ruby [Pacific Northwest National Laboratory, Richland Washington USA; Mackay, D. Scott [Department of Geography, University at Buffalo, State University of New York, Buffalo New York USA; Maxwell, Reed M. [Department of Geology and Geological Engineering, Colorado School of Mines, Golden Colorado USA; Shen, Chaopeng [Department of Civil and Environmental Engineering, Pennsylvania State University, State College Pennsylvania USA; Swenson, Sean C. [National Center for Atmospheric Research, Boulder Colorado USA; Zeng, Xubin [Department of Atmospheric Sciences, University of Arizona, Tucson Arizona USA

    2015-08-21

    Many of the scientific and societal challenges in understanding and preparing for global environmental change rest upon our ability to understand and predict water cycle change at large river basin, continental, and global scales. However, current large-scale models, such as the land components of Earth System Models (ESMs), do not yet represent the terrestrial water cycle in a fully integrated manner or resolve the finer-scale processes that can dominate large-scale water budgets. This paper reviews the current representation of hydrologic processes in ESMs and identifies the key opportunities for improvement. This review suggests that (1) the development of ESMs has not kept pace with modeling advances in hydrology, both through neglecting key processes (e.g., groundwater) and neglecting key aspects of spatial variability and hydrologic connectivity; and (2) many modeling advances in hydrology can readily be incorporated into ESMs and substantially improve predictions of the water cycle. Accelerating modeling advances in ESMs requires comprehensive hydrologic benchmarking activities, in order to systematically evaluate competing modeling alternatives, understand model weaknesses, and prioritize model development needs. This demands stronger collaboration, both through greater engagement of hydrologists in ESM development and through more detailed evaluation of ESM processes in research watersheds. Advances in the representation of hydrologic processes in ESMs can substantially improve energy, carbon, and nutrient cycle prediction capabilities through the fundamental role the water cycle plays in regulating these cycles.

  13. Improved Bounded Model Checking for the Universal Fragment of CTL

    Institute of Scientific and Technical Information of China (English)

    Liang Xu; Wei Chen; Yan-Yan Xu; Wen-Hui Zhang

    2009-01-01

    SAT-based bounded model checking (BMC) has been introduced as a complementary technique to BDD-based symbolic model checking in recent years, and a lot of successful work has been done in this direction. The approach was first introduced by A. Biere et al. for checking linear temporal logic (LTL) formulae and then also adapted to check formulae of the universal fragment of computation tree logic (ACTL) by W. Penczek et al. As the efficiency of model checking is still an important issue, we present an improved BMC approach for ACTL based on Penczek's method. We consider two aspects of the approach. One is reduction of the number of variables and transitions in the k-model by distinguishing the temporal operator EX from the others. The other is simplification of the transformation of formulae by using a uniform path encoding instead of a disjunction of all paths needed in the k-model. With these improvements, for an ACTL formula, the length of the final encoding of the formula in the worst case is reduced. The improved approach is implemented in the tool BMV and is compared with the original one by applying both to two well-known examples, mutual exclusion and dining philosophers. The comparison shows the advantages of the improved approach with respect to the efficiency of model checking.
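The bounded semantics underlying BMC can be illustrated without a SAT solver: an ACTL safety property AG(not bad) fails exactly when a `bad` state is reachable within the bound k. The toy below checks that by brute-force path enumeration on a hypothetical 2-bit counter; real BMC instead encodes the k-model as a propositional formula and hands it to a SAT solver, which this sketch does not attempt.

```python
# Toy illustration of bounded semantics: is a state satisfying `bad`
# reachable within k steps?  Real BMC encodes this question as a SAT
# formula over a k-model; here we simply enumerate reachable states.
# The transition system (a 2-bit counter) is hypothetical.

def bounded_reach(init, transitions, bad, k):
    """True iff some state satisfying `bad` is reachable in <= k steps.
    The ACTL property AG(not bad) fails within the bound exactly then."""
    frontier = set(init)
    seen = set(init)
    for _ in range(k):
        if any(bad(s) for s in frontier):
            return True
        frontier = {t for s in frontier for t in transitions(s)} - seen
        seen |= frontier
    return any(bad(s) for s in frontier)

# 2-bit counter: 0 -> 1 -> 2 -> 3 -> 0; `bad` holds in state 3.
succ = lambda s: [(s + 1) % 4]
hit_at_2 = bounded_reach([0], succ, lambda s: s == 3, k=2)
hit_at_3 = bounded_reach([0], succ, lambda s: s == 3, k=3)
```

State 3 needs three transitions from state 0, so the property violation appears only once the bound reaches k=3, which is exactly how BMC bounds are incremented until a counterexample is found or a completeness threshold is reached.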

  14. Similarities and differences of serotonin and its precursors in their interactions with model membranes studied by molecular dynamics simulation

    Science.gov (United States)

    Wood, Irene; Martini, M. Florencia; Pickholz, Mónica

    2013-08-01

    In this work, we report a molecular dynamics (MD) simulation study of relevant biological molecules, namely serotonin (neutral and protonated) and its precursors, tryptophan and 5-hydroxy-tryptophan, in a fully hydrated bilayer of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidyl-choline (POPC). The simulations were carried out at the fluid lamellar phase of POPC at constant pressure and temperature conditions. Two guest molecules of each type were initially placed in the water phase. We have analyzed the main localization, preferential orientation, and specific interactions of the guest molecules within the bilayer. During the simulation run, the four molecules were preferentially found at the water-lipid interface. We found that the interactions that stabilize the systems are essentially hydrogen bonds, salt bridges, and cation-π interactions. None of the guest molecules has access to the hydrophobic region of the bilayer. Moreover, the zwitterionic molecules have access to the water phase, while protonated serotonin is anchored at the interface. Even taking into account that these simulations were done using a model membrane, our results suggest that the studied molecules could not cross the blood-brain barrier by diffusion. These results are in good agreement with works showing that serotonin and Trp do not cross the BBB by simple diffusion.

  15. Election turnout statistics in many countries: similarities, differences, and a diffusive field model for decision-making.

    Directory of Open Access Journals (Sweden)

    Christian Borghesi

    Full Text Available We study in detail the turnout rate statistics for 77 elections in 11 different countries. We show that the empirical results established in a previous paper for French elections appear to hold much more generally. We find in particular that the spatial correlation of turnout rates decays logarithmically with distance in all cases. This result is quantitatively reproduced by a decision model that assumes that each voter makes up his mind as a result of three influence terms: one totally idiosyncratic component, one city-specific term with short-ranged fluctuations in space, and one long-ranged correlated field which propagates diffusively in space. A detailed analysis reveals several interesting features: for example, different countries have different degrees of local heterogeneity and seem to be characterized by a different propensity for individuals to conform to the cultural norm. We furthermore find clear signs of herding (i.e., strongly correlated decisions at the individual level) in some countries, but not in others.
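The three-term decision rule described in this abstract can be caricatured in a few lines. The sketch below puts cities on a 1-D ring, builds the long-ranged "cultural" field by repeatedly smoothing white noise (a crude stand-in for diffusive propagation), and lets each voter turn out when an idiosyncratic term plus the city shock plus the field is positive. All weights, sizes, and the smoothing scheme are assumptions for illustration; this is not the authors' calibrated model.

```python
import random

random.seed(0)

# Toy 1-D version of the three-influence-term decision model:
# turnout decision = idiosyncratic noise + city-specific shock + smoothed field.
# All parameters (weights, grid size, smoothing steps) are hypothetical.

N_CITIES, N_VOTERS, SMOOTH_STEPS = 200, 500, 50

# Long-ranged correlated field: white noise diffused on a ring of cities.
field = [random.gauss(0, 1) for _ in range(N_CITIES)]
for _ in range(SMOOTH_STEPS):
    field = [0.5 * field[i] + 0.25 * (field[i - 1] + field[(i + 1) % N_CITIES])
             for i in range(N_CITIES)]

turnout = []
for c in range(N_CITIES):
    city_shock = random.gauss(0, 0.3)          # short-ranged city-specific term
    votes = 0
    for _ in range(N_VOTERS):
        conviction = random.gauss(0, 1) + city_shock + field[c]
        votes += conviction > 0                # voter turns out
    turnout.append(votes / N_VOTERS)

mean_turnout = sum(turnout) / N_CITIES
```

Because neighbouring cities share nearly the same field value, turnout rates are spatially correlated over long distances, which is the qualitative mechanism the paper invokes for the observed logarithmic decay.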

  16. Improving evapotranspiration processes in distributed hydrological models using Remote Sensing derived ET products.

    Science.gov (United States)

    Abitew, T. A.; van Griensven, A.; Bauwens, W.

    2015-12-01

    Evapotranspiration is the main process in hydrology (on average around 60%), though it has not received as much attention in the evaluation and calibration of hydrological models. In this study, Remote Sensing (RS) derived Evapotranspiration (ET) is used to improve the spatially distributed representation of ET in SWAT model applications in the upper Mara basin (Kenya) and the Blue Nile basin (Ethiopia). The RS-derived ET data are obtained from recently compiled global datasets (continuous monthly data at 1 km resolution from the MOD16NBI, SSEBop, ALEXI, and CMRSET models) and from regionally applied Energy Balance Models (for several cloud-free days). The RS-ET data are used in three ways:
    Method 1) to evaluate spatially distributed evapotranspiration model results;
    Method 2) to calibrate the evapotranspiration processes in the hydrological model;
    Method 3) to bias-correct the evapotranspiration in the hydrological model during simulation, after changing the SWAT code.
    An inter-comparison of the RS-ET products shows that at present there is a significant bias, but at the same time an agreement on the spatial variability of ET. The ensemble mean of the different ET products seems the most realistic estimate and was further used in this study. The results show that:
    Method 1) the spatially mapped evapotranspiration of the hydrological models shows clear differences when compared to RS-derived evapotranspiration (low correlations); especially evapotranspiration in forested areas is strongly underestimated compared to other land covers;
    Method 2) calibration allows the correlations between the RS and hydrological model results to be improved to some extent;
    Method 3) bias corrections are efficient in producing (seasonal or annual) evapotranspiration maps from hydrological models that are very similar to the patterns obtained from RS data.
    Though the bias correction is very efficient, it is advised to improve the model results by better representing the ET processes through improved plant/crop computations, improved

  17. An improved car-following model considering relative velocity fluctuation

    Science.gov (United States)

    Yu, Shaowei; Shi, Zhongke

    2016-07-01

    To explore and evaluate the impacts of relative velocity fluctuation on the dynamic characteristics and fuel consumption of traffic flow, we present an improved car-following model considering relative velocity fluctuation, based on the full velocity difference model. We then carry out several numerical simulations to determine the optimal time window length and to explore how relative velocity fluctuation affects cars' velocities and their fluctuation, as well as fuel consumption. It can be found that the improved car-following model can describe the phase transition of traffic flow and estimate the evolution of traffic congestion, and that taking relative velocity fluctuation into account in designing an advanced adaptive cruise control strategy can improve traffic flow stability and reduce fuel consumption.
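The full velocity difference (FVD) update that this model builds on has the standard form a = κ[V(Δx) − v] + λΔv. The sketch below simulates a single follower behind a braking leader and adds a term fed by the recent variance of the relative velocity; that coupling, the optimal-velocity function, and every parameter value are assumptions for illustration, not the paper's calibrated model.

```python
import math

# Sketch of a full velocity difference (FVD) car-following update,
# a = kappa*(V(dx) - v) + lam*dv, plus a hypothetical extra term driven by
# the moving-window variance of the relative velocity dv.  All parameter
# values and the fluctuation coupling are illustrative assumptions.

def optimal_velocity(dx, v_max=30.0, d_safe=25.0):
    # A common OV shape: smooth ramp from 0 toward v_max around d_safe.
    return 0.5 * v_max * (math.tanh((dx - d_safe) / 10.0)
                          + math.tanh(d_safe / 10.0))

def follower_step(x_lead, v_lead, x, v, dv_history, dt=0.1,
                  kappa=0.4, lam=0.5, mu=0.1):
    dx, dv = x_lead - x, v_lead - v
    dv_history.append(dv)
    window = dv_history[-20:]            # recent relative-velocity samples
    mean = sum(window) / len(window)
    fluct = sum((d - mean) ** 2 for d in window) / len(window)
    a = kappa * (optimal_velocity(dx) - v) + lam * dv - mu * fluct
    return x + v * dt, max(v + a * dt, 0.0)

# Leader cruises at 10 m/s; follower starts 30 m behind at 20 m/s.
x_lead, v_lead, x, v, hist = 30.0, 10.0, 0.0, 20.0, []
for _ in range(600):                     # 60 s of simulated time
    x, v = follower_step(x_lead, v_lead, x, v, hist)
    x_lead += v_lead * 0.1
gap = x_lead - x
```

After the transient, the follower settles at the leader's speed with an equilibrium gap where V(gap) equals that speed, and the fluctuation term vanishes; in the paper's setting the analogous term is what damps velocity fluctuations and fuel consumption.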

  18. Perinatal administration of aromatase inhibitors in rodents as animal models of human male homosexuality: similarities and differences.

    Science.gov (United States)

    Olvera-Hernández, Sandra; Fernández-Guasti, Alonso

    2015-01-01

    In this chapter we briefly review the evidence supporting the existence of biological influences on sexual orientation. We focus on basic research studies that have affected estrogen synthesis during the critical periods of brain sexual differentiation in male rat offspring through the use of aromatase inhibitors, such as 1,4,6-androstatriene-3,17-dione (ATD) and letrozole. The results after prenatal and/or postnatal treatment with ATD reveal that these animals, when adult, show female sexual responses, such as lordosis or proceptive behaviors, but retain their ability to display male sexual activity with a receptive female. Interestingly, the preference and sexual behavior of these rats vary depending upon the circadian rhythm. Recently, we have established that treatment with low doses of letrozole during the second half of pregnancy produces male rat offspring that, when adult, spend more time in the company of a sexually active male than with a receptive female in a preference test. In addition, they display female sexual behavior when forced to interact with a sexually experienced male and some typical male sexual behavior when faced with a sexually receptive female. Interestingly, these males displayed both sexual behavior patterns spontaneously, i.e., in the absence of exogenous steroid hormone treatment. Most of these features correspond with those found in human male homosexuals; however, the "bisexual" behavior shown by the letrozole-treated rats may be related to a particular human population. All these data, taken together, permit us to propose prenatal letrozole treatment as a suitable animal model for studying human male homosexuality and reinforce the hypothesis that human sexual orientation is underpinned by changes in the endocrine milieu during early development.

  19. Multi-Layer Identification of Highly-Potent ABCA1 Up-Regulators Targeting LXRβ Using Multiple QSAR Modeling, Structural Similarity Analysis, and Molecular Docking

    Directory of Open Access Journals (Sweden)

    Meimei Chen

    2016-11-01

    Full Text Available In this study, in silico approaches, including multiple QSAR modeling, structural similarity analysis, and molecular docking, were applied to develop QSAR classification models as a fast screening tool for identifying highly potent ABCA1 up-regulators targeting LXRβ based on a series of new flavonoids. Initially, four modeling approaches, including linear discriminant analysis, support vector machine, radial basis function neural network, and classification and regression trees, were applied to construct different QSAR classification models. The statistical results indicated that these four kinds of QSAR models were powerful tools for screening highly potent ABCA1 up-regulators. Then, a consensus QSAR model was developed by combining the predictions from these four models. To discover new ABCA1 up-regulators at maximum accuracy, the compounds in the ZINC database that fulfilled the requirement of a structural similarity of 0.7 compared to a known potent ABCA1 up-regulator were subjected to the consensus QSAR model, which led to the discovery of 50 compounds. Finally, they were docked into the LXRβ binding site to understand their role in up-regulating ABCA1 expression. The excellent binding modes and docking scores of 10 hit compounds suggested they were highly potent ABCA1 up-regulators targeting LXRβ. Overall, this study provided an effective strategy to discover highly potent ABCA1 up-regulators.
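The screening pipeline described here (similarity filter, then consensus classification) can be sketched with set-based fingerprints. Below, candidates are kept only when their Tanimoto similarity to a known active reaches 0.7, then classified by majority vote over several stand-in classifiers. The fingerprints and the trivial one-bit "models" are hypothetical placeholders for real molecular fingerprints and the trained LDA/SVM/RBF-NN/CART models.

```python
# Sketch of the screen: Tanimoto similarity filter (threshold 0.7 as in the
# abstract) followed by a consensus (majority-vote) QSAR classification.
# Fingerprints are small sets of 'on' bits; all data here are hypothetical.

def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient of two fingerprints given as sets of bits."""
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

def consensus_active(fp, classifiers):
    """Majority vote over independent classifiers (each: fingerprint -> bool)."""
    votes = sum(clf(fp) for clf in classifiers)
    return votes * 2 > len(classifiers)

known_active = {1, 2, 3, 5, 8}
candidates = {
    "cand_A": {1, 2, 3, 5, 8, 9},     # close analogue of the known active
    "cand_B": {10, 11, 12},           # unrelated scaffold
}

# Stand-ins for the four QSAR models: each keys on a single bit.
classifiers = [lambda fp, b=b: b in fp for b in (2, 3, 5, 7)]

hits = [name for name, fp in candidates.items()
        if tanimoto(fp, known_active) >= 0.7
        and consensus_active(fp, classifiers)]
```

Only the close analogue survives both stages; in the study the surviving compounds were then passed to docking against LXRβ, a step this sketch does not attempt.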

  20. Quarkonium Production in an Improved Color Evaporation Model

    CERN Document Server

    Ma, Yan-Qing

    2016-01-01

    We propose an improved version of the color evaporation model to describe heavy quarkonium production. In contrast to the traditional color evaporation model, we impose the constraint that the invariant mass of the intermediate heavy quark-antiquark pair be larger than the mass of the produced quarkonium. We also introduce a momentum shift between the heavy quark-antiquark pair and the quarkonium. Numerical calculations show that our model can describe the charmonium yields as well as the ratio of $\psi^\prime$ over $J/\psi$ better than the traditional color evaporation model.

  1. Reranking candidate gene models with cross-species comparison for improved gene prediction

    Directory of Open Access Journals (Sweden)

    Pereira Fernando CN

    2008-10-01

    Full Text Available Abstract Background: Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc.). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features. Results: We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely-related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+. Conclusion: Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models.
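The reranking step can be sketched as combining the gene finder's base score with a cross-species similarity bonus and taking the argmax. The scoring weight and the crude position-match similarity below are hypothetical stand-ins; the paper's actual features include splice-site location and signal-peptide occurrence as well as coding-sequence similarity.

```python
# Sketch of reranking K candidate gene models: combine each model's base
# gene-finder score with a similarity bonus against a putative ortholog and
# return the best under the combined score.  Weight and similarity measure
# are hypothetical stand-ins for the paper's global scoring rules.

def coding_similarity(seq_a, seq_b):
    """Crude similarity: matching positions over the longer sequence length."""
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / max(len(seq_a), len(seq_b))

def rerank(candidates, ortholog_cds, weight=2.0):
    """candidates: list of (name, base_score, predicted_cds) tuples."""
    def combined(cand):
        name, base, cds = cand
        return base + weight * coding_similarity(cds, ortholog_cds)
    return max(candidates, key=combined)[0]

candidates = [
    ("model_1", 0.90, "ATGGCCAAA"),   # best under the gene finder alone
    ("model_2", 0.85, "ATGGCGTTT"),   # lower base score, closer to ortholog
]
best = rerank(candidates, ortholog_cds="ATGGCGTTT")
```

Here the second model wins despite its lower base score because it matches the ortholog exactly, which is precisely the situation the abstract describes: competing models of comparable probability separated by comparative evidence.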

  2. Implementing a business improvement model based on integrated plant information

    Directory of Open Access Journals (Sweden)

    Swanepoel, Hendrika Francina

    2016-11-01

    Full Text Available The World Energy Council defines numerous challenges in the global energy arena that put pressure on owners/operators to run existing plant better and more efficiently. As such, there is an increasing focus on the use of business and technical plant information and data to make better, more integrated, and more informed decisions on the plant. The research study developed a business improvement model (BIM) that can be used to establish an integrated plant information management infrastructure as the core foundation for business improvement initiatives. Operational research then demonstrated how this BIM approach could be successfully implemented to improve business operations and provide decision-making insight.

  3. COMPUTER- AIDED MODELING AND IMPROVING OF RISOGRAPH PRINTING

    Directory of Open Access Journals (Sweden)

    P. E. Sulim

    2014-01-01

    Full Text Available The considered improvement of the quality of the risograph print is based on a mathematical model in the Matlab environment, using the specialized algorithms and digital filters of the Image Processing Toolbox. Using the model of screen printing in the Matlab environment for the risograph provides an opportunity to improve the quality of prints by adjusting the risograph profile to a specific view and type of digital image. The use of the proposed technology will reduce the consumption of film and paint by eliminating the printing of test prints and reducing the time spent printing.

  4. Improvement of Continuous Hydrologic Models and HMS SMA Parameters Reduction

    Science.gov (United States)

    Rezaeian Zadeh, Mehdi; Zia Hosseinipour, E.; Abghari, Hirad; Nikian, Ashkan; Shaeri Karimi, Sara; Moradzadeh Azar, Foad

    2010-05-01

    Hydrological models can help us to predict stream flows and the associated runoff volumes of rainfall events within a watershed. There are many different reasons why we need to model the rainfall-runoff processes of a watershed. However, the main reason is the limitation of hydrological measurement techniques and the costs of data collection at a fine scale. Generally, we are not able to measure all that we would like to know about a given hydrological system. This is particularly the case for ungauged catchments. Since the ultimate aim of prediction using models is to improve decision-making about a hydrological problem, having a robust and efficient modeling tool becomes an important factor. Among several hydrologic modeling approaches, continuous simulation gives the best predictions because it can model dry and wet conditions over a long-term period. Continuous hydrologic models, unlike event-based models, account for a watershed's soil moisture balance over a long-term period and are suitable for simulating daily, monthly, and seasonal streamflows. In this paper, we describe a soil moisture accounting (SMA) algorithm added to the hydrologic modeling system (HEC-HMS) computer program. As is well known in the hydrologic modeling community, one way of improving a model's utility is the reduction of input parameters. The enhanced model developed in this study is applied to the Khosrow Shirin Watershed, located in the north-west part of Fars Province in Iran, a data-limited watershed. The HMS SMA algorithm divides the potential path of rainfall onto a watershed into five zones. The results showed that the output of HMS SMA is insensitive to the variation of many parameters, such as soil storage and soil percolation rate. The study's objective is to remove insensitive parameters from the model input using multi-objective sensitivity analysis.
    Keywords: Continuous Hydrologic Modeling, HMS SMA, Multi-objective sensitivity analysis, SMA Parameters
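The zone-based bookkeeping behind soil moisture accounting can be illustrated with a toy bucket cascade: rainfall is routed through a canopy store, a soil store, and a groundwater store, with the overflow becoming surface runoff. This is a generic SMA-style sketch with hypothetical parameters, not the actual HEC-HMS five-zone algorithm.

```python
# Toy soil-moisture accounting sketch in the spirit of zone-based SMA:
# canopy store -> soil store -> groundwater store, with overflow as runoff.
# All storage capacities and rate coefficients are hypothetical.

def sma_step(rain, stores, canopy_max=2.0, soil_max=50.0,
             percolation=0.1, baseflow=0.05, et_rate=0.5):
    canopy, soil, gw = stores
    # Canopy interception fills first; the remainder reaches the surface.
    intercepted = min(rain, canopy_max - canopy)
    canopy += intercepted
    throughfall = rain - intercepted
    # Infiltration up to the available soil storage; the excess runs off.
    infiltration = min(throughfall, soil_max - soil)
    runoff = throughfall - infiltration
    soil += infiltration
    # Percolation to groundwater, baseflow release, and canopy evaporation.
    perc = percolation * soil
    soil -= perc
    gw += perc
    base = baseflow * gw
    gw -= base
    canopy = max(canopy - et_rate, 0.0)
    return (canopy, soil, gw), runoff + base

stores, total_flow = (0.0, 0.0, 0.0), 0.0
for rain in [0, 10, 30, 5, 0, 0, 20, 0]:     # daily rainfall depths, mm
    stores, flow = sma_step(rain, stores)
    total_flow += flow
```

A continuous model built this way carries soil moisture state between storms, which is exactly what lets it simulate dry-period baseflow that an event-based model cannot; sensitivity analysis would then probe which of these parameters the streamflow output actually responds to.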

  5. Improved Generalized Force Model considering the Comfortable Driving Behavior

    Directory of Open Access Journals (Sweden)

    De-Jie Xu

    2015-01-01

    Full Text Available This paper presents an improved generalized force model (IGFM) that considers the driver's comfortable driving behavior. Through theoretical analysis, we propose calculation methods for the comfortable driving distance and velocity. The stability condition of the model is then obtained by linear stability analysis. The problem of the unrealistic acceleration of the leading car that existed in previous models is solved. Furthermore, the simulation results show that the IGFM can predict the correct delay time of car motion and the kinematic wave speed at jam density, and it can exactly describe the driver's behavior in an urgent case, where no collision occurs. The dynamic properties of the IGFM also indicate that stability is improved compared with the generalized force model.

  6. Improved Testing and Specifications of Smooth Transition Regression Models

    OpenAIRE

    Escribano, Álvaro; Jordá, Óscar

    1997-01-01

    This paper extends previous work in Escribano and Jordá (1997) and introduces new LM specification procedures to choose between Logistic and Exponential Smooth Transition Regression (STR) models. These procedures are simpler, consistent, and more powerful than those previously available in the literature. An analysis of the properties of Taylor approximations around the transition function of STR models permits one to understand why these procedures work better, and it suggests ways to improve te...

  7. An improved turbulence model for rotating shear flows*

    Science.gov (United States)

    Nagano, Yasutaka; Hattori, Hirofumi

    2002-01-01

    In the present study, we construct a turbulence model based on a low-Reynolds-number non-linear k–ε model for turbulent flows in a rotating channel. Two-equation models, in particular the non-linear k–ε model, are very effective for solving various flow problems encountered in technological applications. In channel flows with rotation, however, the explicit effects of rotation only appear in the Reynolds stress components. The exact equations for k and ε do not have any explicit terms concerned with the rotation effects. Moreover, the Coriolis force vanishes in the momentum equation for a fully developed channel flow with spanwise rotation. Consequently, in order to predict rotating channel flows, after proper revision either the Reynolds stress equation model or the non-linear eddy viscosity model should be used. In this study, we improve the non-linear k–ε model so as to predict rotating channel flows. In the modelling, the wall-limiting behaviour of turbulence is also considered. First, we evaluated the non-linear k–ε model using the direct numerical simulation (DNS) database for a fully developed rotating turbulent channel flow. Next, we assessed the non-linear k–ε model at various rotation numbers. Finally, on the basis of these assessments, we reconstruct the non-linear k–ε model to calculate rotating shear flows, and the proposed model is tested on channel flows at various rotation numbers. The agreement with DNS and experimental data is quite satisfactory.

  8. An Improved QTM Subdivision Model with Approximate Equal-area

    Directory of Open Access Journals (Sweden)

    ZHAO Xuesheng

    2016-01-01

    Full Text Available To overcome the defect of large area deformation in the traditional QTM subdivision model, an improved subdivision model is proposed, based on the "parallel method" and on the idea of equal-area subdivision with changed longitude-latitude. By adjusting the positions of the parallels, this model keeps the total grid area between two adjacent parallels unchanged, so as to control the area variation and its accumulation in the QTM grid. The experimental results show that this improved model not only retains some advantages of the traditional QTM model (such as simple calculation and a clear correspondence with the longitude/latitude grid), but also has the following advantages: ①the improved model converges better than the traditional one: the ratio of max/min area finally converges to 1.38, far less than the 1.73 of the "parallel method"; ②the grid units in middle and low latitude regions have small area variations and successive distributions; meanwhile, as the subdivision level increases, the grid units with large variations gradually concentrate toward the poles; ③the area variation of a grid unit does not accumulate as the subdivision level increases.
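The equal-area role of the adjusted parallels can be illustrated in miniature. The sketch below is an assumption-laden toy, not the paper's QTM algorithm: it spaces parallels on a sphere so that every latitude band has identical area, using the fact that the area of a zone between two parallels is proportional to the difference of sin(latitude).

```python
# Toy illustration of the "adjust the parallels for equal area" idea
# (not the paper's QTM subdivision): spacing sin(latitude) evenly makes
# every band between adjacent parallels cover the same spherical area.
import numpy as np

def equal_area_parallels(n_bands):
    """Latitudes (degrees) bounding n equal-area bands from equator to pole."""
    s = np.linspace(0.0, 1.0, n_bands + 1)   # evenly spaced sin(latitude)
    return np.degrees(np.arcsin(s))

lats = equal_area_parallels(4)
# zone area between two parallels ~ sin(upper) - sin(lower): equal by design
areas = np.diff(np.sin(np.radians(lats)))
print(lats)
print(areas)
```

The resulting parallels crowd toward the equator and spread toward the pole, which is the same qualitative adjustment the improved model applies to control area variation.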

  9. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott; Morris, Richard V.; Ehlmann, Bethany; Dyar, M. Darby

    2017-03-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the laser-induced breakdown spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "sub-model" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares (PLS) regression, is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
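The train-then-blend scheme can be sketched compactly. The code below is a hedged illustration, not the ChemCam pipeline: synthetic "spectra" replace real LIBS data, ordinary least squares stands in for PLS (the abstract notes any multivariate regression can be used), and the overlap bounds are arbitrary percentiles chosen for demonstration.

```python
# Sketch of the "sub-model" idea: fit regressions on low- and
# high-concentration subsets, then blend their predictions in an
# overlap region picked by a full-range reference model.
# Plain least squares stands in for PLS to keep the sketch dependency-free.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                    # stand-in for LIBS spectra
y = np.abs(10 * X[:, 0] + rng.normal(size=200))   # stand-in concentrations

def fit(X, y):                                    # least-squares "sub-model"
    A = np.c_[X, np.ones(len(X))]                 # append intercept column
    return np.linalg.lstsq(A, y, rcond=None)[0]

def predict(w, x):
    return float(np.append(x, 1.0) @ w)

low = y < np.median(y)                            # limited-composition subsets
w_low, w_high, w_full = fit(X[low], y[low]), fit(X[~low], y[~low]), fit(X, y)
LO, HI = np.percentile(y, 40), np.percentile(y, 60)   # overlap bounds (assumed)

def predict_blended(x):
    """Route to a sub-model by the full model's estimate; blend in overlap."""
    ref = predict(w_full, x)
    if ref <= LO:
        return predict(w_low, x)
    if ref >= HI:
        return predict(w_high, x)
    t = (ref - LO) / (HI - LO)                    # linear blend weight
    return (1 - t) * predict(w_low, x) + t * predict(w_high, x)

print(predict_blended(X[0]))
```

Blending in the overlap, rather than switching abruptly at a single threshold, avoids discontinuities when a target's reference estimate sits near a sub-model boundary.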

  10. An improved optimal elemental method for updating finite element models

    Institute of Scientific and Technical Information of China (English)

    Duan Zhongdong(段忠东); Spencer B.F.; Yan Guirong(闫桂荣); Ou Jinping(欧进萍)

    2004-01-01

    The optimal matrix method and optimal elemental method used to update finite element models may not provide accurate results. This situation occurs when the test modal model is incomplete, as is often the case in practice. An improved optimal elemental method is presented that defines a new objective function, and, as a byproduct, circumvents the need for mass-normalized mode shapes, which are also not readily available in practice. To solve the group of nonlinear equations created by the improved optimal method, the Lagrange multiplier method and the Matlab function fmincon are employed. To deal with actual complex structures, the float-encoding genetic algorithm (FGA) is introduced to enhance the capability of the improved method. Two examples, a 7-degree-of-freedom (DOF) mass-spring system and a 53-DOF planar frame, are updated using the improved method. The example results demonstrate the advantages of the improved method over existing optimal methods, and show that the genetic algorithm is an effective way to update the models used for actual complex structures.
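The role of a float-encoding GA in model updating can be sketched on a toy problem. The example below is an illustrative assumption, not the paper's implementation: a 2-DOF mass-spring chain with made-up stiffnesses and a minimal real-coded GA that matches "measured" natural frequencies.

```python
# Toy float-encoded (real-coded) GA for model updating: recover spring
# stiffnesses of a 2-DOF mass-spring chain so the model's natural
# frequencies match "measured" ones. Masses, stiffnesses and GA settings
# are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
M = np.diag([1.0, 1.0])                      # known (identity) mass matrix
k_true = np.array([2.0, 1.5])                # "true" stiffnesses

def frequencies(k):
    K = np.array([[k[0] + k[1], -k[1]],
                  [-k[1],        k[1]]])
    w2 = np.linalg.eigvalsh(np.linalg.solve(M, K))
    return np.sqrt(np.clip(w2, 0.0, None))

f_meas = frequencies(k_true)                 # pretend these were measured

def fitness(k):                              # negative squared frequency error
    return -np.sum((frequencies(k) - f_meas) ** 2)

pop = rng.uniform(0.1, 5.0, size=(40, 2))    # float-encoded population
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]             # elitist selection
    i, j = rng.integers(0, 20, size=(2, 20))
    alpha = rng.uniform(size=(20, 1))
    children = alpha * parents[i] + (1 - alpha) * parents[j]  # crossover
    children += rng.normal(scale=0.05, size=children.shape)   # mutation
    pop = np.vstack([parents, np.clip(children, 0.1, 5.0)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(best, fitness(best))
```

Note that frequency-only data can be matched by more than one stiffness pair in this toy, which mirrors the incompleteness issue the abstract raises: the GA may land on either solution, and additional data (e.g. mode shapes) would be needed to restore uniqueness.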

  11. Kefir drink causes a significant yet similar improvement in serum lipid profile, compared with low-fat milk, in a dairy-rich diet in overweight or obese premenopausal women: A randomized controlled trial.

    Science.gov (United States)

    Fathi, Yasamin; Ghodrati, Naeimeh; Zibaeenezhad, Mohammad-Javad; Faghih, Shiva

    Controversy exists as to whether the lipid-lowering properties of kefir drink (a fermented probiotic dairy product) in animal models could be replicated in humans. To assess and compare the potential lipid-lowering effects of kefir drink with low-fat milk in a dairy-rich diet in overweight or obese premenopausal women. In this 8-week, single-center, multiarm, parallel-group, outpatient, randomized controlled trial, 75 eligible Iranian women aged 25 to 45 years were randomly allocated to kefir, milk, or control groups. Women in the control group received a weight-maintenance diet containing 2 servings/d of low-fat dairy products, whereas subjects in the milk and kefir groups received a similar diet containing 2 additional servings/d (a total of 4 servings/d) of dairy products from low-fat milk or kefir drink, respectively. At baseline and study end point, serum levels/ratios of total cholesterol (TC), low- and high-density lipoprotein cholesterol (LDLC and HDLC), triglyceride, non-HDLC, TC/HDLC, LDLC/HDLC, and triglyceride/LDLC were measured as outcome measures. After 8 weeks, subjects in the kefir group had significantly lower serum levels/ratios of lipoproteins than those in the control group (mean between-group differences were -10.4 mg/dL, -9.7 mg/dL, -11.5 mg/dL, -0.4, and -0.3 for TC, LDLC, non-HDLC, TC/HDLC, and LDLC/HDLC, respectively; all P < .05), with no significant differences between the kefir and milk groups. Kefir drink causes a significant yet similar improvement in serum lipid profile, compared with low-fat milk, in a dairy-rich diet in overweight or obese premenopausal women. Copyright © 2016 National Lipid Association. Published by Elsevier Inc. All rights reserved.

  12. An improved equivalent simulation model for CMOS integrated Hall plates.

    Science.gov (United States)

    Xu, Yue; Pan, Hong-Bin

    2011-01-01

    An improved equivalent simulation model for a CMOS-integrated Hall plate is described in this paper. Compared with existing models, this model covers voltage dependent non-linear effects, geometrical effects, temperature effects and packaging stress influences, and only includes a small number of physical and technological parameters. In addition, the structure of this model is relatively simple, consisting of a passive network with eight non-linear resistances, four current-controlled voltage sources and four parasitic capacitances. The model has been written in Verilog-A hardware description language and it performed successfully in a Cadence Spectre simulator. The model's simulation results are in good agreement with the classic experimental results reported in the literature.

  13. IMPROVEMENTS OF RIVER MODELING 1D DATA PREPARATION

    Directory of Open Access Journals (Sweden)

    ION-MARIAN MOISOIU

    2012-11-01

    Full Text Available Improvements of river modeling 1D data preparation. The importance of hydrographical network data and the need for detailed studies generate an increase of projects in this specialized area and a diversification of river mathematical modeling software. River mathematical modeling can be done in two ways: the "2D mode", where a digital terrain model of a full hydrographical basin must be produced, and the "1D mode", where only cross sections, long sections and structure elevations need to be presented in a graphical environment and in specific formats for the mathematical modeling software. This paper shows the principle of a custom-built GIS, specially created to help the preparation of 1D river modeling data. The benefits are elimination of human errors, automated processing, increased productivity, flexible output and cost reduction.

  14. Evaluation and improvement of the cloud resolving model component of the multi-scale modeling framework

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Kuan-Man; Cheng, Anning

    2009-10-01

    Developed, implemented and tested an improved Colorado State University (CSU) SAM (System for Atmospheric Modeling) cloud-resolving model (CRM) with the advanced third-order turbulence closure (IPHOC).

  15. Simulating urban expansion using an improved SLEUTH model

    Science.gov (United States)

    Liu, Xinsheng; Sun, Rui; Yang, Qingyun; Su, Guiwu; Qi, Wenhua

    2012-01-01

    Accelerated urbanization creates challenges of water shortages, air pollution, and reductions in green space. To address these issues, methods for assessing urban expansion with the goal of achieving reasonable urban growth should be explored. In this study, an improved slope, land use, exclusion, urban, transportation, hillshade (SLEUTH) cellular automata model is developed and applied to the city of Tangshan, China, for urban expansion research. There are three modifications intended to improve SLEUTH: first, the utilization of ant colony optimization to calibrate SLEUTH to simplify the calibration procedures and improve their efficiency; second, the introduction of subregional calibration to replace calibration of the entire study area; and third, the incorporation of social and economic data to adjust the self-modification rule of SLEUTH. The first two modifications improve the calibration accuracy and efficiency compared with the original SLEUTH. The third modification fails to improve SLEUTH, and further experiments are needed. Using the improvements to the SLEUTH model, forecasts of urban growth are performed for every year up to 2020 for the city of Tangshan under two scenarios: an inertia trend scenario and a policy-adjusted scenario.
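The growth rules that SLEUTH-style cellular automata iterate can be pictured with a toy model. The sketch below is a heavily simplified assumption, not the improved SLEUTH of the study: it keeps only spontaneous and edge growth, uses uncalibrated coefficients and wrap-around boundaries, and ignores slope, transportation and self-modification.

```python
# Minimal cellular-automaton sketch of SLEUTH-style growth rules
# (spontaneous and edge growth only; the coefficients are illustrative,
# not calibrated values for Tangshan).
import numpy as np

rng = np.random.default_rng(42)
n = 100
urban = np.zeros((n, n), dtype=bool)
urban[45:55, 45:55] = True                    # seed urban core
excluded = np.zeros((n, n), dtype=bool)       # e.g. water bodies (none here)
dispersion, spread = 0.02, 0.15               # growth coefficients (assumed)

def step(urban):
    new = urban.copy()
    # spontaneous growth: any non-excluded cell may urbanize at random
    new |= (rng.random(urban.shape) < dispersion * 0.01) & ~excluded
    # edge growth: cells with >= 3 urban Moore neighbours may urbanize
    nb = sum(np.roll(np.roll(urban, dy, 0), dx, 1)
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
             if (dy, dx) != (0, 0))
    new |= (nb >= 3) & (rng.random(urban.shape) < spread) & ~excluded
    return new

for year in range(10):
    urban = step(urban)
print(urban.sum(), "urban cells after 10 steps")
```

Calibration, the part the study accelerates with ant colony optimization, amounts to searching the coefficient space so that simulated patterns like this one best reproduce historical urban maps.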

  16. Guiding and Modelling Quality Improvement in Higher Education Institutions

    Science.gov (United States)

    Little, Daniel

    2015-01-01

    The article considers the process of creating quality improvement in higher education institutions from the point of view of current organisational theory and social-science modelling techniques. The author considers the higher education institution as a functioning complex of rules, norms and other organisational features and reviews the social…

  17. Improving Perovskite Solar Cells: Insights From a Validated Device Model

    NARCIS (Netherlands)

    Sherkar, Tejas S.; Momblona, Cristina; Gil-Escrig, Lidon; Bolink, Henk J.; Koster, L. Jan Anton

    2017-01-01

    To improve the efficiency of existing perovskite solar cells (PSCs), a detailed understanding of the underlying device physics during their operation is essential. Here, a device model has been developed and validated that describes the operation of PSCs and quantitatively explains the role of conta

  18. RDI Advising Model for Improving the Teaching-Learning Process

    Science.gov (United States)

    de la Fuente, Jesus; Lopez-Medialdea, Ana Maria

    2007-01-01

    Introduction: Advising in Educational Psychology from the perspective of RDI takes on a stronger investigative, innovative nature. The model proposed by De la Fuente et al (2006, 2007) and Education & Psychology (2007) was applied to the field of improving teaching-learning processes at a school. Hypotheses were as follows: (1) interdependence…

  20. Promoting Continuous Quality Improvement in Online Teaching: The META Model

    Science.gov (United States)

    Dittmar, Eileen; McCracken, Holly

    2012-01-01

    Experienced e-learning faculty members share strategies for implementing a comprehensive postsecondary faculty development program essential to continuous improvement of instructional skills. The high-impact META Model (centered around Mentoring, Engagement, Technology, and Assessment) promotes information sharing and content creation, and fosters…

  2. Improving Project Management Using Formal Models and Architectures

    Science.gov (United States)

    Kahn, Theodore; Sturken, Ian

    2011-01-01

    This talk discusses the advantages formal modeling and architecture brings to project management. These emerging technologies have both great potential and challenges for improving information available for decision-making. The presentation covers standards, tools and cultural issues needing consideration, and includes lessons learned from projects the presenters have worked on.

  3. Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models

    Science.gov (United States)

    Marquette, Michele L.; Sognier, Marguerite A.

    2013-01-01

    An improved method for culturing immature muscle cells (myoblasts) into a mature skeletal muscle overcomes some of the notable limitations of prior culture methods. The development of the method is a major advance in tissue engineering in that, for the first time, a cell-based model spontaneously fuses and differentiates into masses of highly aligned, contracting myotubes. This method enables (1) the construction of improved two-dimensional (monolayer) skeletal muscle test beds; (2) development of contracting three-dimensional tissue models; and (3) improved transplantable tissues for biomedical and regenerative medicine applications. With adaptation, this method also offers potential application for production of other tissue types (i.e., bone and cardiac) from corresponding precursor cells.

  4. ASTEC and ICARE/CATHARE modelling improvement for VVERs

    Energy Technology Data Exchange (ETDEWEB)

    Zvonarev, Yu [Russian Research Centre ' Kurchatov Institute' (RRC KI), NRI, Kurchatov Square 1, Moscow (Russian Federation); Volchek, A., E-mail: voltchek@nsi.kiae.r [Russian Research Centre ' Kurchatov Institute' (RRC KI), NRI, Kurchatov Square 1, Moscow (Russian Federation); Kobzar, V. [Russian Research Centre ' Kurchatov Institute' (RRC KI), NRI, Kurchatov Square 1, Moscow (Russian Federation); Chatelard, P.; Van Dorsselaere, J.P. [Institut de Radioprotection et de Surete Nucleaire (IRSN), Sadarache (France)

    2011-04-15

    ASTEC and ICARE/CATHARE computer codes, developed by IRSN (France) (the former with GRS, Germany), are used in RRC KI (Russia) for the analyses of accident transients on VVER-type NPPs. The latest versions of the codes were continuously improved and validated to provide a better understanding of the main processes during hypothetical severe accidents on VVERs. This paper describes modelling improvements for VVERs carried out recently in the ICARE common part of the above codes. These actions concern the important models of fuel rod cladding mechanical behaviour and oxidation in steam at high and very high temperatures. The existing models were improved based on experience in the field and the latest literature data sources for the Zr + 1%Nb material used for manufacture of VVER fuel rod claddings. Best-fitted correlations for the Zr alloy oxidation through a broad temperature range were established, along with recommendations on model application in clad geometry and starvation conditions. A model for the creep velocity was chosen for the clad mechanical model and some cladding burst criteria were established as a function of temperature. After verification of the modelling improvements on Separate Effect Tests, validation was carried out on integral bundle tests such as QUENCH, CODEX-CT, PARAMETER-SF (the application to the CORA-VVER experiments is not described in the present paper) and on the Paks-2 cleaning tank incident. The comparison of updated code results with experimental data demonstrated very good numerical predictions, which increases the level of code applicability to VVER-type materials.

  5. A model to design effective Production Improvement Programs

    Directory of Open Access Journals (Sweden)

    T. Bautista

    2010-04-01

    Full Text Available The objective of this paper is to present a model to design effective Production Improvement Programs (PIP) in order to contribute to the solution of the problematic situations generally faced by Mexican manufacturing micro, small and medium-sized enterprises (M-SME). In this proposal, we imply that facilitating their development is a natural way to improve their performance, especially in terms of productive efficiency. The study picked up empirical evidence from the Processes Reengineering Workshop (PRW), one of the leading services of the National Committee of Productivity and Technological Innovation (NCPTI), which is considered a Mexican success case. We show through a comparative analysis that it is possible to have better programs when they follow a continuous improvement process involving the owner of the firm and workforce participation. Furthermore, we suggest a series of methods for planning, structuring and improvement according to the imitative, tacit and qualitative M-SME specific competence.

  6. Development of Improved Mechanistic Deterioration Models for Flexible Pavements

    DEFF Research Database (Denmark)

    Ullidtz, Per; Ertman, Hans Larsen

    1998-01-01

    The paper describes a pilot study in Denmark with the main objective of developing improved mechanistic deterioration models for flexible pavements based on an accelerated full scale test on an instrumented pavement in the Danish Road Testing Machine. The study was the first in the "International Pavement Subgrade Performance Study" sponsored by the Federal Highway Administration (FHWA), USA. The paper describes in detail the data analysis and the resulting models for rutting, roughness, and a model for the plastic strain in the subgrade. The reader will get an understanding of the work needed...

  7. Improved Propulsion Modeling for Low-Thrust Trajectory Optimization

    Science.gov (United States)

    Knittel, Jeremy M.; Englander, Jacob A.; Ozimek, Martin T.; Atchison, Justin A.; Gould, Julian J.

    2017-01-01

    Low-thrust trajectory design is tightly coupled with spacecraft systems design. In particular, the propulsion and power characteristics of a low-thrust spacecraft are major drivers in the design of the optimal trajectory. Accurate modeling of the power and propulsion behavior is essential for meaningful low-thrust trajectory optimization. In this work, we discuss new techniques to improve the accuracy of propulsion modeling in low-thrust trajectory optimization while maintaining the smooth derivatives that are necessary for a gradient-based optimizer. The resulting model is significantly more realistic than the industry standard and performs well inside an optimizer. A variety of deep-space trajectory examples are presented.

  8. A Cooperative Model to Improve Hospital Equipments and Drugs Management

    Science.gov (United States)

    Baffo, Ilaria; Confessore, Giuseppe; Liotta, Giacomo; Stecca, Giuseppe

    The cost of services provided by public and private healthcare systems is nowadays becoming critical. This work tackles the criticalities of hospital equipment and drug management by emphasizing their implications on the efficiency of the whole healthcare system. The work presents a multi-agent-based model for decisional cooperation in order to address the problem of integrating departments, wards and personnel for improving equipment and drug management. The proposed model faces the challenge of (i) gaining the benefits deriving from successful collaborative models already used in industrial systems and (ii) transferring the most appropriate industrial management practices to healthcare systems.

  9. Mining Object Similarity for Predicting Next Locations

    Institute of Scientific and Technical Information of China (English)

    Meng Chen; Xiaohui Yu; Yang Liu

    2016-01-01

    Next location prediction is of great importance for many location-based applications. By virtue of their solid theoretical foundations, Markov-based approaches have gained success along this direction. In this paper, we seek to enhance the prediction performance by understanding the similarity between objects. In particular, we propose a novel method, called the weighted Markov model (weighted-MM), which exploits both the sequence of just-passed locations and the object similarity in mining the mobility patterns. To this end, we first train a Markov model for each object with its own trajectory records, and then quantify the similarities between different objects from two aspects: spatial locality similarity and trajectory similarity. Finally, we incorporate the object similarity into the Markov model by taking the similarity as the weight of the probability of reaching each possible next location, and return the top-ranked locations as results. We have conducted extensive experiments on a real dataset, and the results demonstrate significant improvements in prediction accuracy over existing solutions.
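The weighting idea can be sketched in a few lines. The toy below uses hypothetical trajectories and a crude Jaccard overlap standing in for the paper's spatial-locality and trajectory similarities: each object gets its own first-order Markov model, and candidate next locations are scored by similarity-weighted votes across objects.

```python
# Sketch of the weighted-MM idea (illustrative, not the authors' code):
# per-object first-order Markov transition probabilities, combined across
# objects with similarity weights when scoring the next location.
from collections import defaultdict

trajectories = {                      # hypothetical location sequences
    "a": ["home", "cafe", "work", "gym", "home", "cafe", "work"],
    "b": ["home", "cafe", "work", "home", "cafe", "work", "gym"],
    "c": ["park", "mall", "park", "mall", "park"],
}

def transition_probs(seq):
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1
    probs = {}
    for prev, nxts in counts.items():
        total = sum(nxts.values())
        probs[prev] = {nxt: c / total for nxt, c in nxts.items()}
    return probs

models = {obj: transition_probs(seq) for obj, seq in trajectories.items()}

def jaccard(o1, o2):                  # crude stand-in for object similarity
    s1, s2 = set(trajectories[o1]), set(trajectories[o2])
    return len(s1 & s2) / len(s1 | s2)

def predict_next(obj, current, top_k=1):
    scores = defaultdict(float)
    for other, model in models.items():
        w = 1.0 if other == obj else jaccard(obj, other)
        for loc, p in model.get(current, {}).items():
            scores[loc] += w * p      # similarity-weighted vote
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(predict_next("a", "cafe"))      # -> ['work']
```

Objects with dissimilar movement patterns (here, "c") contribute little or nothing to a prediction, so the weighting pools evidence only from peers with comparable mobility.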

  10. Modeling solar cells: A method for improving their efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Morales-Acevedo, Arturo, E-mail: amorales@solar.cinvestav.mx [Centro de Investigacion y de Estudios Avanzados del IPN, Electrical Engineering Department, Avenida IPN No. 2508, 07360 Mexico, D.F. (Mexico); Hernandez-Como, Norberto; Casados-Cruz, Gaspar [Centro de Investigacion y de Estudios Avanzados del IPN, Electrical Engineering Department, Avenida IPN No. 2508, 07360 Mexico, D.F. (Mexico)

    2012-09-20

    After a brief discussion of the theoretical basis for simulating solar cells and the available programs for doing so, we proceed to discuss two examples that show the importance of numerical simulation of solar cells. We shall concentrate on silicon Heterojunction Intrinsic Thin film aSi/cSi (HIT) and CdS/CuInGaSe{sub 2} (CIGS) solar cells. In the first case, we will show that numerical simulation indicates that there is an optimum transparent conducting oxide (TCO) to be used in contact with the p-type aSi:H emitter layer, although many experimental researchers might think that the results would be similar regardless of the TCO film used. In this case, it is shown that high work function TCO materials such as ZnO:Al are much better than smaller work function films such as ITO. HIT solar cells made with small work function TCO layers (<4.8 eV) will never be able to reach the high efficiencies already reported experimentally. It will also be discussed that simulations of CIGS solar cells by different groups predict efficiencies around 18-19% or even less, i.e. below the record efficiency reported experimentally (20.3%). In addition, the experimental band-gap which is optimum in this case is around 1.2 eV, while several theoretical results predict a higher optimum band-gap (1.4-1.5 eV). This means that there are other effects not included in most of the simulation models developed until today. One of them is the possible presence of an interfacial (inversion) layer between CdS and CIGS. It is shown that this inversion layer might explain the smaller observed optimum band-gap, but some efficiency is lost. It is discussed that another possible explanation for the higher experimental efficiency is the possible variation of Ga concentration in the CIGS film causing a gradual variation of the band-gap. This band-gap grading might help improve the open-circuit voltage and, if it is appropriately done, it can also cause the enhancement of the photo-current density.

  11. Improved environmental multimedia modeling and its sensitivity analysis.

    Science.gov (United States)

    Yuan, Jing; Elektorowicz, Maria; Chen, Zhi

    2011-01-01

    Modeling of multimedia environmental issues is extremely complex due to the intricacy of the systems and the many factors to be considered. In this study, an improved environmental multimedia model is developed, and a number of testing problems related to it are examined and compared with each other using standard numerical and analytical methodologies. The results indicate that the flux output of the new model is lower in the unsaturated zone and groundwater zone than that of the traditional environmental multimedia model. Furthermore, about 90% of the total benzene flux was distributed to the air zone from the landfill sources, and only 10% of the total flux was emitted into the unsaturated and groundwater zones under non-uniform conditions. This paper also includes model sensitivity analysis to optimize model parameters such as the Peclet number (Pe). The analysis results show that Pe can be considered a deterministic input variable for transport output. The oscillatory behavior is eliminated as Pe is decreased. In addition, the numerical methods are more accurate than the analytical methods as Pe increases. In conclusion, the improved environmental multimedia model system and its sensitivity analysis can be used to address the complex fate and transport of pollutants in multimedia environments and thus help to manage environmental impacts.
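The Peclet-number behavior noted above (oscillations that vanish as Pe decreases) can be reproduced with a minimal 1-D advection-diffusion solve. This is a generic textbook illustration, not the paper's multimedia model; the velocity, diffusivity and grid sizes are assumed for demonstration.

```python
# Generic illustration of the Peclet-number effect: central-difference
# solutions of steady 1-D advection-diffusion v*c' = D*c'' oscillate when
# the cell Peclet number v*dx/D exceeds 2, and become monotone when Pe is
# reduced by refining the grid. Parameters are assumed, not the paper's.
import numpy as np

def solve_adv_diff(n, v=1.0, D=0.1, L=1.0):
    """Central-difference solve of v*c' = D*c'' with c(0)=0, c(L)=1."""
    dx = L / n
    pe_cell = v * dx / D
    a = -(D / dx**2 + v / (2 * dx))       # coefficient of c[i-1]
    b = 2 * D / dx**2                     # coefficient of c[i]
    c = -(D / dx**2 - v / (2 * dx))       # coefficient of c[i+1]
    A = (np.diag(np.full(n - 1, b))
         + np.diag(np.full(n - 2, a), -1)
         + np.diag(np.full(n - 2, c), 1))
    rhs = np.zeros(n - 1)
    rhs[-1] = -c * 1.0                    # right boundary value c(L) = 1
    return pe_cell, np.linalg.solve(A, rhs)

def oscillates(sol):
    """True if successive increments change sign anywhere."""
    d = np.diff(sol)
    return bool(np.any(d[:-1] * d[1:] < 0))

pe_coarse, sol_coarse = solve_adv_diff(n=4)    # cell Pe = 2.5: oscillatory
pe_fine, sol_fine = solve_adv_diff(n=40)       # cell Pe = 0.25: monotone
print(pe_coarse, oscillates(sol_coarse))
print(pe_fine, oscillates(sol_fine))
```

The threshold Pe = 2 comes from the amplification factor (1 + Pe/2)/(1 - Pe/2) of the central scheme turning negative, which is why decreasing Pe removes the oscillation, as the sensitivity analysis observes.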

  12. On improving the communication between models and data.

    Science.gov (United States)

    Dietze, Michael C; Lebauer, David S; Kooper, Rob

    2013-09-01

    The potential for model-data synthesis is growing in importance as we enter an era of 'big data', greater connectivity and faster computation. Realizing this potential requires that the research community broaden its perspective about how and why they interact with models. Models can be viewed as scaffolds that allow data at different scales to inform each other through our understanding of underlying processes. Perceptions of relevance, accessibility and informatics are presented as the primary barriers to broader adoption of models by the community, while an inability to fully utilize the breadth of expertise and data from the community is a primary barrier to model improvement. Overall, we promote a community-based paradigm to model-data synthesis and highlight some of the tools and techniques that facilitate this approach. Scientific workflows address critical informatics issues in transparency, repeatability and automation, while intuitive, flexible web-based interfaces make running and visualizing models more accessible. Bayesian statistics provides powerful tools for assimilating a diversity of data types and for the analysis of uncertainty. Uncertainty analyses enable new measurements to target those processes most limiting our predictive ability. Moving forward, tools for information management and data assimilation need to be improved and made more accessible.

  13. Apparent Arctic sea ice modeling improvement caused by volcanoes

    CERN Document Server

    Rosenblum, Erica

    2016-01-01

    The downward trend in Arctic sea ice extent is one of the most dramatic signals of climate change during recent decades. Comprehensive climate models have struggled to reproduce this, typically simulating a slower rate of sea ice retreat than has been observed. However, this bias has been substantially reduced in models participating in the most recent phase of the Coupled Model Intercomparison Project (CMIP5) compared with the previous generation of models (CMIP3). This improvement has been attributed to improved physics in the models. Here we examine simulations from CMIP3 and CMIP5 and find that simulated sea ice trends are strongly influenced by historical volcanic forcing, which was included in all of the CMIP5 models but in only about half of the CMIP3 models. The volcanic forcing causes temporary simulated cooling in the 1980s and 1990s, which contributes to raising the simulated 1979-2013 global-mean surface temperature trends to values substantially larger than observed. This warming bias is accompan...

  14. Improvements on Semi-Classical Distorted-Wave model

    Energy Technology Data Exchange (ETDEWEB)

    Sun Weili; Watanabe, Y.; Kuwata, R. [Kyushu Univ., Fukuoka (Japan); Kohno, M.; Ogata, K.; Kawai, M.

    1998-03-01

    A method of improving the Semi-Classical Distorted Wave (SCDW) model in terms of the Wigner transform of the one-body density matrix is presented. The finite size effect of atomic nuclei can be taken into account by using the single-particle wave functions for a harmonic oscillator or Woods-Saxon potential, instead of those based on the local Fermi-gas model which were incorporated into the previous SCDW model. We carried out a preliminary SCDW calculation of the 160 MeV (p,p'x) reaction on {sup 90}Zr with the Wigner transform of harmonic oscillator wave functions. It is shown that the presently calculated angular distributions increase markedly at backward angles compared with the previous ones, and the agreement with the experimental data is improved. (author)

  15. An improved HCI degradation model for a VLSI MOSFET

    Institute of Scientific and Technical Information of China (English)

    Tang Yi; Wan Xinggong; Gu Xiang; Wang Wenyuan; Zhang Huirui; Liu Yuwei

    2009-01-01

    An improved hot carrier injection (HCI) degradation model is proposed based on interface trap generation and oxide charge injection theory. It was evident that the degradation behavior of electric parameters such as I_(dlin), I_(dsat), G_m and V_t fitted well with this model. Devices were prepared with 0.35 μm technology and different LDD processes, and I_(dlin) and I_(dsat) after HCI stress were analyzed with the improved model. The effects of interface trap generation and oxide charge injection on device degradation were extracted, and the charge injection site could be obtained by this method. The work provides important information to device designers and process engineers.

  16. Correction, improvement and model verification of CARE 3, version 3

    Science.gov (United States)

    Rose, D. M.; Manke, J. W.; Altschul, R. E.; Nelson, D. L.

    1987-01-01

    An independent verification of the CARE 3 mathematical model and computer code was conducted and reported in NASA Contractor Report 166096, Review and Verification of CARE 3 Mathematical Model and Code: Interim Report. The study uncovered some implementation errors that were corrected and are reported in this document. The corrected CARE 3 program is called version 4. Thus the document, Correction, Improvement, and Model Verification of CARE 3, Version 3, was written in April 1984. It is being published now as it has been determined to contain a more accurate representation of CARE 3 than the preceding document of April 1983. This edition supersedes NASA-CR-166122, entitled 'Correction and Improvement of CARE 3, Version 3', April 1983.

  17. An improved model for TPV performance predictions and optimization

    Science.gov (United States)

    Schroeder, K. L.; Rose, M. F.; Burkhalter, J. E.

    1997-03-01

    Previously, a model was presented for calculating the performance of a TPV system. This model has been revised into a general-purpose algorithm, improved in fidelity, and is presented here. The basic model is an energy-based formulation that evaluates both the radiant and heat-source elements of a combustion-based system. Improvements in the radiant calculations include the use of ray-tracing formulations and view factors for evaluating various flat-plate and cylindrical configurations. Calculation of photocell temperature and performance parameters as a function of position and incident power has also been incorporated. Heat-source calculations have been fully integrated into the code by incorporating a modified version of the NASA Complex Chemical Equilibrium Compositions and Applications (CEA) code. Additionally, coding has been incorporated to allow optimization of various system parameters and configurations. Several example cases are presented and compared, and an optimum flat-plate emitter/filter/photovoltaic configuration is also described.

  18. On the similarity in shape between debris-flow channels and high-gradient flood channels: Initial insight from continuum models for granular and water flow

    Science.gov (United States)

    Kean, J. W.; McCoy, S. W.; Tucker, G. E.

    2011-12-01

    The cross-sectional shape of high-gradient bedrock channels carved by debris flows is often very similar to that of channels formed by fluvial erosion. Both tend to have narrow U-shapes with width-to-depth ratios much less than 10. Gullies and channels cut into colluvium by both water erosion and debris-flow erosion have similarly narrow geometries. Given that the physics governing debris flow and turbulent water flow are very different, why are channels eroded by these two processes so similar in shape? To begin to investigate this question, we conducted a series of numerical simulations using continuum models for the end-member cases of granular flow and water flow. Each model is used to evolve the steady-state channel shape formed by uniform flow of the respective material. The granular model is based on the constitutive equation for dense granular flow proposed by Jop et al. (Nature, 2006). They demonstrated that without any fitting parameters, a numerical model using this constitutive equation could reproduce the velocity and depth profiles observed in granular-flow laboratory experiments. The model for water flow uses a ray-isovel turbulence closure to calculate the boundary shear stress across the wetted perimeter of the channel. This fully predictive model has also been shown to be in good agreement with laboratory data. We start the calculations for the granular and water-flow cases by determining the velocity and boundary shear-stress fields in an initial V-shaped cross section. We then erode both channels using a simple wear law scaled linearly by the bed-normal boundary shear stress. The calculation is repeated until the channel reaches an equilibrium shape. Initial comparisons of the granular and water-flow channels show that they have very similar width-to-depth ratios of about four, and only moderate differences in bottom geometries and boundary shear-stress distributions. The structure of the velocity field, however, differs more substantially between the two.

  19. An Improved Nonlinear Five-Point Model for Photovoltaic Modules

    Directory of Open Access Journals (Sweden)

    Sakaros Bogning Dongue

    2013-01-01

    This paper presents an improved nonlinear five-point model capable of analytically describing the electrical behavior of a photovoltaic module under generic operating conditions of temperature and solar irradiance. The models used to replicate the electrical behavior of operating PV modules are usually based on simplified assumptions that yield a convenient mathematical model for conventional simulation tools. Unfortunately, these assumptions cause inaccuracies, and hence unrealistic economic returns are predicted. As an alternative, we exploit the advantages of a nonlinear analytical five-point model to take into account the non-ideal diode effects and other generally ignored nonlinear effects on which PV module operation depends. To verify the capability of our method to fit PV panel characteristics, the procedure was tested on three different panels. Results were compared with the data issued by manufacturers and with the results obtained using the five-parameter model proposed by other authors.
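
    For context, the five-parameter single-diode equation that simpler PV module models are built on can be sketched as below. All parameter values are hypothetical, and the damped fixed-point loop is just one convenient way to handle the implicit equation; it is not the five-point formulation of the paper.

```python
import math

# Standard five-parameter single-diode model (an illustration, not the
# paper's five-point formulation). All parameter values are hypothetical.
I_ph, I_0 = 5.0, 1e-9      # photocurrent, diode saturation current (A)
R_s, R_sh = 0.2, 300.0     # series and shunt resistance (ohm)
n, V_t = 1.3, 0.0258 * 36  # ideality factor, thermal voltage x 36 cells

def cell_current(V, iters=200):
    """Solve I = I_ph - I_0*(exp((V+I*R_s)/(n*V_t)) - 1) - (V+I*R_s)/R_sh
    by damped fixed-point iteration."""
    I = I_ph
    for _ in range(iters):
        I_new = I_ph - I_0 * (math.exp((V + I * R_s) / (n * V_t)) - 1) \
                - (V + I * R_s) / R_sh
        I = 0.5 * I + 0.5 * I_new  # damping keeps the iteration stable
    return I

# Current is close to I_ph at short circuit and drops towards open circuit.
print(cell_current(0.0), cell_current(26.0))
```

    Sweeping V and plotting cell_current(V) reproduces the familiar I-V curve that both five-parameter and five-point models are fitted against.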

  20. Soil hydraulic properties near saturation, an improved conductivity model

    DEFF Research Database (Denmark)

    Børgesen, Christen Duus; Jacobsen, Ole Hørbye; Hansen, Søren;

    2006-01-01

    The hydraulic properties near saturation can change dramatically due to the presence of macropores, which are usually difficult to handle in traditional pore-size models. The purpose of this study is to establish a data set on hydraulic conductivity near saturation, test the predictive capability of commonly used hydraulic conductivity models and give suggestions for improved models. Water retention and near-saturated and saturated hydraulic conductivity were measured for 81 topsoils and subsoils. The hydraulic conductivity models of van Genuchten [van Genuchten, 1980. A closed-form equation for predicting the hydraulic conductivity of unsaturated soils. Soil Sci. Soc. Am. J. 44, 892–898.] (vGM) and Brooks and Corey, modified by Jarvis [Jarvis, 1991. MACRO—A Model of Water Movement and Solute Transport in Macroporous Soils. Swedish University of Agricultural Sciences, Department of Soil Sciences…]

  1. Improvement of airfoil trailing edge bluntness noise model

    DEFF Research Database (Denmark)

    Zhu, Wei Jun; Shen, Wen Zhong; Sørensen, Jens Nørkær;

    2016-01-01

    This work builds on the airfoil noise prediction model developed by Brooks, Pope, and Marcolini (NASA Reference Publication 1218, 1989). It was found in a previous study that the Brooks, Pope, and Marcolini model tends to over-predict noise at high frequencies, and it was observed that this was caused by the model's inability to accurately predict noise from blunt trailing edges. For a more physical understanding of bluntness noise generation, in this study we also use an advanced, in-house developed, high-order computational aero-acoustic technique to investigate the details associated with trailing edge bluntness noise. The results from the numerical model form the basis for an improved Brooks, Pope, and Marcolini trailing edge bluntness noise model.

  2. Methods improvements incorporated into the SAPHIRE ASP models

    Energy Technology Data Exchange (ETDEWEB)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D. [Idaho National Engineering Lab., Idaho Falls, ID (United States)] [and others]

    1995-04-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements.

  3. An improved transfer-matrix model for optical superlenses.

    Science.gov (United States)

    Moore, Ciaran P; Blaikie, Richard J; Arnold, Matthew D

    2009-08-01

    The use of transfer-matrix analyses for characterizing planar optical superlensing systems is studied here, and the simple model of the planar superlens as an isolated imaging element is shown to be defective in certain situations. These defects arise due to neglected interactions between the superlens and the spatially varying shadow masks that are normally used as scattering objects for imaging, and which are held in near-field proximity to the superlenses. An extended model is proposed that improves the accuracy of the transfer-matrix analysis, without adding significant complexity, by approximating the reflections from the shadow mask by those from a uniform metal layer. Results obtained using both forms of the transfer matrix model are compared to finite element models and two example superlenses, one with a silver monolayer and the other with three silver sublayers, are characterized. The modified transfer matrix model gives much better agreement in both cases.
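
    The transfer-matrix formalism referred to above can be illustrated at normal incidence with the standard characteristic-matrix formulation. This is a generic textbook sketch, not the paper's extended model; the silver optical constants are rough placeholder values.

```python
import numpy as np

def layer_matrix(n_complex, d, wavelength):
    """Characteristic matrix of a homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n_complex * d / wavelength
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n_complex],
                     [1j * n_complex * np.sin(delta), np.cos(delta)]])

def transmittance(layers, n_in, n_out, wavelength):
    """layers: list of (complex refractive index, thickness) pairs."""
    M = np.eye(2, dtype=complex)
    for n_c, d in layers:
        M = M @ layer_matrix(n_c, d, wavelength)
    t = 2 * n_in / (n_in * M[0, 0] + n_in * n_out * M[0, 1]
                    + M[1, 0] + n_out * M[1, 1])
    return abs(t) ** 2 * n_out.real / n_in.real

# Hypothetical silver monolayer (n ~ 0.13 + 3.2j near 365 nm) in air.
silver = 0.13 + 3.2j
print(transmittance([(silver, 40e-9)], 1.0 + 0j, 1.0 + 0j, 365e-9))
```

    The paper's modification amounts to adding a uniform metal layer on the object side of such a stack to approximate reflections from the shadow mask.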

  4. An Improved Equivalent Simulation Model for CMOS Integrated Hall Plates

    Directory of Open Access Journals (Sweden)

    Yue Xu

    2011-06-01

    An improved equivalent simulation model for a CMOS-integrated Hall plate is described in this paper. Compared with existing models, this model covers voltage-dependent non-linear effects, geometrical effects, temperature effects and packaging stress influences, and only includes a small number of physical and technological parameters. In addition, the structure of this model is relatively simple, consisting of a passive network with eight non-linear resistances, four current-controlled voltage sources and four parasitic capacitances. The model has been written in the Verilog-A hardware description language and performed successfully in a Cadence Spectre simulator. The model's simulation results are in good agreement with the classic experimental results reported in the literature.

  5. An improved model for reduced-order physiological fluid flows

    CERN Document Server

    San, Omer; 10.1142/S0219519411004666

    2012-01-01

    An improved one-dimensional mathematical model based on Pulsed Flow Equations (PFE) is derived by integrating the axial component of the momentum equation over the transient Womersley velocity profile, providing a dynamic momentum equation whose coefficients are smoothly varying functions of the spatial variable. The resulting momentum equation, along with the continuity equation and pressure-area relation, forms our reduced-order model for physiological fluid flows in one dimension, aimed at providing accurate and fast-to-compute global models for physiological systems represented as networks of quasi one-dimensional fluid flows. The consequent nonlinear coupled system of equations is solved by the Lax-Wendroff scheme and is then applied to an open model arterial network of the human vascular system containing the largest fifty-five arteries. The proposed model with functional coefficients is compared with current classical one-dimensional theories which assume a steady-state Hagen-Poiseuille velocity profile…
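
    The Lax-Wendroff scheme mentioned above can be illustrated on the simplest possible case, linear advection with periodic boundaries. This toy sketch stands in for, and is far simpler than, the paper's coupled nonlinear arterial system.

```python
import numpy as np

# Lax-Wendroff for linear advection u_t + a*u_x = 0 on a periodic domain.
a, L, N = 1.0, 1.0, 200
dx = L / N
dt = 0.4 * dx / a                      # CFL number 0.4
x = np.arange(N) * dx
u = np.exp(-100 * (x - 0.5) ** 2)      # Gaussian pulse centred at x = 0.5
c = a * dt / dx

for _ in range(int(0.25 / dt)):        # advect for t = 0.25
    up = np.roll(u, -1)                # periodic neighbours
    um = np.roll(u, 1)
    u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2 * u + um)

# The pulse peak should now sit near x = 0.75.
print(x[np.argmax(u)])
```

    The scheme is second-order accurate in space and time, which is why it keeps the pulse shape well; for the nonlinear vascular system the same update is applied to the flux form of the equations.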

  6. Estimating Evapotranspiration from an Improved Two-Source Energy Balance Model Using ASTER Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Qifeng Zhuang

    2015-11-01

    Reliably estimating the turbulent fluxes of latent and sensible heat at the Earth's surface by remote sensing is important for research on the terrestrial hydrological cycle. This paper presents a practical approach for mapping surface energy fluxes using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images with an improved two-source energy balance (TSEB) model. The original TSEB approach may overestimate latent heat flux under vegetative stress conditions, as has also been reported in recent research. We replaced the Priestley-Taylor equation used in the original TSEB model with one that uses plant moisture and temperature constraints based on the PT-JPL model to obtain a more accurate canopy latent heat flux for model solving. The ASTER data and field observations employed in this study cover corn fields in arid regions of the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) area, China. The results were validated by measurements from eddy covariance (EC) systems, and the surface energy flux estimates of the improved TSEB model are similar to the ground truth. A comparison of the results from the original and improved TSEB models indicates that the improved method more accurately estimates the sensible and latent heat fluxes, generating more precise daily evapotranspiration (ET) estimates under vegetative stress conditions.

  7. Similar Materials in Large-scale Shaking Table Model Test%大型振动台试验相似材料研究

    Institute of Scientific and Technical Information of China (English)

    邹威; 许强; 刘汉香

    2012-01-01

    On the basis of analyzing and summarizing recent research on the rock-like similar materials commonly used in physical simulation experiments in geotechnical engineering, similar materials for shaking table model tests were compounded from barite powder, quartz sand, gypsum and glycerol. Through mechanical tests of different material proportions, and taking the water content of the similar materials into account, the influence of the content of each component and of changes in water content on the physical and mechanical properties of the similar materials was studied. It was found that water content has a great influence on the compressive strength, cohesion, elastic modulus and friction angle of the materials. On the basis of these results, the proportions of the similar material for the shaking table model test were finally determined. Tests proved that the mechanical indices of this material are stable, that it satisfies the selection requirements for similar materials, that it is convenient for casting relatively large model specimens in a single pour, and that it is suitable for physical simulation experiments such as large-scale shaking table tests.

  8. An Improved Direction Relation Detection Model for Spatial Objects

    Institute of Scientific and Technical Information of China (English)

    FENG Yucai; YI Baolin

    2004-01-01

    Direction is a common spatial concept used in daily life and is frequently used as a selection condition in spatial queries. As a result, it is important for spatial databases to provide a mechanism for modeling and processing direction queries and reasoning. Based on the direction relation matrix, an inverted direction relation matrix and the concept of direction predominance are proposed to improve the detection of direction relations between objects. Direction predicates of spatial systems are also extended. These techniques improve the accuracy of direction queries and reasoning. Experiments show excellent efficiency and performance for direction queries.

  9. Improving PARSEC models for very low mass stars

    CERN Document Server

    Chen, Yang; Bressan, Alessandro; Marigo, Paola; Barbieri, Mauro; Kong, Xu

    2014-01-01

    Many stellar models have difficulty reproducing basic observational relations of very low mass stars (VLMS), including the mass-radius relation and the optical colour-magnitudes of cool dwarfs. Here, we improve the PARSEC models on these points. We implement the T-tau relations from PHOENIX BT-Settl model atmospheres as the outer boundary conditions in the PARSEC code, finding that this change alone reduces the discrepancy in the mass-radius relation from 8 to 5 per cent. We compare the models with multi-band photometry of the clusters Praesepe and M67, showing that the use of the T-tau relations clearly improves the description of the optical colours and magnitudes. However, using both Kurucz and PHOENIX model spectra, model colours are still systematically fainter and bluer than the observations. We then apply a shift to the above T-tau relations, increasing from 0 at T_eff = 4730 K to ~14% at T_eff = 3160 K, to reproduce the observed mass-radius relation of dwarf stars. Taking this experiment…

  10. Improving distributed hydrologic modeling and global land cover data

    Science.gov (United States)

    Broxton, Patrick

    Distributed models of the land surface are essential for global climate models because of the importance of land-atmosphere exchanges of water, energy, and momentum. They are also used for high-resolution hydrologic simulation because of the need to capture non-linear responses to spatially variable inputs. Continued improvement of these models, and of the data they use, is especially important given ongoing changes in climate and land cover. In hydrologic models, important aspects are sometimes neglected due to the need to simplify the models for operational simulation. For example, operational flash flood models do not consider the role of snow and are often lumped (i.e., they do not discretize a watershed into multiple units, and so do not fully consider the effect of intense, localized rainstorms). To address this deficiency, an overland flow model is coupled with a subsurface flow model to create a distributed flash flood forecasting system that can simulate flash floods that involve rain on snow. The model is intended for operational use, and there are extensive algorithms to incorporate high-resolution hydrometeorologic data, to assist in the calibration of the models, and to run the model in real time. A second study, designed to improve snow simulation in forested environments, demonstrates the importance of explicitly representing the near-canopy environment in snow models, instead of representing only open and canopy-covered areas (i.e., with % canopy fraction), as is often done. Our modeling, which uses canopy structure information from Aerial Laser Survey Mapping at 1 meter resolution, suggests that areas near trees receive more net snow water input than surrounding areas because of the lack of snow interception, shading by the trees, and the effects of wind. The greatest discrepancies between our model simulations that explicitly represent forest structure and those that do not arise in areas with more canopy edges. In addition, two value…

  11. Ab initio structural modeling of and experimental validation for Chlamydia trachomatis protein CT296 reveal structural similarity to Fe(II) 2-oxoglutarate-dependent enzymes

    Energy Technology Data Exchange (ETDEWEB)

    Kemege, Kyle E.; Hickey, John M.; Lovell, Scott; Battaile, Kevin P.; Zhang, Yang; Hefty, P. Scott (Michigan); (Kansas); (HWMRI)

    2012-02-13

    Chlamydia trachomatis is a medically important pathogen that encodes a relatively high percentage of proteins with unknown function. The three-dimensional structure of a protein can be very informative regarding the protein's functional characteristics; however, determining protein structures experimentally can be very challenging. Computational methods that model protein structures with sufficient accuracy to facilitate functional studies have had notable successes. To evaluate the accuracy and potential impact of computational protein structure modeling of hypothetical proteins encoded by Chlamydia, a successful computational method termed I-TASSER was utilized to model the three-dimensional structure of a hypothetical protein encoded by open reading frame (ORF) CT296. CT296 has been reported to exhibit functional properties of a divalent cation transcription repressor (DcrA), with similarity to the Escherichia coli iron-responsive transcriptional repressor, Fur. Unexpectedly, the I-TASSER model of CT296 exhibited no structural similarity to any DNA-interacting proteins or motifs. To validate the I-TASSER-generated model, the structure of CT296 was solved experimentally using X-ray crystallography. Impressively, the ab initio I-TASSER-generated model closely matched (2.72-Å Cα root mean square deviation [RMSD]) the high-resolution (1.8-Å) crystal structure of CT296. Modeled and experimentally determined structures of CT296 share structural characteristics of non-heme Fe(II) 2-oxoglutarate-dependent enzymes, although key enzymatic residues are not conserved, suggesting a unique biochemical process is likely associated with CT296 function. Additionally, functional analyses did not support prior reports that CT296 has properties shared with divalent cation repressors such as Fur.
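
    The Cα RMSD quoted above is a root-mean-square deviation between corresponding atoms of two superposed structures. A minimal sketch, on fabricated coordinates rather than the CT296 structures, shows the calculation (superposition itself, e.g. the Kabsch algorithm, is assumed to have been done already):

```python
import numpy as np

# RMSD between two pre-superposed coordinate sets. The coordinates here are
# synthetic stand-ins for C-alpha positions, not real structural data.
rng = np.random.default_rng(2)
xyz_a = rng.normal(size=(100, 3))                      # "crystal" coords
xyz_b = xyz_a + rng.normal(scale=0.5, size=(100, 3))   # perturbed "model"

def rmsd(a, b):
    """Root-mean-square deviation over paired atoms."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

# Per-axis noise of 0.5 gives an RMSD of roughly sqrt(3)*0.5 on average.
print(rmsd(xyz_a, xyz_b))
```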

  12. Loss-improved electroacoustical modeling of small Helmholtz resonators.

    Science.gov (United States)

    Starecki, Tomasz

    2007-10-01

    Modeling of small Helmholtz resonators based on electroacoustical analogies often results in significant disagreement with measurements, as existing models do not take into account some losses that are observed in practical implementations of such acoustical circuits, e.g., in photoacoustic Helmholtz cells. The paper presents a method which introduces loss corrections to the transmission line model, resulting in substantial improvement of simulations. Values of the loss corrections obtained from comparison of frequency responses of practically implemented resonators with computer simulations are presented in tabular and graphical form. A simple analytical function that can be used for interpolation or extrapolation of the loss corrections for other dimensions of the Helmholtz resonators is also given. Verification of such a modeling method against an open two-cavity Helmholtz structure shows very good agreement between measurements and simulations.
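
    The lossless starting point for such electroacoustical models is the lumped-element resonance frequency f0 = (c / 2π) · sqrt(A / (V · L_eff)). The sketch below uses a common textbook end correction and hypothetical dimensions; the paper's contribution is the loss corrections applied on top of a transmission-line version of this model.

```python
import math

def helmholtz_frequency(c, neck_radius, neck_length, volume):
    """Lumped-element Helmholtz resonance f0 = (c/2*pi)*sqrt(A/(V*L_eff)).
    Uses an end correction of 0.85*r per flanged neck end (textbook value)."""
    A = math.pi * neck_radius ** 2
    L_eff = neck_length + 2 * 0.85 * neck_radius
    return c / (2 * math.pi) * math.sqrt(A / (volume * L_eff))

# Hypothetical small resonator: 1.5 mm neck radius, 10 mm neck, 2 cm^3 cavity.
print(helmholtz_frequency(343.0, 1.5e-3, 10e-3, 2e-6))
```

    For resonators this small, viscous and thermal losses in the narrow neck shift and broaden the resonance, which is exactly the disagreement the loss-corrected model addresses.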

  13. An improved source model for aircraft interior noise studies

    Science.gov (United States)

    Mahan, J. R.; Fuller, C. R.

    1985-01-01

    There is concern that advanced turboprop engines currently being developed may produce excessive aircraft cabin noise levels. This concern has stimulated renewed interest in developing aircraft interior noise reduction methods that do not significantly increase takeoff weight. An existing analytical model for noise transmission into aircraft cabins was utilized to investigate the behavior of an improved propeller source model for use in aircraft interior noise studies. The new source model, a virtually rotating dipole, is shown to adequately match measured fuselage sound pressure distributions, including the correct phase relationships, for published data. The virtually rotating dipole is used to study the sensitivity of synchrophasing effectiveness to the fuselage sound pressure trace velocity distribution. Results of calculations are presented which reveal the importance of correctly modeling the surface pressure phase relations in synchrophasing and other aircraft interior noise studies.

  14. Improved Wave-vessel Transfer Functions by Uncertainty Modelling

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Fønss Bach, Kasper; Iseki, Toshio

    2016-01-01

    This paper deals with uncertainty modelling of wave-vessel transfer functions used to calculate or predict wave-induced responses of a ship in a seaway. Although transfer functions can, in theory, be calculated to exactly reflect the behaviour of the ship when exposed to waves, uncertainty in input variables, notably speed, draft and relative wave heading, often compromises results. In this study, uncertainty modelling is applied to improve theoretically calculated transfer functions so that they better fit the corresponding experimental, full-scale ones. Based on a vast amount of full-scale measurement data, it is shown that uncertainty modelling can be successfully used to improve the accuracy (and reliability) of theoretical transfer functions.

  15. Process-Improvement Cost Model for the Emergency Department.

    Science.gov (United States)

    Dyas, Sheila R; Greenfield, Eric; Messimer, Sherri; Thotakura, Swati; Gholston, Sampson; Doughty, Tracy; Hays, Mary; Ivey, Richard; Spalding, Joseph; Phillips, Robin

    2015-01-01

    The objective of this report is to present a simplified, activity-based costing approach for hospital emergency departments (EDs) to use with Lean Six Sigma cost-benefit analyses. The cost model complexity is reduced by removing diagnostic and condition-specific costs, thereby revealing the underlying process activities' cost inefficiencies. Examples are provided for evaluating the cost savings from reducing discharge delays and the cost impact of keeping patients in the ED (boarding) after the decision to admit has been made. The process-improvement cost model provides a needed tool in selecting, prioritizing, and validating Lean process-improvement projects in the ED and other areas of patient care that involve multiple dissimilar diagnoses.

  16. Modelling Niche Differentiation of Co-Existing, Elusive and Morphologically Similar Species: A Case Study of Four Macaque Species in Nakai-Nam Theun National Protected Area, Laos

    Directory of Open Access Journals (Sweden)

    Camille N. Z. Coudrat

    2013-01-01

    Species misidentification often occurs when dealing with co-existing and morphologically similar species such as macaques, making the study of their ecology challenging. To overcome this issue, we use reliable occurrence data from camera-trap images and transect survey data to model their respective ecological niche and potential distribution locally in Nakai-Nam Theun National Protected Area (NNT NPA), central-Eastern Laos. We investigate niche differentiation of morphologically similar species using four sympatric macaque species in NNT NPA as our model species: rhesus Macaca mulatta (Taxonomic Serial Number, TSN 180099), Northern pig-tailed M. leonina (TSN not listed), Assamese M. assamensis (TSN 573018) and stump-tailed M. arctoides (TSN 573017). We examine the implications for their conservation. We obtained occurrence data of macaque species from systematic 2006–2011 camera-trapping surveys and 2011–2012 transect surveys and model their niche and potential distribution with MaxEnt software using 25 environmental and topographic variables. The respective suitable habitat predicted for each species reveals niche segregation between the four species with a gradual geographical distribution following an environmental gradient within the study area. Camera-trapping positioned at many locations can increase elusive-species records with a relatively reduced and more systematic sampling effort and provide reliable species occurrence data. These can be used for environmental niche modelling to study niche segregation of morphologically similar species in areas where their distribution remains uncertain. Examining unresolved species' niches and potential distributions can have crucial implications for future research and species' management and conservation even in the most remote regions and for the least-known species.

  17. Modelling Niche Differentiation of Co-Existing, Elusive and Morphologically Similar Species: A Case Study of Four Macaque Species in Nakai-Nam Theun National Protected Area, Laos.

    Science.gov (United States)

    Coudrat, Camille N Z; Nekaris, K Anne-Isola

    2013-01-30

    Species misidentification often occurs when dealing with co-existing and morphologically similar species such as macaques, making the study of their ecology challenging. To overcome this issue, we use reliable occurrence data from camera-trap images and transect survey data to model their respective ecological niche and potential distribution locally in Nakai-Nam Theun National Protected Area (NNT NPA), central-Eastern Laos. We investigate niche differentiation of morphologically similar species using four sympatric macaque species in NNT NPA, as our model species: rhesus Macaca mulatta (Taxonomic Serial Number, TSN 180099), Northern pig-tailed M. leonina (TSN not listed); Assamese M. assamensis (TSN 573018) and stump-tailed M. arctoides (TSN 573017). We examine the implications for their conservation. We obtained occurrence data of macaque species from systematic 2006-2011 camera-trapping surveys and 2011-2012 transect surveys and model their niche and potential distribution with MaxEnt software using 25 environmental and topographic variables. The respective suitable habitat predicted for each species reveals niche segregation between the four species with a gradual geographical distribution following an environmental gradient within the study area. Camera-trapping positioned at many locations can increase elusive-species records with a relatively reduced and more systematic sampling effort and provide reliable species occurrence data. These can be used for environmental niche modelling to study niche segregation of morphologically similar species in areas where their distribution remains uncertain. Examining unresolved species' niches and potential distributions can have crucial implications for future research and species' management and conservation even in the most remote regions and for the least-known species.

  18. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform reference tracking and disturbance rejection in an economically optimal way. The performance function is chosen as a mixture of the ℓ1-norm and a linear weighting to model the economics of the system. Simulations show a significant improvement in the performance of the MPC compared to the current…

  19. Topography Image Segmentation Based on Improved Chan-Vese Model

    Institute of Scientific and Technical Information of China (English)

    ZHAO Min-rong; ZHANG Xi-wen; JIANG Juan-na

    2013-01-01

    Aiming to solve the inefficient segmentation of the traditional C-V model for complex topography images and the time-consuming process caused by solving the level set function with partial differential equations, an improved Chan-Vese model is presented in this paper. While retaining the good performance of the traditional level set method in maintaining topological properties, and avoiding the numerical solution of partial differential equations, the same segmentation results can easily be obtained. Thus, a stable foundation for rapid segmentation based on image reconstruction identification is established.
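
    The data term at the core of the piecewise-constant Chan-Vese model can be sketched without any PDE machinery: alternately update the two region means c1, c2 and reassign each pixel to the better-fitting mean. This illustrates only the fidelity term (the length/regularization term and the paper's specific improvement are omitted), on a synthetic image.

```python
import numpy as np

# Piecewise-constant Chan-Vese data term without the level-set PDE:
# alternate between updating region means and reassigning pixels.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0                        # 24x24 bright square
img += 0.1 * rng.standard_normal(img.shape)    # additive noise

mask = img > img.mean()                        # crude initial partition
for _ in range(10):
    c1, c2 = img[mask].mean(), img[~mask].mean()
    mask = (img - c1) ** 2 < (img - c2) ** 2   # pixelwise data-term choice

# The recovered foreground should match the 24x24 = 576-pixel square.
print(int(mask.sum()))
```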

  20. Improving intellectual capital model using analytic network process

    Directory of Open Access Journals (Sweden)

    Ratapol Wudhikarn

    2013-06-01

    This study proposes a new approach to prioritizing a company's key indicators and related elements following a process model of intellectual capital (IC). The IC model is improved by the application of the analytic network process (ANP). The ANP provides weights and priorities for all of the focused key performance indicators (KPIs) serving the business concept. These weights can also be passed to other related elements, namely the key success factors (KSFs) and IC categories, in the process model of IC. The prioritized KPIs, KSFs and IC categories assist managers and decision-makers in focusing on the crucial elements that most affect the business concept.

  1. Does better rainfall interpolation improve hydrological model performance?

    Science.gov (United States)

    Bàrdossy, Andràs; Kilsby, Chris; Lewis, Elisabeth

    2017-04-01

    High spatial variability of precipitation is one of the main sources of uncertainty in rainfall/runoff modelling. Spatially distributed models require detailed space-time information on precipitation as input. In past decades, much effort was spent on improving precipitation interpolation using point observations. Different geostatistical methods like Ordinary Kriging, External Drift Kriging or Copula-based interpolation can be used to find the best estimators for unsampled locations. The purpose of this work is to investigate to what extent more sophisticated precipitation estimation methods can improve model performance. For this purpose the Wye catchment in Wales was selected. The physically-based spatially-distributed hydrological model SHETRAN is used to describe the hydrological processes in the catchment. Data from 31 rain gauges with hourly temporal resolution are available for a period of 6 years. In order to avoid the effect of model uncertainty, model parameters were not altered in this study. Instead, 100 random subsets consisting of 14 stations each were selected. For each of the configurations, precipitation was interpolated for each time step using nearest neighbour (NN), inverse distance (ID) and Ordinary Kriging (OK). The variogram was obtained using the temporal correlation of the time series measured at different locations. The interpolated data were used as input for the spatially distributed model. Performance was evaluated for daily mean discharges using the Nash-Sutcliffe coefficient, temporal correlations, flow volumes and flow duration curves. The results show that the simplest method (NN) and the sophisticated OK performed practically equally well, while ID performed worse. NN was often better for high flows. The reason for this is that NN does not reduce the variance, while OK and ID yield smooth precipitation fields.
The study points out the importance of precipitation variability and suggests the use of conditional spatial simulation as
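    The two simpler interpolators compared in the study can be sketched in a few lines; Ordinary Kriging additionally requires a fitted variogram and is omitted. Station coordinates, values, and the power parameter below are illustrative:

```python
import math

# Toy versions of two of the interpolators compared above: nearest
# neighbour keeps the variance of the point observations, while inverse
# distance weighting (IDW) produces a smoothed field. Station layout and
# the power parameter are illustrative choices.

def nearest_neighbour(x, stations, values):
    d = [math.dist(x, s) for s in stations]
    return values[d.index(min(d))]

def idw(x, stations, values, power=2.0):
    weights = []
    for i, s in enumerate(stations):
        d = math.dist(x, s)
        if d == 0.0:
            return values[i]                   # exact at a station
        weights.append(1.0 / d ** power)
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total
```

    At a gauge location IDW reproduces the observation exactly; between gauges it returns a weighted average, which is the smoothing behaviour noted above.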

  2. Model improvements for tritium transport in DEMO fuel cycle

    Energy Technology Data Exchange (ETDEWEB)

    Santucci, Alessia, E-mail: alessia.santucci@enea.it [Unità Tecnica Fusione – ENEA C. R. Frascati, Via E. Fermi 45, 00044 Frascati (Roma) (Italy); Tosti, Silvano [Unità Tecnica Fusione – ENEA C. R. Frascati, Via E. Fermi 45, 00044 Frascati (Roma) (Italy); Franza, Fabrizio [Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany)

    2015-10-15

    Highlights: • T inventory and permeation of DEMO blankets have been assessed under pulsed operation. • 1-D model for T transport has been developed for the HCLL DEMO blanket. • The 1-D model evaluated T partial pressure and T permeation rate radial profiles. - Abstract: DEMO operation requires a large amount of tritium, which is directly produced inside the reactor by means of Li-based breeders. During its production, recovering and purification, tritium comes in contact with large surfaces of hot metallic walls, therefore it can permeate through the blanket cooling structure, reach the steam generator and finally the environment. The development of dedicated simulation tools able to predict tritium losses and inventories is necessary to verify the accomplishment of the accepted tritium environmental releases as well as to guarantee a correct machine operation. In this work, the FUS-TPC code is improved by including the possibility to operate in pulsed regime: results in terms of tritium inventory and losses for three pulsed scenarios are shown. Moreover, the development of a 1-D model considering the radial profile of the tritium generation is described. By referring to the inboard segment on the equatorial axis of the helium-cooled lithium–lead (HCLL) blanket, preliminary results of the 1-D model are illustrated: tritium partial pressure in Li–Pb and tritium permeation in the cooling and stiffening plates by assuming several permeation reduction factor (PRF) values. Future improvements will consider the application of the model to all segments of different blanket concepts.

  3. A time-dependent model for improved biogalvanic tissue characterisation.

    Science.gov (United States)

    Chandler, J H; Culmer, P R; Jayne, D G; Neville, A

    2015-10-01

    Measurement of the passive electrical resistance of biological tissues through biogalvanic characterisation has been proposed as a simple means of distinguishing healthy from diseased tissue. This method has the potential to provide valuable real-time information when integrated into surgical tools. Characterised tissue resistance values have been shown to be particularly sensitive to the direction and rate of external load switching, bringing into question the stability and efficacy of the technique. These errors are due to transient variations observed in measurement data that are not accounted for in current electrical models. The presented research proposes the addition of a time-dependent element to the characterisation model to account for losses associated with this transient behaviour. The influence of switching rate has been examined, with the inclusion of transient elements improving the repeatability of the characterised tissue resistance. Application of this model to repeat biogalvanic measurements on a single ex vivo human colon tissue sample with healthy and cancerous (adenocarcinoma) regions showed a statistically significant difference (p < 0.05) between tissue types, while no significant difference (p > 0.05) between tissue types was found when measurements were subjected to the current model, suggesting that the proposed model may allow for improved biogalvanic tissue characterisation. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  4. Improved head-driven statistical models for natural language parsing

    Institute of Scientific and Technical Information of China (English)

    袁里驰

    2013-01-01

    Head-driven statistical models for natural language parsing are the most representative lexicalized syntactic parsing models, but they utilize only the semantic dependency between words and do not incorporate other semantic information such as semantic collocation and semantic category. Some improvements on this distinctive parser are presented. Firstly, "valency" is an essential semantic feature of words: once the valency of a word is determined, the collocation of the word is clear and the sentence structure can be directly derived. Thus, a syntactic parsing model combining valence structure with semantic dependency is proposed on the basis of head-driven statistical syntactic parsing models. Secondly, semantic role labeling (SRL) is very necessary for deep natural language processing, so an integrated parsing approach is proposed that incorporates semantic parsing into the syntactic parsing process. Experiments are conducted with the refined statistical parser. The results show that 87.12% precision and 85.04% recall are obtained, and the F-measure is improved by 5.68% compared with the head-driven parsing model introduced by Collins.

  5. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "sub-model" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
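    The blending logic of the sub-model method can be sketched as follows: a first-pass full-range prediction selects, or linearly blends, models trained on restricted composition ranges. The 1-D slope/intercept "models", composition ranges, and data below are invented for illustration; the actual ChemCam calibration applies PLS to full spectra:

```python
# Conceptual sketch of sub-model blending: a full-range model gives a
# first estimate, which selects (or linearly blends) predictions from
# models trained on restricted composition ranges. The simple linear
# "models" and the ranges here are illustrative assumptions.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

def predict(model, x):
    b, a = model
    return b * x + a

def blended_predict(x, full, low, high, low_range=(0, 50), high_range=(30, 100)):
    first = predict(full, x)                   # first-pass estimate
    lo_hi, hi_lo = high_range[0], low_range[1]  # overlap is [30, 50] here
    if first <= lo_hi:
        return predict(low, x)                 # clearly in the low range
    if first >= hi_lo:
        return predict(high, x)                # clearly in the high range
    t = (first - lo_hi) / (hi_lo - lo_hi)      # linear blend in the overlap
    return (1 - t) * predict(low, x) + t * predict(high, x)
```

    Blending in the overlap region avoids the discontinuities that a hard switch between sub-models would introduce.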

  6. Active surface model improvement by energy function optimization for 3D segmentation.

    Science.gov (United States)

    Azimifar, Zohreh; Mohaddesi, Mahsa

    2015-04-01

    This paper proposes an optimized and efficient active surface model obtained by improving the energy functions, searching method, neighborhood definition and resampling criterion. Extracting an accurate surface of a desired object from a number of 3D images using active surfaces and deformable models plays an important role in computer vision, especially in medical image processing. Different powerful segmentation algorithms have been suggested to address the limitations associated with model initialization, poor convergence to surface concavities and slow convergence rate. This paper proposes a method to improve one of the strongest recent segmentation algorithms, the Decoupled Active Surface (DAS) method. We consider the gradient of a wavelet edge-extracted image and local phase coherence as external energy to extract more information from images, and we use a curvature integral as internal energy to focus on high-curvature region extraction. Similarly, we use resampling of points and a line search for point selection to improve the accuracy of the algorithm. We further employ an estimation of the desired object as an initialization for the active surface model. A number of tests and experiments have been done, and the results show improvements in extracted surface accuracy and computational time compared with the best recent active surface models.

  7. Improved animal models for testing gene therapy for atherosclerosis.

    Science.gov (United States)

    Du, Liang; Zhang, Jingwan; De Meyer, Guido R Y; Flynn, Rowan; Dichek, David A

    2014-04-01

    Gene therapy delivered to the blood vessel wall could augment current therapies for atherosclerosis, including systemic drug therapy and stenting. However, identification of clinically useful vectors and effective therapeutic transgenes remains at the preclinical stage. Identification of effective vectors and transgenes would be accelerated by availability of animal models that allow practical and expeditious testing of vessel-wall-directed gene therapy. Such models would include humanlike lesions that develop rapidly in vessels that are amenable to efficient gene delivery. Moreover, because human atherosclerosis develops in normal vessels, gene therapy that prevents atherosclerosis is most logically tested in relatively normal arteries. Similarly, gene therapy that causes atherosclerosis regression requires gene delivery to an existing lesion. Here we report development of three new rabbit models for testing vessel-wall-directed gene therapy that either prevents or reverses atherosclerosis. Carotid artery intimal lesions in these new models develop within 2-7 months after initiation of a high-fat diet and are 20-80 times larger than lesions in a model we described previously. Individual models allow generation of lesions that are relatively rich in either macrophages or smooth muscle cells, permitting testing of gene therapy strategies targeted at either cell type. Two of the models include gene delivery to essentially normal arteries and will be useful for identifying strategies that prevent lesion development. The third model generates lesions rapidly in vector-naïve animals and can be used for testing gene therapy that promotes lesion regression. These models are optimized for testing helper-dependent adenovirus (HDAd)-mediated gene therapy; however, they could be easily adapted for testing of other vectors or of different types of molecular therapies, delivered directly to the blood vessel wall. Our data also supports the promise of HDAd to deliver long

  8. Towards an Improved Performance Measure for Language Models

    CERN Document Server

    Ueberla, J P

    1997-01-01

    In this paper a first attempt at deriving an improved performance measure for language models, the probability ratio measure (PRM), is described. In a proof-of-concept experiment, it is shown that PRM correlates better with recognition accuracy and can lead to better recognition results when used as the optimisation criterion of a clustering algorithm. In spite of the approximations and limitations of this preliminary work, the results are very encouraging and should justify more work along the same lines.

  9. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    Science.gov (United States)

    2014-04-30

    Distribution is unlimited. Abstract: Develop modeling and simulation tools, using depth of penetration (DOP) as the metric, for 7.62 APM2 threats; evaluate SiC tile on aluminum with material properties from the literature; develop seam designs to improve performance and demonstrate with DOP experiments. Subject terms: Al 5083, SiC, DOP experiments, AutoDyn.

  10. Can fire atlas data improve species distribution model projections?

    Science.gov (United States)

    Crimmins, Shawn M; Dobrowski, Solomon Z; Mynsberge, Alison R; Safford, Hugh D

    2014-07-01

    Correlative species distribution models (SDMs) are widely used in studies of climate change impacts, yet are often criticized for failing to incorporate disturbance processes that can influence species distributions. Here we use two temporally independent data sets of vascular plant distributions, climate data, and fire atlas data to examine the influence of disturbance history on SDM projection accuracy through time in the mountain ranges of California, USA. We used hierarchical partitioning to examine the influence of fire occurrence on the distribution of 144 vascular plant species and built a suite of SDMs to examine how the inclusion of fire-related predictors (fire occurrence and departure from historical fire return intervals) affects SDM projection accuracy. Fire occurrence provided the least explanatory power among predictor variables for predicting species' distributions, but provided improved explanatory power for species whose regeneration is tied closely to fire. A measure of the departure from historic fire return interval had greater explanatory power for calibrating modern SDMs than fire occurrence. This variable did not improve internal model accuracy for most species, although it did provide marginal improvement to models for species adapted to high-frequency fire regimes. Fire occurrence and fire return interval departure were strongly related to the climatic covariates used in SDM development, suggesting that improvements in model accuracy may not be expected due to limited additional explanatory power. Our results suggest that the inclusion of coarse-scale measures of disturbance in SDMs may not be necessary to predict species distributions under climate change, particularly for disturbance processes that are largely mediated by climate.

  11. Improved quark coalescence for a multi-phase transport model

    Science.gov (United States)

    He, Yuncun; Lin, Zi-Wei

    2017-07-01

    The string melting version of a multi-phase transport model is often applied to high-energy heavy-ion collisions, since the dense matter thus formed is expected to be in parton degrees of freedom. In this work we improve its quark coalescence component, which describes the hadronization of the partonic matter to a hadronic matter. We removed the previous constraint that forced the numbers of mesons, baryons, and antibaryons in an event to be separately conserved through the quark coalescence process. A quark now can form either a meson or a baryon depending on the distance to its coalescence partner(s). We then compare results from the improved model with the experimental data on hadron dN/dy, pT spectra, and v2 in heavy-ion collisions from √sNN = 62.4 GeV to 5.02 TeV. We show that, besides being able to describe these observables for low-pT pions and kaons, the improved model also better describes the low-pT baryon observables in general, especially the baryon pT spectra and antibaryon-to-baryon ratios for multistrange baryons.
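    The new coalescence criterion, in which a quark's hadron type follows from the distance to its potential partners rather than from fixed hadron counts, can be caricatured in one dimension. Real AMPT uses full phase-space distances and flavor rules; this sketch is illustrative only:

```python
# Toy version of the distance-based coalescence choice described above:
# a quark forms a meson if its nearest antiquark is closer than its
# nearest pair of other quarks (a baryon candidate). Positions are 1-D
# and flavor/color rules are ignored; everything here is illustrative.

def coalesce(quark, other_quarks, antiquarks):
    d_meson = min(abs(quark - a) for a in antiquarks)
    # baryon "distance": average separation to the two closest quarks
    ds = sorted(abs(quark - q) for q in other_quarks)
    d_baryon = (ds[0] + ds[1]) / 2
    return "meson" if d_meson < d_baryon else "baryon"
```

    With this rule the meson/baryon split is driven by local phase-space density rather than being fixed per event, which is the constraint the improved model removes.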

  12. Improved Gaussian Mixture Models for Adaptive Foreground Segmentation

    DEFF Research Database (Denmark)

    Katsarakis, Nikolaos; Pnevmatikakis, Aristodemos; Tan, Zheng-Hua

    2016-01-01

    Adaptive foreground segmentation is traditionally performed using Stauffer & Grimson’s algorithm that models every pixel of the frame by a mixture of Gaussian distributions with continuously adapted parameters. In this paper we provide an enhancement of the algorithm by adding two important dynamic elements to the baseline algorithm: The learning rate can change across space and time, while the Gaussian distributions can be merged together if they become similar due to their adaptation process. We quantify the importance of our enhancements and the effect of parameter tuning using an annotated...
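    The second enhancement, merging Gaussian components that have drifted close to each other, can be sketched as a moment-preserving merge; the similarity test and threshold below are illustrative assumptions, not the paper's exact rule:

```python
# Sketch of merging two Gaussian components of a pixel's mixture when
# their means are close relative to their spreads. The merge preserves
# the first two moments of the combined component; the similarity
# threshold k is an illustrative assumption.

def merge_if_similar(g1, g2, k=1.0):
    """Each Gaussian is (weight, mean, var). Returns one or two Gaussians."""
    w1, m1, v1 = g1
    w2, m2, v2 = g2
    if abs(m1 - m2) > k * (v1 ** 0.5 + v2 ** 0.5):
        return [g1, g2]                        # not similar: keep both
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w                # weight-averaged mean
    v = (w1 * (v1 + m1 ** 2) + w2 * (v2 + m2 ** 2)) / w - m ** 2
    return [(w, m, v)]
```

    Merging near-duplicate components frees mixture slots for genuinely new background modes, which is the benefit the enhancement targets.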

  13. Displacement Monitoring Precision of a Similar Material Model with a Close-range Photogrammetry System (近景摄影测量系统在相似模型位移监测中精度分析)

    Institute of Scientific and Technical Information of China (English)

    朱庆伟; 信泰琦; 孙学阳

    2016-01-01

    In order to solve the problems of heavy workload and low automation of traditional surveying methods in similar-material model experiments, a close-range photogrammetry system was used for displacement monitoring in such experiments. Factors that influence the calculation precision of close-range photogrammetry were analyzed, including the lighting environment, camera position, surveying distance and point-distribution scheme. The best data-collection scheme was determined through experiments, further improving the calculation precision of the system; the approach is an effective method for displacement monitoring in similar-material model experiments.

  14. Both food restriction and high-fat diet during gestation induce low birth weight and altered physical activity in adult rat offspring: the "Similarities in the Inequalities" model.

    Directory of Open Access Journals (Sweden)

    Fábio da Silva Cunha

    Full Text Available We have previously described a theoretical model in humans, called "Similarities in the Inequalities", in which extremely unequal social backgrounds coexist in a complex scenario promoting similar health outcomes in adulthood. Based on the potential applicability of this model, and to further explore the "similarities in the inequalities" phenomenon, this study used a rat model to investigate the effect of different nutritional backgrounds during gestation on the willingness of offspring to engage in physical activity in adulthood. Sprague-Dawley rats were time mated and randomly allocated to one of three dietary groups: Control (Adlib), receiving standard laboratory chow ad libitum; 50% food restricted (FR), receiving 50% of the ad libitum-fed dam's habitual intake; or high-fat diet (HF), receiving a diet containing 23% fat. The diets were provided from day 10 of pregnancy until weaning. Within 24 hours of birth, pups were cross-fostered to other dams, forming the following groups: Adlib_Adlib, FR_Adlib, and HF_Adlib. Maternal chow consumption and weight gain, and offspring birth weight, growth, physical activity (one week of free exercise in running wheels), abdominal adiposity and biochemical data were evaluated. Western blot was performed to assess D2 receptors in the dorsal striatum. The "similarities in the inequalities" effect was observed on birth weight (both FR and HF groups were smaller than the Adlib group at birth) and physical activity (both FR_Adlib and HF_Adlib groups were different from the Adlib_Adlib group, with less active males and more active females). Our findings contribute to the view that health inequalities in fetal life may program the health outcomes manifested in offspring adult life (such as altered physical activity and metabolic parameters), probably through different biological mechanisms.

  15. Visualizing multidimensional data similarities : Improvements and applications

    NARCIS (Netherlands)

    Rodrigues Oliveira da Silva, Renato

    2016-01-01

    Multidimensional data is increasingly more prominent and important in many application domains. Such data typically consist of a large set of elements, each of which described by several measurements (dimensions). During the design of techniques and tools to process this data, a key component is to

  16. Tests of the improved Weiland ion temperature gradient transport model

    Energy Technology Data Exchange (ETDEWEB)

    Kinsey, J.E.; Bateman, G.; Kritz, A.H. [Lehigh Univ., Bethlehem, PA (United States)] [and others]

    1996-12-31

    The Weiland theoretically derived transport model for ion temperature gradient and trapped electron modes has been improved to include the effects of parallel ion motion, finite beta, and collisionality. The model also includes the effects of impurities, fast ions, unequal ion and electron temperatures, and finite Larmor radius. This new model has been implemented in our time-dependent transport code and is used in conjunction with pressure-driven modes and neoclassical theory to predict the radial particle and thermal transport in tokamak plasmas. Simulations of TFTR, DIII-D, and JET L-mode plasmas have been conducted to test how the new effects change the predicted density and temperature profiles. Comparisons are made with results obtained using the previous version of the model which was successful in reproducing experimental data from a wide variety of tokamak plasmas. Specifically, the older model has been benchmarked against over 50 discharges from at least 7 different tokamaks including L-mode scans in current, heating power, density, and dimensionless scans in normalized gyro-radius, collisionality, and beta. We have also investigated the non-diffusive elements included in the Weiland model, particularly the particle pinch in order to characterize its behavior. This is partly motivated by recent simulations of ITER. In those simulations, the older Weiland model predicted a particle pinch and ignition was more easily obtained.

  17. [Improvement of genetics teaching using literature-based learning model].

    Science.gov (United States)

    Liang, Liang; Shiqian, Liang; Hongyan, Qin; Yong, Ji; Hua, Han

    2015-06-01

    Genetics is one of the most important courses for undergraduate students majoring in life science. In recent years, new knowledge and technologies have been continually updated with deeper understanding of life science. However, the teaching model of genetics is still based on theoretical instruction, which makes the abstract principles hard for students to understand and directly affects teaching effectiveness. Thus, exploring a new teaching model is necessary. We have carried out a new teaching model, literature-based learning, in the course on Microbial Genetics for undergraduate students majoring in biotechnology since 2010. Here we comprehensively analyze the implementation and value of this model, including pre-course knowledge, how to choose professional literature, how to organize the teaching process and the significance of developing this new teaching model for students and teachers. Our literature-based learning model combines the "cutting-edge" with the "classic" and makes book knowledge easy to understand, which improves students' learning outcomes, stimulates their interests, expands their perspectives and develops their abilities. This practice provides novel insight into exploring new teaching models for genetics and cultivating medical talents capable of doing both basic and clinical research in the "precision medicine" era.

  18. Hydrogeological modeling for improving groundwater monitoring network and strategies

    Science.gov (United States)

    Thakur, Jay Krishna

    2016-09-01

    The research aimed to investigate a new approach to spatiotemporal groundwater monitoring network optimization using hydrogeological modeling to improve monitoring strategies. Unmonitored concentrations at different potential monitoring locations were incorporated into the groundwater monitoring optimization method. The proposed method was applied in the contaminated megasite Bitterfeld/Wolfen, Germany. Based on an existing 3-D geological model, 3-D groundwater flow was obtained from flow velocity simulation using initial and boundary conditions. The 3-D groundwater transport model was used to simulate transport of α-HCH with an initial ideal concentration of 100 mg/L injected at various hydrogeological layers in the model. Particle tracking for the contaminant and groundwater flow velocity realizations were made. The spatial optimization result suggested that 30 out of 462 wells in the Quaternary aquifer (6.49%) and 14 out of 357 wells in the Tertiary aquifer (3.92%) were redundant. With a gradual increase in the width of the particle-track path line from 0 to 100 m, the number of redundant wells increased remarkably in both aquifers. The results of temporal optimization showed different sampling frequencies for the monitoring wells. The groundwater and contaminant flow directions resulting from the particle tracks obtained from hydrogeological modeling were verified by variogram modeling of the α-HCH data from 2003 to 2009. Groundwater monitoring strategies can be substantially improved by removing the existing spatiotemporal redundancy as well as by incorporating unmonitored locations and sampling at the recommended time intervals. However, this model-based method is only recommended for use in combination with site-specific expert knowledge.
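    The spatial-redundancy test can be illustrated as a buffer query around a particle-track path line: widening the buffer flags more wells as redundant, as reported above. Coordinates and the buffer width below are invented; the study derived its path lines from the site's calibrated flow model:

```python
import math

# Illustrative version of the spatial-redundancy test described above:
# a monitoring well is flagged redundant if it lies within a buffer
# width of the path line traced by particle tracking. Coordinates and
# buffer values are made up for the sketch.

def point_to_segment(p, a, b):
    """Distance from point p to the segment a-b (all 2-D tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.dist(p, a)                 # degenerate segment
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) /
                          (dx * dx + dy * dy)))
    return math.dist(p, (ax + t * dx, ay + t * dy))

def redundant_wells(wells, path, buffer_width):
    """Wells closer than buffer_width to any path segment are redundant."""
    out = []
    for w in wells:
        d = min(point_to_segment(w, path[i], path[i + 1])
                for i in range(len(path) - 1))
        out.append(d <= buffer_width)
    return out
```

    Increasing `buffer_width` monotonically grows the redundant set, which reproduces the trend reported for the 0-100 m widening of the path line.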

  19. An improved experimental model for peripheral neuropathy in rats

    Directory of Open Access Journals (Sweden)

    Q.M. Dias

    Full Text Available A modification of the Bennett and Xie chronic constriction injury model of peripheral painful neuropathy was developed in rats. Under tribromoethanol anesthesia, a single ligature with 100% cotton glace thread was placed around the right sciatic nerve proximal to its trifurcation. The change in the hind paw reflex threshold after mechanical stimulation observed with this modified model was compared to the change in threshold observed in rats subjected to the Bennett and Xie or the Kim and Chung spinal ligation models. The mechanical threshold was measured with an automated electronic von Frey apparatus 0, 2, 7, and 14 days after surgery, and this threshold was compared to that measured in sham rats. All injury models produced significant hyperalgesia in the operated hind limb. The modified model produced mean ± SD thresholds in g (19.98 ± 3.08, 14.98 ± 1.86, and 13.80 ± 1.00 at 2, 7, and 14 days after surgery, respectively) similar to those obtained with the spinal ligation model (20.03 ± 1.99, 13.46 ± 2.55, and 12.46 ± 2.38 at 2, 7, and 14 days after surgery, respectively), but less variable when compared to the Bennett and Xie model (21.20 ± 8.06, 18.61 ± 7.69, and 18.76 ± 6.46 at 2, 7, and 14 days after surgery, respectively). The modified method required less surgical skill than the spinal nerve ligation model.

  20. An improved experimental model for peripheral neuropathy in rats

    Energy Technology Data Exchange (ETDEWEB)

    Dias, Q.M.; Rossaneis, A.C.; Fais, R.S.; Prado, W.A. [Departamento de Farmacologia, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, SP (Brazil)

    2013-03-15

    A modification of the Bennett and Xie chronic constriction injury model of peripheral painful neuropathy was developed in rats. Under tribromoethanol anesthesia, a single ligature with 100% cotton glace thread was placed around the right sciatic nerve proximal to its trifurcation. The change in the hind paw reflex threshold after mechanical stimulation observed with this modified model was compared to the change in threshold observed in rats subjected to the Bennett and Xie or the Kim and Chung spinal ligation models. The mechanical threshold was measured with an automated electronic von Frey apparatus 0, 2, 7, and 14 days after surgery, and this threshold was compared to that measured in sham rats. All injury models produced significant hyperalgesia in the operated hind limb. The modified model produced mean ± SD thresholds in g (19.98 ± 3.08, 14.98 ± 1.86, and 13.80 ± 1.00 at 2, 7, and 14 days after surgery, respectively) similar to those obtained with the spinal ligation model (20.03 ± 1.99, 13.46 ± 2.55, and 12.46 ± 2.38 at 2, 7, and 14 days after surgery, respectively), but less variable when compared to the Bennett and Xie model (21.20 ± 8.06, 18.61 ± 7.69, and 18.76 ± 6.46 at 2, 7, and 14 days after surgery, respectively). The modified method required less surgical skill than the spinal nerve ligation model.

  1. Similar evolution in delta 13CH4 and model-predicted relative rate of aceticlastic methanogenesis during mesophilic methanization of municipal solid wastes.

    Science.gov (United States)

    Vavilin, V A; Qu, X; Qu, X; Mazéas, L; Lemunier, M; Duquennoi, C; Mouchel, J M; He, P; Bouchez, T

    2009-01-01

    Similar evolution was obtained for the stable carbon isotope signature δ13CH4 and the model-predicted relative rate of aceticlastic methanogenesis during mesophilic methanization of municipal solid wastes. In batch incubations, the relative importance of aceticlastic and hydrogenotrophic methanogenesis changes over time. Initially, hydrogenotrophic methanogenesis dominated, but an increasing population of Methanosarcina sp. enhanced aceticlastic methanogenesis. Later, hydrogenotrophic methanogenesis intensified again. A mathematical model was developed to evaluate the relative contributions of the hydrogenotrophic and aceticlastic pathways of methane generation during mesophilic batch anaerobic biodegradation of the French and the Chinese Municipal Solid Wastes (FMSW and CMSW). Taking into account molecular biology analyses reported earlier, three groups of methanogens were considered in the model: strictly hydrogenotrophic methanogens, strictly aceticlastic methanogens (Methanosaeta sp.), and Methanosarcina sp., which consume both acetate and H2/H2CO3. The total organic and inorganic carbon concentrations, methane production volume, and methane and carbon dioxide partial pressures were used for model calibration and validation. The evolution of the methane isotopic composition (δ13CH4) during the incubations was used to independently validate the model results. The model demonstrated that only the putrescible fraction of the solid waste was totally converted to methane.

  2. Improvement of Basic Fluid Dynamics Models for the COMPASS Code

    Science.gov (United States)

    Zhang, Shuai; Morita, Koji; Shirakawa, Noriyuki; Yamamoto, Yuichi

    The COMPASS code is a new next-generation safety analysis code, based on the moving particle semi-implicit (MPS) method, that provides local information for various key phenomena in core disruptive accidents of sodium-cooled fast reactors. In this study, the basic fluid dynamics models of the COMPASS code were improved and verified with fundamental verification calculations. A fully implicit pressure solution algorithm was introduced to improve the numerical stability of MPS simulations. With a newly developed free surface model, the numerical difficulty caused by poor pressure solutions is overcome by including free surface particles in the pressure Poisson equation. In addition, the applicability of the MPS method to interactions between fluid and multiple solid bodies was investigated by comparison with dam-break experiments with solid balls. It was found that the PISO algorithm and the free surface model make simulations with the passively moving solid model numerically stable. The characteristic behavior of the solid balls was successfully reproduced by the present numerical simulations.

  3. Improved Modeling in a Matlab-Based Navigation System

    Science.gov (United States)

    Deutschmann, Julie; Bar-Itzhack, Itzhack; Harman, Rick; Larimore, Wallace E.

    1999-01-01

    An innovative approach to autonomous navigation is available for low earth orbit satellites. The system is developed in Matlab and utilizes an Extended Kalman Filter (EKF) to estimate the attitude and trajectory based on spacecraft magnetometer and gyro data. Preliminary tests of the system with real spacecraft data from the Rossi X-Ray Timing Explorer Satellite (RXTE) indicate the existence of unmodeled errors in the magnetometer data. Incorporating into the EKF a statistical model that describes the colored component of the effective measurement of the magnetic field vector could improve the accuracy of the trajectory and attitude estimates and also shorten the convergence time. This model is identified as a first-order Markov process. With the addition of the model, the EKF attempts to identify the non-white components of the noise, allowing for more accurate estimation of the original state vector, i.e. the orbital elements and the attitude. Working in Matlab allows for easy incorporation of new models into the EKF, and the resulting navigation system is generic and can easily be applied to future missions, providing an alternative for onboard or ground-based navigation.
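    The first-order Markov (Gauss-Markov) noise model mentioned above has a compact discrete-time form. The sketch below simulates such a process; in the filter itself this bias state would be appended to the EKF state vector so the colored magnetometer error can be estimated alongside the orbital elements and attitude. The time constant and standard deviation are illustrative, not mission values.

```python
import math
import random

def gauss_markov_step(b, dt, tau, sigma, rng):
    """One step of a first-order Gauss-Markov process:
    b[k+1] = phi * b[k] + w[k],  phi = exp(-dt / tau).
    The driving-noise std is chosen so the steady-state std stays sigma."""
    phi = math.exp(-dt / tau)
    q = sigma * math.sqrt(1.0 - phi * phi)
    return phi * b + rng.gauss(0.0, q)

# Simulate 1000 s of a colored bias (tau = 100 s, sigma = 5 units).
rng = random.Random(0)
b = 0.0
history = []
for _ in range(1000):
    b = gauss_markov_step(b, dt=1.0, tau=100.0, sigma=5.0, rng=rng)
    history.append(b)
print(round(max(history), 2))
```

    With sigma = 0 the process reduces to pure exponential decay, which is a convenient sanity check on phi.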

  4. Managing health care decisions and improvement through simulation modeling.

    Science.gov (United States)

    Forsberg, Helena Hvitfeldt; Aronsson, Håkan; Keller, Christina; Lindblad, Staffan

    2011-01-01

    Simulation modeling is a way to test changes in a computerized environment to give ideas for improvements before implementation. This article reviews research literature on simulation modeling as support for health care decision making. The aim is to investigate the experience and potential value of such decision support and quality of articles retrieved. A literature search was conducted, and the selection criteria yielded 59 articles derived from diverse applications and methods. Most met the stated research-quality criteria. This review identified how simulation can facilitate decision making and that it may induce learning. Furthermore, simulation offers immediate feedback about proposed changes, allows analysis of scenarios, and promotes communication on building a shared system view and understanding of how a complex system works. However, only 14 of the 59 articles reported on implementation experiences, including how decision making was supported. On the basis of these articles, we proposed steps essential for the success of simulation projects, not just in the computer, but also in clinical reality. We also presented a novel concept combining simulation modeling with the established plan-do-study-act cycle for improvement. Future scientific inquiries concerning implementation, impact, and the value for health care management are needed to realize the full potential of simulation modeling.

  5. Improving NASA's Multiscale Modeling Framework for Tropical Cyclone Climate Study

    Science.gov (United States)

    Shen, Bo-Wen; Nelson, Bron; Cheung, Samson; Tao, Wei-Kuo

    2013-01-01

    One of the current challenges in tropical cyclone (TC) research is how to improve our understanding of TC interannual variability and the impact of climate change on TCs. Recent advances in global modeling, visualization, and supercomputing technologies at NASA show potential for such studies. In this article, the authors discuss recent scalability improvement to the multiscale modeling framework (MMF) that makes it feasible to perform long-term TC-resolving simulations. The MMF consists of the finite-volume general circulation model (fvGCM), supplemented by a copy of the Goddard cumulus ensemble model (GCE) at each of the fvGCM grid points, giving 13,104 GCE copies. The original fvGCM implementation has a 1D data decomposition; the revised MMF implementation retains the 1D decomposition for most of the code, but uses a 2D decomposition for the massive copies of GCEs. Because the vast majority of computation time in the MMF is spent computing the GCEs, this approach can achieve excellent speedup without incurring the cost of modifying the entire code. Intelligent process mapping allows differing numbers of processes to be assigned to each domain for load balancing. The revised parallel implementation shows highly promising scalability, obtaining a nearly 80-fold speedup by increasing the number of cores from 30 to 3,335.
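    The quoted scaling can be sanity-checked with simple arithmetic: an ~80-fold speedup when the core count grows from 30 to 3,335 corresponds to roughly 72% parallel efficiency relative to ideal linear scaling.

```python
# Parallel efficiency = achieved speedup / ideal speedup, where the ideal
# speedup equals the ratio of core counts.

def parallel_efficiency(speedup, cores_before, cores_after):
    return speedup / (cores_after / cores_before)

eff = parallel_efficiency(80.0, 30, 3335)
print(f"{eff:.1%}")  # 72.0%
```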

  6. Model-data integration to improve the LPJmL dynamic global vegetation model

    Science.gov (United States)

    Forkel, Matthias; Thonicke, Kirsten; Schaphoff, Sibyll; Thurner, Martin; von Bloh, Werner; Dorigo, Wouter; Carvalhais, Nuno

    2017-04-01

    Dynamic global vegetation models show large uncertainties regarding the development of the land carbon balance under future climate change conditions. This uncertainty is partly caused by differences in how vegetation carbon turnover is represented in global vegetation models. Model-data integration approaches might help to systematically assess and improve model performance and thus to potentially reduce the uncertainty in terrestrial vegetation responses under future climate change. Here we present several applications of model-data integration with the LPJmL (Lund-Potsdam-Jena managed Lands) dynamic global vegetation model to systematically improve the representation of processes or to estimate model parameters. In a first application, we used global satellite-derived datasets of FAPAR (fraction of absorbed photosynthetically active radiation), albedo and gross primary production to estimate phenology- and productivity-related model parameters using a genetic optimization algorithm. Thereby we identified major limitations of the phenology module and implemented an alternative empirical phenology model. The new phenology module and optimized model parameters resulted in a better performance of LPJmL in representing global spatial patterns of biomass, tree cover, and the temporal dynamics of atmospheric CO2. In a second application, we therefore additionally used global datasets of biomass and land cover to estimate model parameters that control vegetation establishment and mortality. The results demonstrate the ability to improve simulations of vegetation dynamics but also highlight the need to improve the representation of mortality processes in dynamic global vegetation models. In a third application, we used multiple site-level observations of ecosystem carbon and water exchange, biomass and soil organic carbon to jointly estimate various model parameters that control ecosystem dynamics. This exercise demonstrates the strong role of individual data streams on the

  7. Improving Saliency Models by Predicting Human Fixation Patches

    KAUST Repository

    Dubey, Rachit

    2015-04-16

    There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e. high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84 % and eliminates non-fixation patches with an accuracy of 84 % demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus, reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.

  8. Life course models: improving interpretation by consideration of total effects.

    Science.gov (United States)

    Green, Michael J; Popham, Frank

    2016-12-28

    Life course epidemiology has used models of accumulation and critical or sensitive periods to examine the importance of exposure timing in disease aetiology. These models are usually used to describe the direct effects of exposures over the life course. In comparison with consideration of direct effects only, we show how consideration of total effects improves interpretation of these models, giving clearer notions of when it will be most effective to intervene. We show how life course variation in the total effects depends on the magnitude of the direct effects and the stability of the exposure. We discuss interpretation in terms of total, direct and indirect effects and highlight the causal assumptions required for conclusions as to the most effective timing of interventions.
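    In a linear path model the distinction between direct and total effects is simple arithmetic: the total effect of an early exposure equals its direct effect plus the indirect effect routed through later exposure (exposure stability times the later exposure's effect). The coefficients below are hypothetical, purely to illustrate the decomposition.

```python
def total_effect(direct, stability, later_effect):
    """Total effect of an early exposure on the outcome in a linear
    path model: direct effect + indirect effect via the later exposure."""
    return direct + stability * later_effect

# Direct effect 0.2; exposure stability 0.7 (early exposure predicts later
# exposure); later exposure's effect on the outcome 0.5.
print(round(total_effect(0.2, 0.7, 0.5), 2))  # 0.55
```

    The same arithmetic shows why a highly stable exposure can make early intervention attractive even when the early direct effect is small.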

  9. An Improved Path-Based Semantic Similarity Algorithm

    Institute of Scientific and Technical Information of China (English)

    曾诚; 韩光辉; 李兵; 朱子龙

    2011-01-01

    In similarity computation between concepts, path-based semantic similarity algorithms play an important role. This paper first analyses several commonly used path-based similarity computation algorithms and then proposes an improved algorithm that overcomes two defects of the Wu and Palmer algorithm. Overall, the improvement is intuitive and easy to implement, and its time complexity is similar to that of the Wu and Palmer algorithm.
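    For reference, the baseline Wu & Palmer measure scores two concepts by the depth of their least common subsumer (LCS) in an is-a taxonomy: sim(c1, c2) = 2·depth(LCS) / (depth(c1) + depth(c2)). The sketch below implements this baseline over a tiny made-up taxonomy; the improved variant itself is not specified in the abstract, so it is not reproduced here.

```python
# Toy is-a taxonomy (child -> parent); purely illustrative.
PARENT = {
    "dog": "canine", "wolf": "canine", "canine": "mammal",
    "cat": "feline", "feline": "mammal", "mammal": "animal",
}

def ancestors(c):
    """Chain from c up to the root, inclusive."""
    chain = [c]
    while c in PARENT:
        c = PARENT[c]
        chain.append(c)
    return chain

def depth(c):
    return len(ancestors(c))  # the root has depth 1

def wu_palmer(c1, c2):
    anc2 = set(ancestors(c2))
    lcs = next(a for a in ancestors(c1) if a in anc2)  # least common subsumer
    return 2.0 * depth(lcs) / (depth(c1) + depth(c2))

print(wu_palmer("dog", "wolf"))  # 0.75 (LCS is "canine")
print(wu_palmer("dog", "cat"))   # 0.5  (LCS is "mammal")
```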

  10. Thermal Modeling Method Improvements for SAGE III on ISS

    Science.gov (United States)

    Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; McLeod, Shawn

    2015-01-01

    The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle. A detailed thermal model of the SAGE III payload, which consists of multiple subsystems, has been developed in Thermal Desktop (TD). Many innovative analysis methods have been used in developing this model; these will be described in the paper. This paper builds on a paper presented at TFAWS 2013, which described some of the initial developments of efficient methods for SAGE III. The current paper describes additional improvements that have been made since that time. To expedite the correlation of the model to thermal vacuum (TVAC) testing, the chambers and GSE for both TVAC chambers at Langley used to test the payload were incorporated within the thermal model. This allowed the runs of TVAC predictions and correlations to be run within the flight model, thus eliminating the need for separate models for TVAC. In one TVAC test, radiant lamps were used which necessitated shooting rays from the lamps, and running in both solar and IR wavebands. A new Dragon model was incorporated which entailed a change in orientation; that change was made using an assembly, so that any potential additional new Dragon orbits could be added in the future without modification of the model. The Earth orbit parameters such as albedo and Earth infrared flux were incorporated as time-varying values that change over the course of the orbit; despite being required in one of the ISS documents, this had not been done before by any previous payload. All parameters such as initial temperature, heater voltage, and location of the payload are defined based on the case definition. 
For one component, testing was performed in both air and vacuum; incorporating the air convection in a submodel that was

  11. Comparison of Modeling and Experimental Approaches for Improved Modeling of Filtration in Granular and Consolidated Media

    Science.gov (United States)

    Mirabolghasemi, M.; Prodanovic, M.; DiCarlo, D. A.

    2014-12-01

    Filtration is relevant to many disciplines, from colloid transport in environmental engineering to formation damage in petroleum engineering. In this study we compare the results of novel numerical modeling of the filtration phenomenon at the pore scale with complementary experimental observations at the laboratory scale, and discuss how the comparison can be used to improve macroscale filtration models for different porous media. The water suspension contained glass beads of 200 micron diameter and flowed through a packing of 1 mm diameter glass beads, so the main filtration mechanisms were straining and jamming of particles. The numerical model simulates the flow of suspension through a realistic 3D structure of an imaged, disordered sphere pack, which acts as the filter medium. Particle capture through size exclusion and jamming is modeled via a coupled Discrete Element Method (DEM) and Computational Fluid Dynamics (CFD) approach. The coupled CFD-DEM approach is capable of modeling the majority of particle-particle, particle-wall, and particle-fluid interactions. Note that most traditional approaches require spherical particles both in the suspension and in the filtration medium. We adapted the interface between the pore space and the spherical grains to be represented as a triangulated surface, which allows extensions to any imaged media. The numerical and experimental results show that the filtration coefficient of the sphere pack is a function of the flow rate and concentration of the suspension, even for constant total particle flow rate. An increase in the suspension flow rate results in a decrease in the filtration coefficient, which suggests that the hydrodynamic drag force plays the key role in hindering particle capture in random sphere packs. Further, similar simulations of suspension flow through a sandstone sample, which has a tighter pore space, show that the filtration coefficient remains almost constant at different suspension flow rates. 
This
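    The filtration coefficient referred to above is conventionally defined through the exponential decay of suspended-particle concentration with depth in the filter, C(L) = C0·exp(-λL). The snippet below applies this textbook deep-bed filtration relation; it is a generic definition, not code from the study, and the numbers are illustrative.

```python
import math

def filtration_coefficient(c_in, c_out, depth_m):
    """Deep-bed filtration coefficient lambda [1/m], from the inlet and
    outlet suspended-particle concentrations across a filter of given depth."""
    return -math.log(c_out / c_in) / depth_m

# E.g. 60% of the suspended particles captured across a 0.1 m sphere pack:
print(round(filtration_coefficient(c_in=1.0, c_out=0.4, depth_m=0.1), 2))  # 9.16
```

    A flow-rate-dependent λ, as reported for the sphere pack, means this coefficient cannot be treated as a constant property of the medium alone.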

  12. Educators and students prefer traditional clinical education to a peer-assisted learning model, despite similar student performance outcomes: a randomised trial

    Directory of Open Access Journals (Sweden)

    Samantha Sevenhuysen

    2014-12-01

    Full Text Available Question: What is the efficacy and acceptability of a peer-assisted learning model compared with a traditional model for paired students in physiotherapy clinical education? Design: Prospective, assessor-blinded, randomised crossover trial. Participants: Twenty-four physiotherapy students in the third year of a 4-year undergraduate degree. Intervention: Participants each completed 5 weeks of clinical placement, utilising a peer-assisted learning model (a standardised series of learning activities undertaken by student pairs and educators to facilitate peer interaction using guided strategies) and a traditional model (usual clinical supervision and learning activities led by clinical educators supervising pairs of students). Outcome measures: The primary outcome measure was student performance, rated on the Assessment of Physiotherapy Practice by a blinded assessor, the supervising clinical educator and by the student in self-assessment. Secondary outcome measures were satisfaction with the teaching and learning experience measured via survey, and statistics on services delivered. Results: There were no significant between-group differences in Assessment of Physiotherapy Practice scores as rated by the blinded assessor (p = 0.43), the supervising clinical educator (p = 0.94) or the students (p = 0.99). In peer-assisted learning, clinical educators had an extra 6 minutes/day available for non-student-related quality activities (95% CI 1 to 10) and students received an additional 0.33 entries/day of written feedback from their educator (95% CI 0.06 to 0.61). Clinical educator satisfaction and student satisfaction were higher with the traditional model. Conclusion: The peer-assisted learning model trialled in the present study produced similar student performance outcomes when compared with a traditional approach. Peer-assisted learning provided some benefits to educator workload and student feedback, but both educators and students were more

  13. Crop Model Improvement Reduces the Uncertainty of the Response to Temperature of Multi-Model Ensembles

    Science.gov (United States)

    Maiorano, Andrea; Martre, Pierre; Asseng, Senthold; Ewert, Frank; Mueller, Christoph; Roetter, Reimund P.; Ruane, Alex C.; Semenov, Mikhail A.; Wallach, Daniel; Wang, Enli

    2016-01-01

    To improve climate change impact estimates and to quantify their uncertainty, multi-model ensembles (MMEs) have been suggested. Model improvements can improve the accuracy of simulations and reduce the uncertainty of climate change impact assessments. Furthermore, they can reduce the number of models needed in a MME. Herein, 15 wheat growth models of a larger MME were improved through re-parameterization and/or incorporating or modifying heat stress effects on phenology, leaf growth and senescence, biomass growth, and grain number and size using detailed field experimental data from the USDA Hot Serial Cereal experiment (calibration data set). Simulation results from before and after model improvement were then evaluated with independent field experiments from a CIMMYT worldwide field trial network (evaluation data set). Model improvements decreased the variation (10th to 90th model ensemble percentile range) of grain yields simulated by the MME on average by 39% in the calibration data set and by 26% in the independent evaluation data set for crops grown in mean seasonal temperatures greater than 24 °C. MME mean squared error in simulating grain yield decreased by 37%. A reduction in MME uncertainty range by 27% increased MME prediction skills by 47%. Results suggest that the mean level of variation observed in field experiments and used as a benchmark can be reached with half the number of models in the MME. Improving crop models is therefore important to increase the certainty of model-based impact assessments and to allow more practical, i.e. smaller, MMEs to be used effectively.
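    The uncertainty metric used above (the 10th-to-90th percentile range of ensemble-simulated yields) is straightforward to compute. The yields below are invented numbers, included only to show how a range reduction between two ensemble versions would be measured.

```python
def percentile(values, p):
    """Percentile with linear interpolation, p in [0, 100]."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def ensemble_range(yields):
    """10th-90th percentile range across ensemble members."""
    return percentile(yields, 90) - percentile(yields, 10)

# Hypothetical grain yields (t/ha) from a 10-member ensemble, before and
# after a round of model improvement.
before = [2.1, 3.0, 3.4, 3.9, 4.2, 4.8, 5.5, 6.0, 6.6, 7.3]
after = [3.0, 3.4, 3.6, 3.9, 4.1, 4.4, 4.7, 5.0, 5.3, 5.8]

reduction = 1.0 - ensemble_range(after) / ensemble_range(before)
print(f"range reduced by {reduction:.0%}")  # range reduced by 47%
```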

  14. Peer Assessment with Online Tools to Improve Student Modeling

    Science.gov (United States)

    Atkins, Leslie J.

    2012-11-01

    Introductory physics courses often require students to develop precise models of phenomena and represent these with diagrams, including free-body diagrams, light-ray diagrams, and maps of field lines. Instructors expect that students will adopt a certain rigor and precision when constructing these diagrams, but we want that rigor and precision to be an aid to sense-making rather than meeting seemingly arbitrary requirements set by the instructor. By giving students the authority to develop their own models and establish requirements for their diagrams, the sense that these are arbitrary requirements diminishes and students are more likely to see modeling as a sense-making activity. The practice of peer assessment can help students take ownership; however, it can be difficult for instructors to manage. Furthermore, it is not without risk: students can be reluctant to critique their peers, they may view this as the job of the instructor, and there is no guarantee that students will employ greater rigor and precision as a result of peer assessment. In this article, we describe one approach for peer assessment that can establish norms for diagrams in a way that is student driven, where students retain agency and authority in assessing and improving their work. We show that such an approach does indeed improve students' diagrams and abilities to assess their own work, without sacrificing students' authority and agency.

  15. Use of a business excellence model to improve conservation programs.

    Science.gov (United States)

    Black, Simon; Groombridge, Jim

    2010-12-01

    The current shortfall in effectiveness within conservation biology is illustrated by increasing interest in "evidence-based conservation," whose proponents have identified the need to benchmark conservation initiatives against actions that lead to proven positive effects. The effectiveness of conservation policies, approaches, and evaluation is under increasing scrutiny, and in these areas models of excellence used in business could prove valuable. Typically, conservation programs require years of effort and involve rigorous long-term implementation processes. Successful balance of long-term efforts alongside the achievement of short-term goals is often compromised by management or budgetary constraints, a situation also common in commercial businesses. "Business excellence" is an approach many companies have used over the past 20 years to ensure continued success. Various business excellence evaluations have been promoted that include concepts that could be adapted and applied in conservation programs. We describe a conservation excellence model that shows how scientific processes and results can be aligned with financial and organizational measures of success. We applied the model to two well-documented species conservation programs. In the first, the Po'ouli program, several aspects of improvement were identified, such as more authority for decision making in the field and better integration of habitat management and population recovery processes. The second example, the black-footed ferret program, could have benefited from leadership effort to reduce bureaucracy and to encourage use of best-practice species recovery approaches. The conservation excellence model enables greater clarity in goal setting, more-effective identification of job roles within programs, better links between technical approaches and measures of biological success, and more-effective use of resources. 
The model could improve evaluation of a conservation program's effectiveness and may be

  16. Implications of improved Higgs mass calculations for supersymmetric models

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, O. [Imperial College, London (United Kingdom). High Energy Physics Group; Dolan, M.J. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Theory Group; Ellis, J. [King' s College, London (United Kingdom). Theoretical Particle Physics and Cosmology Group; and others

    2014-03-15

    We discuss the allowed parameter spaces of supersymmetric scenarios in light of improved Higgs mass predictions provided by FeynHiggs 2.10.0. The Higgs mass predictions combine Feynman-diagrammatic results with a resummation of leading and subleading logarithmic corrections from the stop/top sector, which yield a significant improvement in the region of large stop masses. Scans in the pMSSM parameter space show that, for given values of the soft supersymmetry-breaking parameters, the new logarithmic contributions beyond the two-loop order implemented in FeynHiggs tend to give larger values of the light CP-even Higgs mass, M{sub h}, in the region of large stop masses than previous predictions that were based on a fixed-order Feynman-diagrammatic result, though the differences are generally consistent with the previous estimates of theoretical uncertainties. We re-analyze the parameter spaces of the CMSSM, NUHM1 and NUHM2, taking into account also the constraints from CMS and LHCb measurements of BR(B{sub s}→μ{sup +}μ{sup -}) and ATLAS searches for missing-E{sub T} events using 20/fb of LHC data at 8 TeV. Within the CMSSM, the Higgs mass constraint disfavours tan β ≲ 10, though not in the NUHM1 or NUHM2.

  17. Improving the Evaluation Model for the Lithuanian Informatics Olympiads

    Directory of Open Access Journals (Sweden)

    Jūratė SKŪPIENĖ

    2010-04-01

    Full Text Available The Lithuanian Informatics Olympiads (LitIO) is a problem-solving programming contest for students in secondary education. The student's work to be evaluated is an algorithm designed by the student and implemented as a working program. The current evaluation process involves both automated grading (for correctness and performance of programs with the given input data) and manual grading (for programming style and the written motivation of an algorithm). However, it is based on tradition and has not been scientifically discussed and motivated. To create an improved and motivated evaluation model, we put together a questionnaire and asked a group of foreign and Lithuanian experts with experience in various informatics contests to respond. We identified two basic directions in the suggested evaluation models and made a choice based on the goals of LitIO. While designing the model, we took the suggestions and opinions of the experts into account as much as possible, even if they were not all included in the proposed model. The paper presents the final outcome of this work, the proposed evaluation model for the Lithuanian Informatics Olympiads.

  18. Improvement of airfoil trailing edge bluntness noise model

    Directory of Open Access Journals (Sweden)

    Wei Jun Zhu

    2016-02-01

    Full Text Available In this article, airfoil trailing edge bluntness noise is investigated using both computational aero-acoustic and semi-empirical approaches. For engineering purposes, some of the most commonly used prediction tools for trailing edge noise are based on semi-empirical approaches, for example, the Brooks, Pope, and Marcolini airfoil noise prediction model (NASA Reference Publication 1218, 1989). A previous study found that the Brooks, Pope, and Marcolini model tends to over-predict noise at high frequencies, and that this was caused by the model's inability to accurately predict noise from blunt trailing edges. For a more physical understanding of bluntness noise generation, in this study we also use an advanced in-house developed high-order computational aero-acoustic technique to investigate the details associated with trailing edge bluntness noise. The results from the numerical model form the basis for an improved Brooks, Pope, and Marcolini trailing edge bluntness noise model.

  19. Improved Dynamic Modeling of the Cascade Distillation Subsystem and Integration with Models of Other Water Recovery Subsystems

    Science.gov (United States)

    Perry, Bruce; Anderson, Molly

    2015-01-01

    The Cascade Distillation Subsystem (CDS) is a rotary multistage distiller being developed to serve as the primary processor for wastewater recovery during long-duration space missions. The CDS could be integrated with a system similar to the International Space Station (ISS) Water Processor Assembly (WPA) to form a complete Water Recovery System (WRS) for future missions. Independent chemical process simulations with varying levels of detail have previously been developed using Aspen Custom Modeler (ACM) to aid in the analysis of the CDS and several WPA components. The existing CDS simulation could not model behavior during thermal startup and lacked detailed analysis of several key internal processes, including heat transfer between stages. The first part of this paper describes modifications to the ACM model of the CDS that improve its capabilities and the accuracy of its predictions. Notably, the modified version of the model can accurately predict behavior during thermal startup for both NaCl solution and pretreated urine feeds. The model is used to predict how changing operating parameters and design features of the CDS affects its performance, and conclusions from these predictions are discussed. The second part of this paper describes the integration of the modified CDS model and the existing WPA component models into a single WRS model. The integrated model is used to demonstrate the effects that changes to one component can have on the dynamic behavior of the system as a whole.

  20. Improving predictive mapping of deep-water habitats: Considering multiple model outputs and ensemble techniques

    Science.gov (United States)

    Robert, Katleen; Jones, Daniel O. B.; Roberts, J. Murray; Huvenne, Veerle A. I.

    2016-07-01

    In the deep sea, biological data are often sparse; hence models capturing relationships between observed fauna and environmental variables (acquired via acoustic mapping techniques) are often used to produce full-coverage species assemblage maps. Many statistical modelling techniques are being developed, but there remains a need to determine the most appropriate mapping techniques. Predictive habitat modelling approaches (redundancy analysis, maximum entropy and random forest) were applied to a heterogeneous section of seabed on Rockall Bank, NE Atlantic, for which landscape indices describing the spatial arrangement of habitat patches were calculated. The predictive maps were based on remotely operated vehicle (ROV) imagery transects and high-resolution autonomous underwater vehicle (AUV) sidescan backscatter maps. Area under the curve (AUC) and accuracy indicated similar performances for the three models tested, but performance varied by species assemblage, with the transitional species assemblage showing the weakest predictive performance. Spatial predictions of habitat suitability differed between statistical approaches, but niche similarity metrics showed redundancy analysis and random forest predictions to be most similar. As no one statistical technique could be found to outperform the others when all assemblages were considered, ensemble mapping techniques, where the outputs of many models are combined, were applied. They showed higher accuracy than any single model. Different statistical approaches for predictive habitat modelling possess varied strengths and weaknesses, and by examining the outputs of a range of modelling techniques and their differences, more robust predictions, with better described variation and areas of uncertainty, can be achieved. As improvements to prediction outputs can be achieved without additional costly data collection, ensemble mapping approaches have clear value for spatial management.
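    In its simplest form, the ensemble step described above reduces to averaging per-model suitability probabilities before thresholding. The sketch below shows the mechanics with invented outputs from three hypothetical models named after the techniques in the abstract.

```python
def ensemble_mean(predictions):
    """Average per-cell habitat-suitability probabilities across models.
    predictions: list of equal-length probability lists, one per model."""
    n_models = len(predictions)
    return [sum(cell) / n_models for cell in zip(*predictions)]

# Invented per-cell probabilities for three models (RDA, MaxEnt, RF).
rda = [0.9, 0.2, 0.6, 0.4]
maxent = [0.8, 0.3, 0.7, 0.2]
rf = [0.7, 0.1, 0.8, 0.3]

combined = ensemble_mean([rda, maxent, rf])
presence = [p >= 0.5 for p in combined]
print([round(p, 2) for p in combined])  # [0.8, 0.2, 0.7, 0.3]
print(presence)                         # [True, False, True, False]
```

    More sophisticated ensembles weight each model by its evaluation score (e.g. AUC), but the averaging principle is the same.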