Sample records for model similar improvements

  1. Modeling of similar economies

    Directory of Open Access Journals (Sweden)

    Sergey B. Kuznetsov


    Objective: to obtain dimensionless criteria, i.e. economic indices that characterize a national economy independently of its size. Methods: mathematical modeling, the theory of dimensions, and statistical data processing. Results: starting from differential equations that describe a national economy subject to the resistance of its economic environment, two dimensionless criteria are obtained which allow economies to be compared regardless of their size. Using the theory of dimensions, we show that the obtained indices are not accidental. We demonstrate the use of the obtained dimensionless criteria for analyzing the behavior of certain countries' economies. Scientific novelty: the obtained dimensionless criteria (economic indices) allow economies to be compared regardless of their size and their dynamic changes to be analyzed over time. Practical significance: the obtained results can be used for dynamic and comparative analysis of different countries' economies regardless of their size.

  2. Continuous Improvement and Collaborative Improvement: Similarities and Differences

    DEFF Research Database (Denmark)

    Middel, Rick; Boer, Harry; Fisscher, Olaf


    A substantial body of theoretical and practical knowledge has been developed on continuous improvement. However, there is still a considerable lack of empirically grounded contributions and theories on collaborative improvement, that is, continuous improvement in an inter-organizational setting. The CO-IMPROVE project investigated whether and how the concept of continuous improvement can be extended and transferred to such settings. The objective of this article is to evaluate the CO-IMPROVE research findings in view of existing theories on continuous innovation. The article investigates the similarities and differences between key components of continuous and collaborative improvement by assessing what is specific to continuous improvement, what is specific to collaborative improvement, and where the two areas of application meet and overlap. The main conclusion is that there are many more similarities...

  3. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar


    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.
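    As an illustration of the flexible, problem-specific combination of model aspects described above, the following sketch weights per-aspect similarity scores into a single score; the aspect names, scores, and weights are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: combine per-aspect similarity scores (annotations,
# network structure, equations) into one weighted model similarity.

def combined_similarity(aspect_scores, weights):
    """Weighted mean of per-aspect similarities, each in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[a] * aspect_scores[a] for a in aspect_scores) / total

# Two models compared on three aspects (all numbers are made up).
scores = {"annotations": 0.9, "network": 0.6, "equations": 0.3}
weights = {"annotations": 2.0, "network": 1.0, "equations": 1.0}

print(round(combined_similarity(scores, weights), 3))  # 0.675
```

    Re-weighting lets the same aspect scores serve different search tasks, which is the flexibility the abstract calls for.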

  4. Multimodal Similarity Gaussian Process Latent Variable Model. (United States)

    Song, Guoli; Wang, Shuhui; Huang, Qingming; Tian, Qi


    Data from real applications involve multiple modalities representing content with the same semantics from complementary aspects. However, relations among heterogeneous modalities are simply treated as observation-to-fit by existing work, and the parameterized modality-specific mapping functions lack flexibility in directly adapting to the content divergence and semantic complicacy in multimodal data. In this paper, we build our work on the Gaussian process latent variable model (GPLVM) to learn non-parametric mapping functions and transform heterogeneous modalities into a shared latent space. We propose the multimodal Similarity Gaussian Process latent variable model (m-SimGP), which learns the mapping functions between the intra-modal similarities and the latent representation. We further propose the multimodal distance-preserved similarity GPLVM (m-DSimGP) to preserve the intra-modal global similarity structure, and the multimodal regularized similarity GPLVM (m-RSimGP), which encourages similar/dissimilar points to remain similar/dissimilar in the latent space. We also propose m-DRSimGP, which combines the distance preservation of m-DSimGP and the semantic preservation of m-RSimGP to learn the latent representation. The overall objective functions of the four models are solved by simple and scalable gradient descent techniques. They can be applied to various tasks to discover the nonlinear correlations and to obtain comparable low-dimensional representations for heterogeneous modalities. On five widely used real-world data sets, our approaches outperform existing models on cross-modal content retrieval and multimodal classification.

  5. Continuous Improvement and Collaborative Improvement: Similarities and Differences

    NARCIS (Netherlands)

    Middel, H.G.A.; Boer, Harm; Fisscher, O.A.M.


    A substantial body of theoretical and practical knowledge has been developed on continuous improvement. However, there is still a considerable lack of empirically grounded contributions and theories on collaborative improvement, that is, continuous improvement in an inter-organizational setting. The

  6. The similarity principle - on using models correctly

    DEFF Research Database (Denmark)

    Landberg, L.; Mortensen, N.G.; Rathmann, O.


    This paper presents some guiding principles for the most accurate use of the WAsP program in particular, but the principles can be applied to the use of any linear model which predicts some quantity at one location based on another. We have felt a need to lay these principles out explicitly, due to the many users and uses (and misuses) of the WAsP program. Put simply, the similarity principle states that one should choose a predictor site which, in as many ways as possible, is similar to the predicted site.

  7. Similarity metrics for surgical process models. (United States)

    Neumuth, Thomas; Loebe, Frank; Jannin, Pierre


    The objective of this work is to introduce a set of similarity metrics for comparing surgical process models (SPMs). SPMs are progression models of surgical interventions that support quantitative analyses of surgical activities, supporting systems engineering or process optimization. Five different similarity metrics are presented and proven. These metrics deal with several dimensions of process compliance in surgery, including granularity, content, time, order, and frequency of surgical activities. The metrics were experimentally validated using 20 clinical data sets each for cataract interventions, craniotomy interventions, and supratentorial tumor resections. The clinical data sets were controllably modified in simulations, which were iterated ten times, resulting in a total of 600 simulated data sets. The simulated data sets were subsequently compared to the original data sets to empirically assess the predictive validity of the metrics. We show that the results of the metrics for the surgical process models correlate significantly with the degree of modification, i.e., the metrics meet predictive validity. The clinical use of the metrics was demonstrated by assessing the learning curves of observers during surgical process model acquisition. Measuring similarity between surgical processes is a complex task. However, metrics for computing the similarity between surgical process models are needed in many applications in the field of medical engineering. These metrics are essential whenever two SPMs need to be compared, such as during the evaluation of technical systems, the education of observers, or the determination of surgical strategies. These metrics are key figures that provide a solid base for medical decisions, such as during the validation of sensor systems for use in operating rooms in the future. Copyright © 2011 Elsevier B.V. All rights reserved.
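    Two of the dimensions named above (content and order) can be illustrated with simple sequence measures; these are illustrative stand-ins, not the paper's five formal metrics, and the activity names are invented.

```python
# Illustrative sketch (not the paper's formal metrics): two simple similarity
# measures over surgical activity sequences -- content (Jaccard over the sets
# of activities) and order (normalised longest-common-subsequence length).

def content_similarity(a, b):
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def order_similarity(a, b):
    # LCS length via dynamic programming, normalised by the longer sequence.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n] / max(m, n)

p1 = ["incise", "coagulate", "suction", "suture"]   # hypothetical activities
p2 = ["incise", "suction", "coagulate", "suture"]
print(content_similarity(p1, p2))  # 1.0  (same set of activities)
print(order_similarity(p1, p2))    # 0.75 (one transposition)
```

    Separating the two measures shows how two SPMs can match perfectly on content while differing in order, which is why multiple metric dimensions are needed.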

  8. Improved collaborative filtering recommendation algorithm of similarity measure (United States)

    Zhang, Baofu; Yuan, Baoping


    Collaborative filtering is one of the most widely used algorithms in personalized recommender systems. The key is to find the nearest-neighbor set of the active user by using a similarity measure. However, traditional similarity measures mainly focus on the similarity over the items two users have both rated, and ignore the relationship between those common rated items and all the items a user rates. Moreover, because the rating matrix is very sparse, the traditional collaborative filtering recommendation algorithm is not highly efficient. In order to obtain better accuracy, this paper presents an improved similarity measure that considers the common preference between users, differences in rating scale, and the scores of common items; based on this measure, a collaborative filtering recommendation algorithm with improved similarity is proposed. Experimental results show that the algorithm can effectively improve the quality of recommendation and thus alleviate the impact of data sparseness.
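    A minimal sketch of the kind of adjustment described above, in which a common-item similarity is damped by how much of the users' rating history is actually shared; the exact weighting is our assumption, not the paper's formula.

```python
import math

# Sketch in the spirit of the paper (exact formula is our assumption): Pearson
# similarity over co-rated items, damped by the fraction of items the users
# co-rate, so users with very few common ratings are not over-trusted.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

def improved_sim(ratings_u, ratings_v):
    common = sorted(set(ratings_u) & set(ratings_v))
    if len(common) < 2:
        return 0.0
    base = pearson([ratings_u[i] for i in common], [ratings_v[i] for i in common])
    coverage = len(common) / len(set(ratings_u) | set(ratings_v))
    return base * coverage

u = {"i1": 5, "i2": 3, "i3": 4}   # hypothetical user->item ratings
v = {"i1": 4, "i2": 2, "i4": 5}
print(round(improved_sim(u, v), 3))  # 0.5: perfect co-rating agreement, half overlap
```

    The coverage factor is what ties the common-item similarity back to the users' full rating sets, addressing the sparsity issue the abstract raises.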

  9. Modeling Timbre Similarity of Short Music Clips. (United States)

    Siedenburg, Kai; Müllensiefen, Daniel


    There is evidence from a number of recent studies that most listeners are able to extract information related to song identity, emotion, or genre from music excerpts with durations in the range of tenths of seconds. Because of these very short durations, timbre, as a multifaceted auditory attribute, appears to be a plausible candidate for the type of feature that listeners make use of when processing short music excerpts. However, the importance of timbre in listening tasks that involve short excerpts has not yet been demonstrated empirically. Hence, the goal of this study was to develop a method that allows one to explore to what degree similarity judgments of short music clips can be modeled with low-level acoustic features related to timbre. We utilized similarity data from two large samples of participants: Sample I was obtained via an online survey, used 16 clips of 400 ms length, and contained responses from 137,339 participants. Sample II was collected in a lab environment, used 16 clips of 800 ms length, and contained responses from 648 participants. Our model used two sets of audio features, which included commonly used timbre descriptors and the well-known Mel-frequency cepstral coefficients, as well as their temporal derivatives. In order to predict pairwise similarities, the resulting distances between clips in terms of their audio features were used as predictor variables in partial least-squares regression. We found that a sparse selection of three to seven features from both descriptor sets (mainly encoding the coarse shape of the spectrum as well as spectrotemporal variability) best predicted similarities across the two sets of sounds. Notably, the inclusion of non-acoustic predictors of musical genre and record release date allowed much better generalization performance and explained up to 50% of the shared variance (R²) between observations and model predictions. Overall, the results of this study empirically demonstrate that both acoustic features related...
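    The modeling pipeline described above can be sketched as a regression from pairwise feature distances to similarity; since partial least squares is not in the standard library, this sketch swaps in ordinary least squares (np.linalg.lstsq), and all numbers are synthetic.

```python
import numpy as np

# Sketch of the modelling idea: pairwise audio-feature distances between clips
# serve as predictors of perceived similarity. The paper uses partial
# least-squares regression; here ordinary least squares is swapped in to keep
# the sketch self-contained. All data are synthetic.

rng = np.random.default_rng(0)
n_pairs, n_feats = 40, 3            # e.g. spectral-shape, flux, MFCC distances
X = rng.random((n_pairs, n_feats))  # feature-space distance per clip pair
true_w = np.array([-0.8, -0.3, -0.5])
y = 1.0 + X @ true_w                # similarity falls as distances grow

design = np.column_stack([np.ones(n_pairs), X])
w, *_ = np.linalg.lstsq(design, y, rcond=None)
print(np.allclose(w[1:], true_w))   # True: distance weights recovered
```

    With noiseless synthetic data the fit is exact; on real judgment data one would add regularization or, as in the paper, PLS to handle correlated descriptors.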

  10. Inter Genre Similarity Modelling For Automatic Music Genre Classification


    Bagci, Ulas; Erzin, Engin


    Music genre classification is an essential tool for music information retrieval systems and has found critical applications in various media platforms. Two important problems of automatic music genre classification are feature extraction and classifier design. This paper investigates inter-genre similarity modelling (IGS) to improve the performance of automatic music genre classification. Inter-genre similarity information is extracted over the misclassified feature population...

  11. Quasi-Similarity Model of Synthetic Jets

    Czech Academy of Sciences Publication Activity Database

    Tesař, Václav; Kordík, Jozef


    Roč. 149, č. 2 (2009), s. 255-265 ISSN 0924-4247 R&D Projects: GA AV ČR IAA200760705; GA ČR GA101/07/1499 Institutional research plan: CEZ:AV0Z20760514 Keywords : jets * synthetic jets * similarity solution Subject RIV: BK - Fluid Dynamics Impact factor: 1.674, year: 2009

  12. MAC/FAC: A Model of Similarity-Based Retrieval. (United States)

    Forbus, Kenneth D.; And Others


    Presents MAC/FAC, a model of similarity-based retrieval that attempts to capture psychological phenomena; discusses its limitations and extensions, its relationship with other retrieval models, and its placement in the context of other work on the nature of similarity. Examines the utility of the model through psychological experiments and…

  13. Modeling of Hysteresis in Piezoelectric Actuator Based on Segment Similarity

    Directory of Open Access Journals (Sweden)

    Rui Xiong


    To successfully exploit the full potential of piezoelectric actuators in micro/nano positioning systems, it is essential to model their hysteresis behavior accurately. A novel hysteresis model for piezoelectric actuators is proposed in this paper. First, segment similarity, which describes the similarity relationship between hysteresis curve segments with different turning points, is proposed. Time-scale similarity, which describes the similarity relationship between hysteresis curves with different rates, is then used to address dynamic effects. The proposed model is formulated using these similarities. Finally, experiments are performed on a micro/nanometer movement platform system. The effectiveness of the proposed model is verified by comparison with the Preisach model. The experimental results show that the proposed model precisely predicts the hysteresis trajectories of piezoelectric actuators and performs better than the Preisach model.

  14. Agile rediscovering values: Similarities to continuous improvement strategies (United States)

    Díaz de Mera, P.; Arenas, J. M.; González, C.


    Research in the late 1980s on technological companies that develop products of high innovation value, with sufficient speed and flexibility to adapt quickly to changing market conditions, gave rise to the new set of methodologies known as the Agile Management Approach. In the current changing economic scenario, we considered it very interesting to study the similarities of these Agile methodologies with other practices whose effectiveness has been amply demonstrated in both the West and Japan. Strategies such as Kaizen, Lean, World Class Manufacturing, and Concurrent Engineering are analyzed to identify the values they have in common with the Agile approach.

  15. Simulation and similarity using models to understand the world

    CERN Document Server

    Weisberg, Michael


    In the 1950s, John Reber convinced many Californians that the best way to solve the state's water shortage problem was to dam up the San Francisco Bay. Against massive political pressure, Reber's opponents persuaded lawmakers that doing so would lead to disaster. They did this not by empirical measurement alone, but also through the construction of a model. Simulation and Similarity explains why this was a good strategy while simultaneously providing an account of modeling and idealization in modern scientific practice. Michael Weisberg focuses on concrete, mathematical, and computational models in his consideration of the nature of models, the practice of modeling, and the nature of the relationship between models and real-world phenomena. In addition to a careful analysis of physical, computational, and mathematical models, Simulation and Similarity offers a novel account of the model/world relationship. Breaking with the dominant tradition, which favors the analysis of this relation through logical notions such...

  16. Improving protein structure similarity searches using domain boundaries based on conserved sequence information

    Directory of Open Access Journals (Sweden)

    Madej Tom


    Background: The identification of protein domains plays an important role in protein structure comparison. Domain query size and composition are critical to structure similarity search algorithms such as the Vector Alignment Search Tool (VAST), the method employed for computing related protein structures in the NCBI Entrez system. Currently, domains identified on the basis of structural compactness are used for VAST computations. In this study, we have investigated how alternative definitions of domains, derived from conserved sequence alignments in the Conserved Domain Database (CDD), would affect domain comparisons and the structure similarity search performance of VAST. Results: Alternative domains, which have significantly different secondary structure composition from those based on structurally compact units, were identified based on the alignment footprints of curated protein sequence domain families. Our analysis indicates that domain boundaries disagree on roughly 8% of protein chains in the medium-redundancy subset of the Molecular Modeling Database (MMDB). These sequence-based domain boundaries perform slightly better than structure domains in structure similarity searches, and there are interesting cases where structure similarity search performance is markedly improved. Conclusion: Structure similarity searches using domain boundaries based on conserved sequence information provide an additional method for investigators to identify interesting similarities between proteins with known structures. Because of the improvement in performance of structure similarity searches using sequence domain boundaries, we are in the process of implementing their inclusion into the VAST search and MMDB resources in the NCBI Entrez system.

  17. Perception of similarity: a model for social network dynamics

    International Nuclear Information System (INIS)

    Javarone, Marco Alberto; Armano, Giuliano


    Some properties of social networks (e.g., the mixing patterns and the community structure) appear to be deeply influenced by the individual perception of people. In this work we map behaviors by considering the similarity and popularity of people, also assuming that each person has his/her own perception and interpretation of similarity. Although investigated in different ways (depending on the specific scientific framework), from a computational perspective similarity is typically calculated as a distance measure. In accordance with this view, to represent social network dynamics we developed an agent-based model on top of a hyperbolic space on which individual distance measures are calculated. Simulations, performed in accordance with the proposed model, generate small-world networks that exhibit a community structure. We deem this model to be valuable for analyzing the relevant properties of real social networks. (paper)
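    Agent-based models of this kind rest on a distance measure in the hyperbolic plane; a minimal sketch of the standard hyperbolic distance formula for points in native polar coordinates follows (the paper's specific agent dynamics are not reproduced).

```python
import math

# Standard hyperbolic distance between two points given in native polar
# coordinates (r, theta) on the hyperbolic plane. Models like the one above
# build on such a measure; the agent dynamics themselves are not shown here.

def hyperbolic_distance(r1, t1, r2, t2):
    # Smallest angular separation in [0, pi].
    dtheta = math.pi - abs(math.pi - abs(t1 - t2) % (2 * math.pi))
    x = math.cosh(r1) * math.cosh(r2) - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta)
    return math.acosh(max(x, 1.0))   # guard against rounding just below 1

# Same point -> distance 0; same angle -> distance reduces to |r1 - r2|.
print(round(hyperbolic_distance(2.0, 0.3, 2.0, 0.3), 6))  # 0.0
print(round(hyperbolic_distance(3.0, 1.0, 1.0, 1.0), 6))  # 2.0
```

    In such models, radial coordinates typically encode popularity and angular coordinates encode similarity, so this one distance combines both ingredients.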

  18. Opinion Dynamics of Social-Similarity-Based Hegselmann–Krause Model

    Directory of Open Access Journals (Sweden)

    Xi Chen


    The existing opinion dynamics models mainly concentrate on the impact of opinions on other opinions and ignore the effect of social similarity between individuals. In real life, the social similarity between an individual and their neighbors also affects their opinions. Therefore, an opinion evolution model considering social similarity (the social-similarity-based HK model, SSHK model for short) is introduced in this paper. Social similarity is calculated using individual properties and is used to measure the social relationship between individuals. By considering the joint effect of confidence bounds and social similarity in this model, the role of neighbor selection is changed significantly in the process of opinion evolution. Numerical results demonstrate that the new model can not only reproduce the salient features of opinion dynamics, namely fragmentation, polarization, and consensus, but can also achieve consensus more easily under an appropriate similarity threshold. In addition, improved models with heterogeneous and homogeneous confidence bounds and similarity thresholds are also discussed. We found that the improved heterogeneous SSHK model could reach opinion consensus more easily than the homogeneous SSHK model and the classical models when the confidence bound was related to the similarity threshold. This finding provides a new way of thinking and a theoretical basis for the guidance of public opinion in real life.
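    Our reading of the similarity-gated update can be sketched as follows; the similarity matrix, confidence bound, and threshold values are illustrative, not taken from the paper.

```python
# Minimal sketch of a similarity-gated Hegselmann-Krause update (our reading of
# the SSHK idea; the similarity function and thresholds are illustrative): an
# agent averages the opinions of neighbours that are BOTH within the confidence
# bound in opinion space and above a social-similarity threshold.

def sshk_step(opinions, similarity, eps, theta):
    new = []
    for i, xi in enumerate(opinions):
        nbrs = [xj for j, xj in enumerate(opinions)
                if abs(xj - xi) <= eps and similarity[i][j] >= theta]
        new.append(sum(nbrs) / len(nbrs))  # agent i always qualifies itself
    return new

ops = [0.1, 0.2, 0.8]
sim = [[1.0, 0.9, 0.2],          # made-up pairwise social similarities
       [0.9, 1.0, 0.3],
       [0.2, 0.3, 1.0]]
print([round(x, 3) for x in sshk_step(ops, sim, eps=0.3, theta=0.5)])  # [0.15, 0.15, 0.8]
```

    The third agent stays isolated both by opinion distance and by low similarity, illustrating how the similarity gate reshapes neighbor selection relative to the classical HK model.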

  19. Bianchi VI0 and III models: self-similar approach

    International Nuclear Information System (INIS)

    Belinchon, Jose Antonio


    We study several cosmological models with Bianchi VI0 and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive with respect to the equation of state. We also study a perfect fluid model with time-varying constants G and Λ. As in other models studied, we find that the behaviours of G and Λ are related: if G behaves as a growing function of time, then Λ is a positive decreasing function of time, but if G is decreasing, then Λ is negative. We end by studying a massive cosmic string model, putting special emphasis on calculating the numerical values of the equations of state. We show that there is no self-similar solution for a string model with time-varying constants.

  20. Semantic Similarity, Predictability, and Models of Sentence Processing (United States)

    Roland, Douglas; Yun, Hongoak; Koenig, Jean-Pierre; Mauner, Gail


    The effects of word predictability and shared semantic similarity between a target word and other words that could have taken its place in a sentence on language comprehension are investigated using data from a reading time study, a sentence completion study, and linear mixed-effects regression modeling. We find that processing is facilitated if…

  1. Morphological similarities between DBM and a microeconomic model of sprawl (United States)

    Caruso, Geoffrey; Vuidel, Gilles; Cavailhès, Jean; Frankhauser, Pierre; Peeters, Dominique; Thomas, Isabelle


    We present a model that simulates the growth of a metropolitan area on a 2D lattice. The model is dynamic and based on microeconomics. Households show preferences for nearby open spaces and neighbourhood density. They compete on the land market. They travel along a road network to access the CBD. A planner ensures the connectedness and maintenance of the road network. The spatial pattern of houses, green spaces and road network self-organises, emerging from the agents' individualistic decisions. We perform several simulations and vary residential preferences. Our results show morphologies and transition phases that are similar to Dielectric Breakdown Models (DBM). Such similarities were observed earlier by other authors, but we show here that they can be deduced from the functioning of the land market and thus explicitly connected to urban economic theory.

  2. Exploring information from the topology beneath the Gene Ontology terms to improve semantic similarity measures. (United States)

    Zhang, Shu-Bo; Lai, Jian-Huang


    Measuring the similarity between pairs of biological entities is important in molecular biology. The introduction of the Gene Ontology (GO) provides a promising approach to quantifying the semantic similarity between two genes or gene products. This kind of similarity measure is closely associated with the GO terms annotated to the biological entities under consideration and the structure of the GO graph. However, previous work in this field has mainly focused on the upper part of the graph and seldom considered the lower part. In this study, we aim to explore information from the lower part of the GO graph for better semantic similarity. We propose a framework to quantify the similarity measure beneath a term pair, which takes into account both the information two ancestral terms share and the probability that they co-occur with their common descendants. The effectiveness of our approach was evaluated against seven typical measurements on the public platform CESSM, and on protein-protein interaction and gene expression datasets. Experimental results consistently show that the similarity derived from the lower part contributes to better semantic similarity measures. The promising features of our approach are the following: (1) it provides a mirror model to characterize the information two ancestral terms share with respect to their common descendant; (2) it quantifies the probability that two terms co-occur with their common descendant in an efficient way; and (3) our framework can effectively capture the similarity measure beneath two terms, which can serve as an add-on to improve traditional semantic similarity measures between two GO terms. The algorithm was implemented in Matlab and is freely available. Copyright © 2016 Elsevier B.V. All rights reserved.
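    The idea of scoring a term pair by information beneath it can be illustrated on a toy ontology; the Jaccard-style overlap of descendant sets used here is a simplified stand-in for the paper's measure, and the term names are invented.

```python
# Illustrative only (not the paper's exact measure): score a GO-like term pair
# by the overlap of their descendant sets in a toy DAG, capturing the
# "information beneath" the two terms rather than above them.

children = {                      # toy ontology, edges parent -> children
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["E", "F"],
    "D": [], "E": [], "F": [],
}

def descendants(term, graph):
    out = set()
    stack = list(graph[term])
    while stack:
        t = stack.pop()
        if t not in out:
            out.add(t)
            stack.extend(graph[t])
    return out

def beneath_similarity(a, b, graph):
    da, db = descendants(a, graph), descendants(b, graph)
    return len(da & db) / len(da | db) if da | db else 0.0

print(round(beneath_similarity("B", "C", children), 3))  # 0.333: E is shared
```

    Terms whose subtrees overlap heavily score high even when their ancestors carry little information, which is the complementary signal the abstract argues for.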

  3. 3D Pharmacophoric Similarity improves Multi Adverse Drug Event Identification in Pharmacovigilance (United States)

    Vilar, Santiago; Tatonetti, Nicholas P.; Hripcsak, George


    Adverse drug event (ADE) detection is a considerable concern in patient safety and public health care. For this reason, it is important to develop methods that improve ADE signal detection in pharmacovigilance databases. Our objective is to apply 3D pharmacophoric similarity models to enhance ADE recognition in Offsides, a pharmacovigilance resource with drug-ADE associations extracted from the FDA Adverse Event Reporting System (FAERS). We developed a multi-ADE predictor implementing 3D drug similarity based on a pharmacophoric approach, with an ADE reference standard extracted from the SIDER database. The results show that the application of our 3D multi-type ADE predictor to the pharmacovigilance data in Offsides improved ADE identification and generated enriched sets of drug-ADE signals. The global ROC curve for the Offsides ADE candidates ranked by the 3D similarity score showed an area of 0.7. The 3D predictor also allows identification of the most similar drug that causes the ADE under study, which could provide hypotheses about mechanisms of action and ADE etiology. Our method is useful in drug development, for screening potential adverse effects of experimental drugs, and in drug safety, where it is applicable to the evaluation of ADE signals selected through pharmacovigilance data mining.


    Directory of Open Access Journals (Sweden)

    Mikhail A. Goncharov


    The publication is aimed at comparing the concepts of V.V. Belousov and M.V. Gzovsky, outstanding researchers who established the fundamentals of tectonophysics in Russia, specifically similarity conditions as applied to tectonophysical modeling. Quotations from their publications illustrate the differences in their views. In this respect, we can reckon V.V. Belousov as a «realist», as he supported «the liberal point of view» [Methods of modelling…, 1988, p. 21–22], whereas M.V. Gzovsky can be regarded as an «idealist», as he believed that similarity conditions should be mandatorily applied to ensure the correctness of physical modeling of tectonic deformations and structures [Gzovsky, 1975, pp. 88 and 94]. The objectives of the present publication are (1) to serve as another reminder of the desirability of compliance with similarity conditions in experimental tectonics; (2) to point out difficulties in ensuring such compliance; (3) to give examples showing that similarity conditions are often met per se, i.e. automatically observed; and (4) to show that modeling can be simplified in some cases without compromising quantitative estimations of the parameters of structure formation. (1) Physical modeling of tectonic deformations and structures should be conducted, if possible, in compliance with conditions of geometric and physical similarity between experimental models and the corresponding natural objects. In any case, a researcher should have a clear vision of the conditions applicable to each particular experiment. (2) Application of similarity conditions is often challenging due to unavoidable difficulties caused by the following: (a) imperfection of experimental equipment and technologies (Fig. 1 to 3); (b) uncertainties in estimating the parameters of formation of natural structures, including the main ones: structure size (Fig. 4), time of formation (Fig. 5), and the deformation properties of the medium wherein such structures are formed, including, first of all, viscosity (Fig. 6)...

  5. The continuous similarity model of bulk soil-water evaporation (United States)

    Clapp, R. B.


    The continuous similarity model of evaporation is described. In it, evaporation is conceptualized as a two stage process. For an initially moist soil, evaporation is first climate limited, but later it becomes soil limited. During the latter stage, the evaporation rate is termed evaporability, and mathematically it is inversely proportional to the evaporation deficit. A functional approximation of the moisture distribution within the soil column is also included in the model. The model was tested using data from four experiments conducted near Phoenix, Arizona; and there was excellent agreement between the simulated and observed evaporation. The model also predicted the time of transition to the soil limited stage reasonably well. For one of the experiments, a third stage of evaporation, when vapor diffusion predominates, was observed. The occurrence of this stage was related to the decrease in moisture at the surface of the soil. The continuous similarity model does not account for vapor flow. The results show that climate, through the potential evaporation rate, has a strong influence on the time of transition to the soil limited stage. After this transition, however, bulk evaporation is independent of climate until the effects of vapor flow within the soil predominate.
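    The two-stage scheme described above can be sketched numerically; the coefficients, the stage-transition threshold, and the (1 + deficit) regularization of the inverse-deficit rate are illustrative choices of ours, not Clapp's formulation.

```python
# Sketch of the two-stage idea (all coefficients are made up, not Clapp's):
# while the soil is moist, evaporation proceeds at the climate-limited
# potential rate; once soil limited, the rate ("evaporability") is taken
# inversely proportional to the cumulative evaporation deficit, regularised
# by (1 + deficit) so the transition step is well defined.

def simulate(e_pot, k, threshold, days):
    """Daily evaporation under the two-stage scheme."""
    cumulative, deficit, daily = 0.0, 0.0, []
    for _ in range(days):
        if cumulative < threshold:              # stage 1: climate limited
            e = e_pot
        else:                                   # stage 2: soil limited
            e = min(e_pot, k / (1.0 + deficit))
        daily.append(e)
        cumulative += e
        deficit += e_pot - e                    # shortfall behind potential
    return daily

rates = simulate(e_pot=5.0, k=2.0, threshold=20.0, days=8)
print([round(r, 2) for r in rates])  # [5.0, 5.0, 5.0, 5.0, 2.0, 0.5, 0.24, 0.15]
```

    The declining stage-2 tail mirrors the abstract's point that after the transition, bulk evaporation becomes largely independent of climate.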

  6. Self-similar two-particle separation model

    DEFF Research Database (Denmark)

    Lüthi, Beat; Berg, Jacob; Ott, Søren


    We present a new stochastic model for relative two-particle separation in turbulence. Inspired by material line stretching, we suggest that a similar process also occurs beyond the viscous range, with time scaling according to the longitudinal second-order structure function S2(r), e.g., in the inertial range as ε^(−1/3) r^(2/3). Particle separation is modeled as a Gaussian process without invoking information on Eulerian acceleration statistics or on the precise shapes of Eulerian velocity distribution functions. The time scale is a function of S2(r) and thus of the Lagrangian evolving separation. The model predictions agree with numerical and experimental results for various initial particle separations. We present model results for fixed-time and fixed-scale statistics. We find that for the Richardson-Obukhov law, i.e., ⟨r²⟩ = g ε t³, to hold and also be observed in experiments, high Reynolds...
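    The two inertial-range scalings quoted above can be evaluated directly; the constants C2 and g below are illustrative, not fitted values from the paper.

```python
# Quick numerical companion to the scalings quoted above (the prefactors c2
# and g are illustrative, not fitted): the inertial-range structure function
# S2(r) = c2 * (eps * r)**(2/3) and the Richardson-Obukhov law <r^2> = g*eps*t^3.

def s2(r, eps, c2=2.1):
    """Longitudinal second-order structure function, inertial-range form."""
    return c2 * (eps * r) ** (2 / 3)

def richardson_r2(t, eps, g=0.5):
    """Mean-square pair separation under the Richardson-Obukhov law."""
    return g * eps * t ** 3

eps = 0.1  # energy dissipation rate, m^2/s^3 (made-up value)
print(round(s2(0.01, eps), 4))            # 0.021  : S2 at r = 1 cm
print(round(richardson_r2(2.0, eps), 3))  # 0.4    : <r^2> at t = 2 s
```

    Note that the time scale ε^(−1/3) r^(2/3) mentioned in the abstract is exactly r divided by the characteristic velocity sqrt(S2(r)) up to a constant, which is what ties the stochastic model to S2.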

  7. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    International Nuclear Information System (INIS)

    Ovacik, Meric A.; Androulakis, Ioannis P.


    Pathway-based information has become an important source for both establishing evolutionary relationships and understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison to inform evolutionary relationships, and extrapolation of species differences for applications such as drug and toxicity testing. Cross-species comparison of metabolic pathways is complex, as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for enzyme sequence similarity in addition to reaction alignment. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing the conservation of the glycolysis and citrate cycle pathways. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism, as well as a more complete study involving 36 pathways common to all nine species. Our results indicate that reaction alignment with enzyme sequence similarity yields a more accurate representation of pathway-specific cross-species similarities and differences based on NCBI taxonomy.
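
    The combination described above, a reaction-alignment score augmented with enzyme sequence similarity, can be sketched as a simple weighted blend. The function names, the Jaccard form of the alignment score, and the weight w are hypothetical illustrations, not the paper's formulation:

```python
def reaction_jaccard(reactions_a, reactions_b):
    """Reaction-alignment score: fraction of reactions shared between two
    species' versions of the same pathway (Jaccard overlap)."""
    a, b = set(reactions_a), set(reactions_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def combined_similarity(reactions_a, reactions_b, enzyme_identity, w=0.5):
    """Blend reaction alignment with enzyme sequence similarity.

    enzyme_identity: mean pairwise sequence identity (0..1) of shared enzymes.
    w: weight on the reaction-alignment term (assumed tuning parameter).
    """
    return w * reaction_jaccard(reactions_a, reactions_b) + (1.0 - w) * enzyme_identity
```

    Two species sharing every reaction but with divergent enzyme sequences then score lower than species sharing both reactions and sequences, which is the intuition behind the improved method.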

  8. DAMA and the self-similar infall halo model

    International Nuclear Information System (INIS)

    Natarajan, Aravind


    The annual modulation in the rate of weakly interacting massive particle (WIMP) recoils observed by the DAMA Collaboration at high significance is often analyzed in the context of an isothermal Maxwell-Boltzmann velocity distribution. While this is the simplest model, there is a need to consider other well motivated theories of halo formation. In this paper, we study a different halo model, that of self-similar infall, which is characterized by the presence of a number of cold streams and caustics not seen in simulations. It is shown that the self-similar infall model is consistent with the DAMA result both in amplitude and in phase, for WIMP masses exceeding ≅250 GeV at the 99.7% confidence level. Adding a small thermal component makes the parameter space near m_χ = 12 GeV consistent with the self-similar model. The minimum χ² per degree of freedom is found to be 0.92 (1.03) with (without) channeling taken into account, indicating an acceptable fit. For WIMP masses much greater than the mass of the target nucleus, the recoil rate depends only on the ratio σ_p/m_χ, which is found to be ≅0.06 femtobarn/TeV. However, as in the case of the isothermal halo, the allowed parameter space is inconsistent with the null result obtained by the CDMS and XENON experiments for spin-independent elastic scattering. Future experiments with directional sensitivity and mass bounds from accelerator experiments will help to distinguish between different halo models and/or constrain the contribution from cold flows.

  9. A canopy-type similarity model for wind farm optimization (United States)

    Markfort, Corey D.; Zhang, Wei; Porté-Agel, Fernando


    The atmospheric boundary layer (ABL) flow through and over wind farms has been found to be similar to canopy-type flows, with characteristic flow development and shear penetration length scales (Markfort et al., 2012). Wind farms capture momentum from the ABL both at the leading edge and from above. We examine this further with an analytical canopy-type model. Within the flow development region, momentum is advected into the wind farm and wake turbulence draws excess momentum in from between turbines. This spatial heterogeneity of momentum within the wind farm is characterized by large dispersive momentum fluxes. Once the flow within the farm is developed, the area-averaged velocity profile exhibits a characteristic inflection point near the top of the wind farm, similar to that of canopy-type flows. The inflected velocity profile is associated with the presence of a dominant characteristic turbulence scale, which may be responsible for a significant portion of the vertical momentum flux. Prediction of this scale is useful for determining the amount of available power for harvesting. The new model is tested with results from wind tunnel experiments, which were conducted to characterize the turbulent flow in and above model wind farms in aligned and staggered configurations. The model is useful for representing wind farms in regional scale models, for the optimization of wind farms considering wind turbine spacing and layout configuration, and for assessing the impacts of upwind wind farms on nearby wind resources. Markfort CD, W Zhang and F Porté-Agel. 2012. Turbulent flow and scalar transport through and over aligned and staggered wind farms. Journal of Turbulence. 13(1) N33: 1-36. doi:10.1080/14685248.2012.709635.

  10. Vere-Jones' self-similar branching model

    International Nuclear Information System (INIS)

    Saichev, A.; Sornette, D.


    Motivated by its potential application to earthquake statistics as well as by its intrinsic interest in the theory of branching processes, we study the exactly self-similar branching process introduced recently by Vere-Jones. This model extends the ETAS class of conditional self-excited branching point-processes of triggered seismicity by removing the problematic need for a minimum (as well as maximum) earthquake size. To make the theory convergent without the need for the usual ultraviolet and infrared cutoffs, the distribution of magnitudes m′ of first-generation daughters of a mother of magnitude m has two branches, m′ < m and m′ > m, the latter with exponent β+d, where β and d are two positive parameters. We investigate the condition and nature of the subcritical, critical, and supercritical regimes in this and in an extended version interpolating smoothly between several models. We predict that the distribution of magnitudes of events triggered by a mother of magnitude m over all generations also has two branches, m′ < m and m′ > m, the latter with exponent β+h, where h = d√(1−s) and s is the fraction of triggered events. This corresponds to a renormalization of the exponent d into h by the hierarchy of successive generations of triggered events. For a significant part of the parameter space, the distribution of magnitudes over a full catalog summed over an average steady flow of spontaneous sources (immigrants) reproduces the distribution of the spontaneous sources with a single branch and is blind to the exponents β, d of the distribution of triggered events. Since the distribution of earthquake magnitudes is usually obtained with catalogs including many sequences, we conclude that the two branches of the distribution of aftershocks are not directly observable and the model is compatible with real seismic catalogs. In summary, the exactly self-similar Vere-Jones model provides an attractive new approach to modeling triggered seismicity, which alleviates delicate questions on the role of

  11. Face Recognition Performance Improvement using a Similarity Score of Feature Vectors based on Probabilistic Histograms

    Directory of Open Access Journals (Sweden)



    Full Text Available This paper proposes an improved face recognition algorithm aimed at identifying mismatched face pairs that lead to incorrect decisions. The primary feature of this method is to deploy a similarity score with respect to Gaussian components between two previously unseen faces. Unlike conventional vector distance measurement, our algorithm also considers the plot of the summation of the similarity index versus face feature vector distance. A mixture of Gaussian models of labeled faces is also widely applicable to different biometric system parameters. Comparative evaluations show that the proposed algorithm improves average accuracy by up to 1.15% and 16.87% over 3x3 Multi-Region Histogram (MRH) direct bag-of-features and Principal Component Analysis (PCA)-based face recognition systems, respectively. The experimental results show that similarity score consideration is more discriminative for face recognition than feature distance. Experimental results on the Labeled Faces in the Wild (LFW) data set demonstrate that our algorithm is suitable for real probe-to-gallery identification applications in face recognition systems. Moreover, the proposed method can also be applied to other recognition systems, where it likewise improves recognition scores.

  12. Model-observer similarity, error modeling and social learning in rhesus macaques.

    Directory of Open Access Journals (Sweden)

    Elisabetta Monfardini

    Full Text Available Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended on the contrary to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, as preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.

  13. Towards Modelling Variation in Music as Foundation for Similarity

    NARCIS (Netherlands)

    Volk, A.; de Haas, W.B.; van Kranenburg, P.; Cambouropoulos, E.; Tsougras, C.; Mavromatis, P.; Pastiadis, K.


    This paper investigates the concept of variation in music from the perspective of music similarity. Music similarity is a central concept in Music Information Retrieval (MIR); however, no comprehensive approach to music similarity exists yet. As a consequence, MIR faces the challenge of how to

  14. Structural similarity and descriptor spaces for clustering and development of QSAR models. (United States)

    Ruiz, Irene Luque; García, Gonzalo Cerruela; Gómez-Nieto, Miguel Angel


    In this paper we study and analyze the behavior of different representational spaces for the clustering and building of QSAR models. Representational spaces based on fingerprint similarity and on structural similarity using maximum common subgraph (MCS) and all maximum common subgraphs (AMCS) approaches are compared against representational spaces based on structural fragments and non-isomorphic fragments (NIF), built using different molecular descriptors. Algorithms for the extraction of MCS, AMCS and NIF are described, and a support vector machine is used for the classification of a dataset of 74 compounds of 1,4-benzoquinone derivatives. Molecular descriptors are tested in order to build QSAR models for the prediction of the antifungal activity of the dataset. Descriptors based on the consideration of graph connectivity and distances are the most appropriate for building QSAR models. Moreover, models based on approximate similarity improve the statistical quality of the equations by combining structural similarity, non-isomorphic fragment and descriptor approaches for the creation of more robust and finer prediction equations.

  15. Visual reconciliation of alternative similarity spaces in climate modeling (United States)

    J Poco; A Dasgupta; Y Wei; William Hargrove; C.R. Schwalm; D.N. Huntzinger; R Cook; E Bertini; C.T. Silva


    Visual data analysis often requires grouping of data objects based on their similarity. In many application domains researchers use algorithms and techniques like clustering and multidimensional scaling to extract groupings from data. While extracting these groups using a single similarity criteria is relatively straightforward, comparing alternative criteria poses...

  16. Self-similar solution for coupled thermal electromagnetic model ...

    African Journals Online (AJOL)

    An investigation into the existence and uniqueness of the self-similar solution of the coupled Maxwell and Pennes bio-heat equations has been carried out. Criteria for the existence and uniqueness of the self-similar solution are established in the consequent theorems. Journal of the Nigerian Association of Mathematical Physics ...

  17. A Model-Based Approach to Constructing Music Similarity Functions (United States)

    West, Kris; Lamere, Paul


    Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.

  18. A Model-Based Approach to Constructing Music Similarity Functions

    Directory of Open Access Journals (Sweden)

    Lamere Paul


    Full Text Available Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.

  19. Similarities between obesity in pets and children : the addiction model

    NARCIS (Netherlands)

    Pretlow, Robert A; Corbee, Ronald J


    Obesity in pets is a frustrating, major health problem. Obesity in human children is similar. Prevailing theories accounting for the rising obesity rates - for example, poor nutrition and sedentary activity - are being challenged. Obesity interventions in both pets and children have produced modest

  20. The predictive model on the user reaction time using the information similarity

    International Nuclear Information System (INIS)

    Lee, Sung Jin; Heo, Gyun Young; Chang, Soon Heung


    Human performance is frequently degraded because people forget. Memory is one of the brain processes that are important when trying to understand how people process information. Although a large number of studies have been made on human performance, little is known about the similarity effect on human performance. The purpose of this paper is to propose and validate a quantitative, predictive model of human response time in the user interface based on the concept of similarity. However, it is not easy to explain human performance with similarity or information amount alone. We are confronted by two difficulties: making a quantitative model of human response time based on similarity, and validating the proposed model by experimental work. We made the quantitative model based on Hick's law and the law of practice. In addition, we validated the model under various experimental conditions by measuring participants' response time in a computer-based display environment. Experimental results reveal that human performance is improved by the user interface's similarity. We think that the proposed model is useful for the user interface design and evaluation phases.
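
    A minimal sketch of the two ingredients the abstract names, Hick's law and the power law of practice. The coefficients a, b and alpha are illustrative placeholders, not the fitted values from the experiments:

```python
import math

def hick_response_time(n_choices, a=0.2, b=0.15):
    """Hick's law: response time grows with log2(n + 1) of the
    number of equally likely alternatives in the interface."""
    return a + b * math.log2(n_choices + 1)

def practiced_response_time(rt0, n_trials, alpha=0.3):
    """Power law of practice: responses to repeated (similar) stimuli
    speed up as a power of the number of practice trials."""
    return rt0 * n_trials ** (-alpha)
```

    A similarity-based model in this spirit would lower the effective number of alternatives (more similar interface elements behave like fewer distinct choices) and credit prior trials on similar interfaces as practice.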

  1. An improved method for scoring protein-protein interactions using semantic similarity within the gene ontology

    Directory of Open Access Journals (Sweden)

    Jain Shobhit


    Full Text Available Abstract Background Semantic similarity measures are useful to assess the physiological relevance of protein-protein interactions (PPIs). They quantify similarity between proteins based on their function using annotation systems like the Gene Ontology (GO). Proteins that interact in the cell are likely to be in similar locations or involved in similar biological processes compared to proteins that do not interact. Thus, the more semantically similar the gene function annotations are among the interacting proteins, the more likely the interaction is physiologically relevant. However, most semantic similarity measures used for PPI confidence assessment do not consider the unequal depth of term hierarchies in different classes of the cellular location, molecular function, and biological process ontologies of GO and thus may over- or under-estimate similarity. Results We describe an improved algorithm, Topological Clustering Semantic Similarity (TCSS), to compute semantic similarity between GO terms annotated to proteins in interaction datasets. Our algorithm considers the unequal depth of biological knowledge representation in different branches of the GO graph. The central idea is to divide the GO graph into sub-graphs and score PPIs higher if the participating proteins belong to the same sub-graph than if they belong to different sub-graphs. Conclusions The TCSS algorithm performs better than the other semantic similarity measurement techniques that we evaluated in terms of distinguishing true from false protein interactions and of correlation with gene expression and protein families. We show an average improvement of 4.6 times the F1 score over Resnik, the next best method, on our Saccharomyces cerevisiae PPI dataset and 2 times on our Homo sapiens PPI dataset using cellular component, biological process and molecular function GO annotations.
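
    For context, the Resnik baseline that TCSS is compared against scores two GO terms by the information content of their most informative common ancestor. A minimal sketch on a hypothetical toy ontology (not the TCSS sub-graph algorithm itself):

```python
import math

def resnik_similarity(t1, t2, ancestors, term_prob):
    """Resnik similarity: information content -log p(t) of the most
    informative common ancestor (MICA) of two GO terms.

    ancestors: maps each term to the set of its ancestors (term included).
    term_prob: maps each term to its annotation frequency p(t).
    """
    common = ancestors[t1] & ancestors[t2]
    if not common:
        return 0.0
    return max(-math.log(term_prob[t]) for t in common)

# Hypothetical toy ontology: root -> A -> {B, C}
TERM_PROB = {"root": 1.0, "A": 0.5, "B": 0.25, "C": 0.25}
ANCESTORS = {"B": {"root", "A", "B"}, "C": {"root", "A", "C"}}
```

    TCSS refines this picture by first partitioning the GO graph into sub-graphs, so that terms deep in a sparse branch are not unfairly penalized relative to terms in a densely annotated branch.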

  2. Differences and similarities in breast cancer risk assessment models in clinical practice : which model to choose?

    NARCIS (Netherlands)

    Jacobi, Catharina E.; de Bock, Geertruida H.; Siegerink, Bob; van Asperen, Christi J.

    To show differences and similarities between risk estimation models for breast cancer in healthy women from BRCA1/2-negative or untested families. After a systematic literature search seven models were selected: Gail-2, Claus Model, Claus Tables, BOADICEA, Jonker Model, Claus-Extended Formula, and

  3. Self-similar Gaussian processes for modeling anomalous diffusion (United States)

    Lim, S. C.; Muniandy, S. V.


    We study some Gaussian models for anomalous diffusion, which include the time-rescaled Brownian motion, two types of fractional Brownian motion, and models associated with fractional Brownian motion based on the generalized Langevin equation. Gaussian processes associated with these models satisfy the anomalous diffusion relation, which requires the mean-square displacement to vary with t^α, 0 < α < 2. Since the fractional Brownian motions and time-rescaled Brownian motion all have the same probability distribution function, the Slepian theorem can be used to compare their first passage time distributions, which are different. Finally, in order to model anomalous diffusion with a variable exponent α(t) it is necessary to consider the multifractional extensions of these Gaussian processes.

  4. Similarity conditions for investigations of hydraulic-thermal tidal models

    International Nuclear Information System (INIS)

    Fluegge, G.; Schwarze, H.


    With the construction of nuclear power plants near German tidal estuaries in mind, investigations of mixing and spreading processes which occur during the discharge of heated cooling water in tidal waters were carried out in hydraulic-thermal tidal models of the Lower Weser and Lower Elbe by the Franzius Institute for hydraulic and coastal engineering of the Technical University Hannover. This contribution discusses in detail the problems met and the experience gained in constructing and operating these models. (orig./TK) [de

  5. Similarity and accuracy of mental models formed during nursing handovers: A concept mapping approach. (United States)

    Drach-Zahavy, Anat; Broyer, Chaya; Dagan, Efrat


    Shared mental models are crucial for constructing mutual understanding of the patient's condition during a clinical handover. Yet scant research, if any, has empirically explored the mental models of the parties involved in a clinical handover. This study aimed to examine the similarities among the mental models of incoming and outgoing nurses, and to test their accuracy by comparing them with the mental models of expert nurses. A cross-sectional study, exploring nurses' mental models via the concept mapping technique, covered 40 clinical handovers. Data were collected via concept mapping of the incoming, outgoing, and expert nurses' mental models (a total of 120 concept maps). Similarity and accuracy indexes for concepts and associations were calculated to compare the different maps. About one fifth of the concepts emerged in both outgoing and incoming nurses' concept maps (concept similarity = 23%±10.6). Concept accuracy indexes were 35%±18.8 for incoming and 62%±19.6 for outgoing nurses' maps. Although incoming nurses absorbed a smaller number of concepts and associations (23% and 12%, respectively), they partially closed the gap (35% and 22%, respectively) relative to the expert nurses' maps. The correlations between concept similarities and the concept accuracy of both incoming and outgoing nurses were significant (r = 0.43). In their maps, outgoing nurses added information concerning the processes enacted during the shift, beyond the expert nurses' gold standard. Two seemingly contradictory processes in the handover were identified: "information loss", captured by the low similarity indexes among the mental models of incoming and outgoing nurses, and "information restoration", based on the accuracy indexes of the mental models of the incoming nurses. Based on mental model theory, we propose possible explanations for these processes and derive implications for how to improve a clinical handover. Copyright © 2017 Elsevier Ltd. All rights reserved.
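
    The concept similarity and accuracy indexes described above can be sketched as simple set-overlap ratios over the concepts in two maps; the study's exact index definitions may differ from this illustration:

```python
def concept_similarity(map_a, map_b):
    """Symmetric overlap: share of concepts appearing in both concept maps
    relative to all concepts mentioned in either map."""
    a, b = set(map_a), set(map_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def concept_accuracy(nurse_map, expert_map):
    """Share of the expert ('gold standard') concepts that a nurse's map
    captures; the denominator is the expert map only."""
    n, e = set(nurse_map), set(expert_map)
    return len(n & e) / len(e) if e else 0.0
```

    With indexes of this form, "information loss" shows up as low incoming-outgoing similarity, while "information restoration" shows up as incoming-nurse accuracy exceeding that similarity.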

  6. Similarities between obesity in pets and children: the addiction model. (United States)

    Pretlow, Robert A; Corbee, Ronald J


    Obesity in pets is a frustrating, major health problem. Obesity in human children is similar. Prevailing theories accounting for the rising obesity rates - for example, poor nutrition and sedentary activity - are being challenged. Obesity interventions in both pets and children have produced modest short-term but poor long-term results. New strategies are needed. A novel theory posits that obesity in pets and children is due to 'treats' and excessive meal amounts given by the 'pet-parent' and child-parent to obtain affection from the pet/child, which enables 'eating addiction' in the pet/child and results in parental 'co-dependence'. Pet-parents and child-parents may even become hostage to the treats/food to avoid the ire of the pet/child. Eating addiction in the pet/child also may be brought about by emotional factors such as stress, independent of parental co-dependence. An applicable treatment for child obesity has been trialled using classic addiction withdrawal/abstinence techniques, as well as behavioural addiction methods, with significant results. Both the child and the parent progress through withdrawal from specific 'problem foods', next from snacking (non-specific foods) and finally from excessive portions at meals (gradual reductions). This approach should adapt well for pets and pet-parents. Pet obesity is more 'pure' than child obesity, in that contributing factors and treatment points are essentially under the control of the pet-parent. Pet obesity might thus serve as an ideal test bed for the treatment and prevention of child obesity, with focus primarily on parental behaviours. Sharing information between the fields of pet and child obesity would be mutually beneficial.

  7. Spatial enhancement of ECG using diagnostic similarity score based lead selective multi-scale linear model. (United States)

    Nallikuzhy, Jiss J; Dandapat, S


    In this work, a new patient-specific approach to enhance the spatial resolution of the ECG is proposed and evaluated. The proposed model transforms a three-lead ECG into a standard twelve-lead ECG, thereby enhancing its spatial resolution. The three leads used for prediction are obtained from the standard twelve-lead ECG. The proposed model takes advantage of the improved inter-lead correlation in the wavelet domain. Since the model is patient-specific, it also selects the optimal predictor leads for a given patient using a lead selection algorithm. The lead selection algorithm is based on a new diagnostic similarity score, which computes the diagnostic closeness between the original and the spatially enhanced leads. Standard closeness measures are used to assess the performance of the model. The similarity in diagnostic information between the original and the spatially enhanced leads is evaluated using various diagnostic measures. Repeatability and diagnosability analyses are performed to quantify the applicability of the model. The proposed model is compared with existing models that transform a subset of the standard twelve-lead ECG into the full standard twelve-lead ECG. From the analysis of the results, it is evident that the proposed model preserves diagnostic information better than the other models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Improvement of training set structure in fusion data cleaning using Time-Domain Global Similarity method

    International Nuclear Information System (INIS)

    Liu, J.; Lan, T.; Qin, H.


    Traditional data cleaning identifies dirty data by classifying original data sequences, which is a class-imbalanced problem since the proportion of incorrect data is much less than the proportion of correct ones for most diagnostic systems in Magnetic Confinement Fusion (MCF) devices. When using machine learning algorithms to classify diagnostic data based on class-imbalanced training set, most classifiers are biased towards the major class and show very poor classification rates on the minor class. By transforming the direct classification problem about original data sequences into a classification problem about the physical similarity between data sequences, the class-balanced effect of Time-Domain Global Similarity (TDGS) method on training set structure is investigated in this paper. Meanwhile, the impact of improved training set structure on data cleaning performance of TDGS method is demonstrated with an application example in EAST POlarimetry-INTerferometry (POINT) system.

  9. Patient Similarity in Prediction Models Based on Health Data: A Scoping Review (United States)

    Sharafoddini, Anis; Dubin, Joel A


    data, wavelet transform and term frequency-inverse document frequency methods were employed to extract predictors. Selecting predictors with potential to highlight special cases and defining new patient similarity metrics were among the gaps identified in the existing literature that provide starting points for future work. Patient status prediction models based on patient similarity and health data offer exciting potential for personalizing and ultimately improving health care, leading to better patient outcomes. PMID:28258046

  10. Hang cleans and hang snatches produce similar improvements in female collegiate athletes (United States)

    Ayers, JL; DeBeliso, M; Sevene, TG


    Olympic weightlifting movements and their variations are believed to be among the most effective ways to improve power, strength, and speed in athletes. This study investigated the effects of two Olympic weightlifting variations (hang cleans and hang snatches), on power (vertical jump height), strength (1RM back squat), and speed (40-yard sprint) in female collegiate athletes. 23 NCAA Division I female athletes were randomly assigned to either a hang clean group or hang snatch group. Athletes participated in two workout sessions a week for six weeks, performing either hang cleans or hang snatches for five sets of three repetitions with a load of 80-85% 1RM, concurrent with their existing, season-specific, resistance training program. Vertical jump height, 1RM back squat, and 40-yard sprint all had a significant, positive improvement from pre-training to post-training in both groups (p≤0.01). However, when comparing the gain scores between groups, there was no significant difference between the hang clean and hang snatch groups for any of the three dependent variables (i.e., vertical jump height, p=0.46; 1RM back squat, p=0.20; and 40-yard sprint, p=0.46). Short-term training emphasizing hang cleans or hang snatches produced similar improvements in power, strength, and speed in female collegiate athletes. This provides strength and conditioning professionals with two viable programmatic options in athletic-based exercises to improve power, strength, and speed. PMID:27601779

  11. Hang cleans and hang snatches produce similar improvements in female collegiate athletes. (United States)

    Ayers, J L; DeBeliso, M; Sevene, T G; Adams, K J


    Olympic weightlifting movements and their variations are believed to be among the most effective ways to improve power, strength, and speed in athletes. This study investigated the effects of two Olympic weightlifting variations (hang cleans and hang snatches), on power (vertical jump height), strength (1RM back squat), and speed (40-yard sprint) in female collegiate athletes. 23 NCAA Division I female athletes were randomly assigned to either a hang clean group or hang snatch group. Athletes participated in two workout sessions a week for six weeks, performing either hang cleans or hang snatches for five sets of three repetitions with a load of 80-85% 1RM, concurrent with their existing, season-specific, resistance training program. Vertical jump height, 1RM back squat, and 40-yard sprint all had a significant, positive improvement from pre-training to post-training in both groups (p≤0.01). However, when comparing the gain scores between groups, there was no significant difference between the hang clean and hang snatch groups for any of the three dependent variables (i.e., vertical jump height, p=0.46; 1RM back squat, p=0.20; and 40-yard sprint, p=0.46). Short-term training emphasizing hang cleans or hang snatches produced similar improvements in power, strength, and speed in female collegiate athletes. This provides strength and conditioning professionals with two viable programmatic options in athletic-based exercises to improve power, strength, and speed.

  12. Improving the measurement of semantic similarity by combining gene ontology and co-functional network: a random walk based approach. (United States)

    Peng, Jiajie; Zhang, Xuanshuo; Hui, Weiwei; Lu, Junya; Li, Qianqian; Liu, Shuhui; Shang, Xuequn


    Gene Ontology (GO) is one of the most popular bioinformatics resources. In the past decade, Gene Ontology-based gene semantic similarity has been used effectively to model gene-to-gene interactions in multiple research areas. However, most existing semantic similarity approaches rely only on GO annotations and structure, or incorporate only local interactions in the co-functional network. This may lead to inaccurate GO-based similarity because of the incomplete GO topology and gene annotations. We present NETSIM2, a new network-based method that allows researchers to measure GO-based gene functional similarities by considering the global structure of the co-functional network through a random walk with restart (RWR)-based method, and by selecting the significant term pairs to reduce noise. Based on Enzyme Commission (EC) number-based groups of yeast and Arabidopsis, evaluation tests show that NETSIM2 can enhance the accuracy of Gene Ontology-based gene functional similarity. Using NETSIM2 as an example, we found that the accuracy of semantic similarities can be significantly improved by effectively incorporating the global gene-to-gene interactions in the co-functional network, especially for species whose gene annotations in GO are far from complete.
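
The core of the NETSIM2 idea, propagating a gene's influence over the whole co-functional network, can be sketched with a plain random walk with restart (RWR); the toy adjacency matrix and restart probability below are illustrative, not values from the paper:

```python
import numpy as np

def random_walk_with_restart(A, seed, restart=0.5, tol=1e-10, max_iter=1000):
    """Steady-state visiting probabilities of a walk on adjacency matrix A
    that jumps back to `seed` with probability `restart` at each step."""
    # Column-normalize A so each column sums to 1 (a transition matrix).
    W = A / A.sum(axis=0, keepdims=True)
    p0 = np.zeros(A.shape[0])
    p0[seed] = 1.0
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p_next

# Toy 4-gene co-functional network (symmetric, unweighted adjacency).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
profile = random_walk_with_restart(A, seed=0)
print(profile)  # global affinity of every gene to gene 0
```

The resulting vector captures global rather than only direct (local) connectivity, which is the property the paper exploits to refine GO-based similarity.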

  13. An improved DPSO with mutation based on similarity algorithm for optimization of transmission lines loading

    International Nuclear Information System (INIS)

    Shayeghi, H.; Mahdavi, M.; Bagheri, A.


    Static transmission network expansion planning (STNEP) plays a principal role in power system planning and should be evaluated carefully. Various methods have been presented to solve the STNEP problem, but only one of them has considered the lines' adequacy rate at the end of the planning horizon, optimizing the problem by discrete particle swarm optimization (DPSO). DPSO is a population-based intelligence algorithm that performs well on large-scale, discrete, non-linear optimization problems such as STNEP. However, as the algorithm runs, the particles become more and more similar and cluster around the best particle in the swarm, causing premature convergence to a local solution. To overcome these drawbacks while considering the lines' adequacy rate, this paper implements expansion planning by merging a line-loading parameter into the STNEP and inserting investment cost into the fitness function constraints, using an improved DPSO algorithm. The improvement introduces a new concept, collectivity, based on the similarity between each particle and the current global best particle in the swarm, which prevents the premature convergence of DPSO around a local solution. The proposed method has been tested on Garver's network and a real transmission network in Iran, and compared with the DPSO-based method for solving the TNEP problem. The results show that, by preventing premature convergence, the proposed improved DPSO-based method increases network adequacy considerably at almost the same expansion cost. The convergence curves of both methods also show that the proposed algorithm solves the STNEP problem more precisely than the DPSO approach.
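
The similarity-triggered diversification that such an improved DPSO relies on can be sketched roughly as follows; the bit-match similarity measure, threshold, and flip rate are illustrative assumptions, not the paper's exact formulation:

```python
import random

def similarity(particle, gbest):
    """Fraction of identical bits between a particle and the global best."""
    matches = sum(p == g for p, g in zip(particle, gbest))
    return matches / len(particle)

def mutate_if_similar(particle, gbest, threshold=0.9, flip_rate=0.2, rng=random):
    """If a particle has clustered onto the global best, flip a few random
    bits to restore diversity and avoid premature convergence."""
    if similarity(particle, gbest) >= threshold:
        particle = [bit ^ 1 if rng.random() < flip_rate else bit
                    for bit in particle]
    return particle

rng = random.Random(42)
gbest = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
clone = list(gbest)                      # a fully converged particle
mutated = mutate_if_similar(clone, gbest, rng=rng)
print(similarity(mutated, gbest))        # below 1.0 after mutation
```

In a full DPSO loop this check would run each generation, so the swarm keeps exploring even after most particles agree with the incumbent best solution.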

  14. Improving the In-Medium Similarity Renormalization Group via approximate inclusion of three-body effects (United States)

    Morris, Titus; Bogner, Scott


    The In-Medium Similarity Renormalization Group (IM-SRG) has been applied successfully to the ground state of closed-shell finite nuclei. Recent work has extended its ability to target excited states of these closed-shell systems via equation-of-motion methods, as well as complete spectra of the whole SD shell via effective shell model interactions. A recent alternative method for solving the IM-SRG equations, based on the Magnus expansion, not only provides a computationally feasible route to producing observables, but also allows for approximate handling of induced three-body forces. Promising results for several systems, including finite nuclei, will be presented and discussed.

  15. Short-Term Power Forecasting Model for Photovoltaic Plants Based on Historical Similarity

    Directory of Open Access Journals (Sweden)

    M. Sonia Terreros-Olarte


    Full Text Available This paper proposes a new model for short-term forecasting of electric energy production in a photovoltaic (PV) plant, called the HIstorical SImilar MIning (HISIMI) model. Its final structure is optimized by a genetic algorithm, using data mining techniques applied to historical cases composed of past forecast values of weather variables, obtained from numerical weather prediction tools, and past electric power production of the PV plant. The HISIMI model supplies spot power forecasts together with the uncertainty, or probabilities, associated with those spot values, providing users with information beyond that of traditional forecasting models for PV plants. Such probabilities enable analysis and evaluation of the risk associated with spot forecasts, for example in offers of energy sale for electricity markets. In an illustrative example for a real-life grid-connected PV plant, whose actual power output shows high intra-hour variability, spot forecasts from the HISIMI model for day-ahead horizons improved on those of two other spot forecasting models, a persistence model and an artificial neural network model.

  16. Promoting similarity of model sparsity structures in integrative analysis of cancer genetic data. (United States)

    Huang, Yuan; Liu, Jin; Yi, Huangdi; Shia, Ben-Chang; Ma, Shuangge


    In profiling studies, the analysis of a single dataset often leads to unsatisfactory results because of the small sample size. Multi-dataset analysis utilizes information from multiple independent datasets and outperforms single-dataset analysis. Among the available multi-dataset analysis methods, integrative analysis methods aggregate and analyze raw data and outperform meta-analysis methods, which analyze multiple datasets separately and then pool summary statistics. In this study, we conduct integrative analysis and marker selection under the heterogeneity structure, which allows different datasets to have overlapping but not necessarily identical sets of markers. Under certain scenarios, it is reasonable to expect some similarity of identified marker sets, or equivalently, similarity of model sparsity structures, across multiple datasets. However, the existing methods do not have a mechanism to explicitly promote such similarity. To tackle this problem, we develop a sparse boosting method. This method uses a BIC/HDBIC criterion to select weak learners in boosting and encourages sparsity. A new penalty is introduced to promote the similarity of model sparsity structures across datasets. The proposed method has an intuitive formulation and is broadly applicable and computationally affordable. In numerical studies, we analyze right-censored survival data under the accelerated failure time model. Simulation shows that the proposed method outperforms alternative boosting and penalization methods with more accurate marker identification. The analysis of three breast cancer prognosis datasets shows that the proposed method can identify marker sets with increased similarity across datasets and improved prediction performance. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Collaborative Filtering Recommendation Based on Trust Model with Fused Similar Factor

    Directory of Open Access Journals (Sweden)

    Ye Li


    Full Text Available Recommender systems benefit e-commerce sites by providing customers with product information and recommendations, and they are now widely used in many fields. In an era of information explosion, the key challenge of a recommender system is to extract valid information from a tremendous amount of data and produce high-quality recommendations. However, when facing large amounts of information, the traditional collaborative filtering algorithm usually suffers from a high degree of sparseness, which ultimately leads to low-accuracy recommendations. To tackle this issue, we propose a novel algorithm, Collaborative Filtering Recommendation Based on Trust Model with Fused Similar Factor, which builds on the trust model and combines it with user similarity. The algorithm takes into account the degree of interest overlap between two users and outperforms recommendation based on the trust model alone in terms of Precision, Recall, Diversity and Coverage. Additionally, the proposed model can effectively improve the efficiency of the collaborative filtering algorithm and achieve high performance.
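
A minimal sketch of fusing a trust value with rating similarity and interest overlap, in the spirit of the proposed algorithm (the fusion weights and functional form are assumptions, not the authors' definitions):

```python
import math

def cosine(u, v):
    """Cosine similarity over the items two users have both rated
    (each user is a dict: item -> rating)."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(u[i] ** 2 for i in common))
           * math.sqrt(sum(v[i] ** 2 for i in common)))
    return num / den

def interest_overlap(u, v):
    """Jaccard overlap of the item sets two users have rated."""
    return len(set(u) & set(v)) / len(set(u) | set(v))

def fused_similarity(u, v, trust, alpha=0.5):
    """Weighted fusion of trust with rating similarity damped by
    interest overlap (alpha and the form are illustrative)."""
    return alpha * trust + (1 - alpha) * cosine(u, v) * interest_overlap(u, v)

alice = {"a": 5, "b": 3, "c": 4}
bob   = {"a": 4, "b": 2, "d": 5}
print(fused_similarity(alice, bob, trust=0.8))
```

Damping the rating similarity by interest overlap is one simple way to encode the paper's point that two users who rate the same few items alike may still have very different tastes overall.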

  18. Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure. (United States)

    Zhang, Wen; Xiao, Fan; Li, Bin; Zhang, Siguang


    Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) has been proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is often criticized as having low discriminative power for representing documents, although its representative quality has been validated. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. Firstly, we survey existing linear algebra methods for LSI, including both SVD-based and non-SVD-based methods. Secondly, we propose SVD on clusters for LSI and theoretically explain that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved in SVD on clusters. Moreover, we develop updating processes to fold new documents and terms into a matrix decomposed by SVD on clusters. Thirdly, two corpora, a Chinese corpus and an English corpus, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of the interdocument similarity measure in comparison with other SVD-based LSI methods.
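
The LSI mechanics underlying the paper, truncated SVD of a term-document matrix followed by cosine similarity in the latent space, can be illustrated in a few lines (toy matrix; plain LSI rather than the authors' SVD-on-clusters variant):

```python
import numpy as np

# Term-document matrix: rows = terms, columns = documents.
# Documents 0-1 share one vocabulary block, documents 2-3 another.
X = np.array([[2, 1, 0, 0],
              [1, 2, 0, 0],
              [0, 0, 1, 3],
              [0, 0, 3, 1]], dtype=float)

# Truncated SVD keeps the k strongest latent dimensions (here k = 2).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
docs_latent = (np.diag(s[:k]) @ Vt[:k]).T   # document vectors in latent space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(docs_latent[0], docs_latent[1]))  # same topic block: close to 1
print(cos(docs_latent[0], docs_latent[2]))  # different blocks: close to 0
```

SVD on clusters applies this same machinery per document cluster; the projection step shown here is the part it shares with classical LSI.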

  19. MAC/FAC: A Model of Similarity-Based Retrieval. Technical Report #59. (United States)

    Forbus, Kenneth D.; And Others

    A model of similarity-based retrieval is presented that attempts to capture the following seemingly contradictory psychological phenomena: (1) structural commonalities are weighed more heavily than surface commonalities in soundness or similarity judgments (when both members are present); (2) superficial similarity is more important in retrieval from…

  20. Embedding Term Similarity and Inverse Document Frequency into a Logical Model of Information Retrieval. (United States)

    Losada, David E.; Barreiro, Alvaro


    Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…

  1. A new k-epsilon model consistent with Monin-Obukhov similarity theory

    DEFF Research Database (Denmark)

    van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.


    A new k-ε model is introduced that is consistent with Monin–Obukhov similarity theory (MOST). The proposed k-ε model is compared with another k-ε model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-ε model is not consistent with ...

  2. Similarity of models of the observed navigational situation as multicriteria objects with probabilistic priorities

    Directory of Open Access Journals (Sweden)

    Popov Yu.A.


    Full Text Available A variant of calculating the similarity relation between two models of a navigational situation, treated as multicriteria objects with probabilistic priorities, is considered. The priorities were obtained with the help of the vessel's observation system.

  3. Similarity-based search of model organism, disease and drug effect phenotypes

    KAUST Repository

    Hoehndorf, Robert


    Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions, druggable therapeutic targets, and determination of pathogenicity. Results: We have developed PhenomeNET 2, a system that enables similarity-based searches over a large repository of phenotypes in real-time. It can be used to identify strains of model organisms that are phenotypically similar to human patients, diseases that are phenotypically similar to model organism phenotypes, or drug effect profiles that are similar to the phenotypes observed in a patient or model organism. PhenomeNET 2 is available at Conclusions: Phenotype-similarity searches can provide a powerful tool for the discovery and investigation of molecular mechanisms underlying an observed phenotypic manifestation. PhenomeNET 2 facilitates user-defined similarity searches and allows researchers to analyze their data within a large repository of human, mouse and rat phenotypes.
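
A drastically simplified stand-in for ontology-based phenotype similarity, Jaccard overlap of ancestor closures, conveys the idea behind such searches (the toy ontology and the choice of Jaccard are illustrative; PhenomeNET uses a more sophisticated semantic similarity measure):

```python
def ancestors(term, parents):
    """All ancestors of a term (including itself) in a DAG,
    given a child -> list-of-parents map."""
    seen = {term}
    stack = [term]
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def profile_similarity(terms_a, terms_b, parents):
    """Jaccard similarity of the ancestor closures of two phenotype
    profiles: shared ancestors reward general as well as exact matches."""
    ca = set().union(*(ancestors(t, parents) for t in terms_a))
    cb = set().union(*(ancestors(t, parents) for t in terms_b))
    return len(ca & cb) / len(ca | cb)

# Toy phenotype ontology: child -> parents.
parents = {"small heart": ["abnormal heart"],
           "abnormal heart": ["phenotype"],
           "long tail": ["abnormal tail"],
           "abnormal tail": ["phenotype"]}
print(profile_similarity({"small heart"}, {"abnormal heart"}, parents))
```

Because closures share "abnormal heart" and "phenotype", a mouse annotated with "small heart" still scores well against a patient annotated only with the more general term, which is exactly the behavior cross-species phenotype search needs.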

  4. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Yifei Chen


    Full Text Available Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used for reducing the dimensionality of features to speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measure of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. Hence, first we design a similarity measure between the context information to take word cooccurrences and phrase chunks around the features into account. Then we introduce the similarity of context information to the importance measure of the features to substitute the document and term frequency. Hence we propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.
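
The notion of comparing features by the similarity of their surrounding contexts can be sketched with cosine similarity over co-occurrence windows (the window size, corpus, and scoring are illustrative, not the authors' exact design):

```python
import math
from collections import Counter

def context_vector(word, sentences, window=2):
    """Bag of words co-occurring within +/-window positions of `word`."""
    ctx = Counter()
    for s in sentences:
        toks = s.lower().split()
        for i, t in enumerate(toks):
            if t == word:
                ctx.update(toks[max(0, i - window):i]
                           + toks[i + 1:i + 1 + window])
    return ctx

def context_similarity(w1, w2, sentences):
    """Cosine similarity between two words' context vectors; a feature's
    importance score could then sum its similarity to known cue words."""
    a, b = context_vector(w1, sentences), context_vector(w2, sentences)
    num = sum(a[k] * b[k] for k in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

corpus = ["protein A binds protein B strongly",
          "protein A interacts with protein B",
          "kinase C binds substrate D strongly"]
print(context_similarity("binds", "interacts", corpus))
```

Two interaction verbs that appear in similar protein-name contexts score highly even if they never co-occur in a document, which is the information plain document/term frequency cannot capture.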

  5. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection. (United States)

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing


    Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used for reducing the dimensionality of features to speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measure of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. Hence, first we design a similarity measure between the context information to take word cooccurrences and phrase chunks around the features into account. Then we introduce the similarity of context information to the importance measure of the features to substitute the document and term frequency. Hence we propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.

  6. A study on the quantitative model of human response time using the amount and the similarity of information

    International Nuclear Information System (INIS)

    Lee, Sung Jin


    The mental capacity to retain or recall information, or memory, is related to human performance during information processing. Although a large number of studies have been carried out on human performance, little is known about the similarity effect. The purpose of this study was to propose and validate a quantitative, predictive model of human response time in the user interface, built on the basic concepts of information amount, similarity, and degree of practice. Human performance is difficult to explain by similarity or information amount alone. There were two difficulties: constructing a quantitative model of human response time and validating the proposed model experimentally. A quantitative model based on Hick's law, the law of practice, and similarity theory was developed. The model was validated under various experimental conditions by measuring participants' response times in a computer-based display environment. Human performance improved with the degree of similarity and practice in the user interface. We also found an age-related effect: performance degraded with increasing age. The proposed model may be useful for training operators who will handle such interfaces and for predicting human performance under changes in system design
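
Two of the model's ingredients, Hick's law for information amount and a power law of practice, can be combined in a simple sketch (the coefficients are illustrative, not the fitted values from the study):

```python
import math

def response_time(n_alternatives, a=0.2, b=0.15, practice_trials=1, c=0.1):
    """Hick's law RT = a + b * log2(n + 1), attenuated by a power law of
    practice; a, b, c are hypothetical coefficients for illustration."""
    hick = a + b * math.log2(n_alternatives + 1)
    return hick * practice_trials ** (-c)

for n in (1, 3, 7):
    print(f"{n} alternatives: {response_time(n):.3f} s")
print(f"7 alternatives after 100 trials: "
      f"{response_time(7, practice_trials=100):.3f} s")
```

Response time grows logarithmically with the number of alternatives and shrinks with practice; a similarity term, as studied in the paper, would further modulate these predictions.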

  7. High and low contact frequency cardiac rehabilitation programmes elicit similar improvements in cardiorespiratory fitness and cardiovascular risk factors. (United States)

    LaHaye, Stephen A; Lacombe, Shawn P; Koppikar, Sahil; Lun, Grace; Parsons, Trisha L; Hopkins-Rosseel, Diana


    Cardiac rehabilitation (CR) is a proven intervention that substantially improves physical health and decreases death and disability following a cardiovascular event. Traditional CR typically involves 36 on-site exercise sessions spanning a 12-week period. To date, the optimal dose of CR has yet to be determined. This study compared a high contact frequency CR programme (HCF, 34 on-site sessions) with a low contact frequency CR programme (LCF, eight on-site sessions) of equal duration (4 months). A total of 961 low-risk (RARE score) cardiac patients participated; cardiovascular risk factors were measured on admission and discharge. Similar proportions of patients completed HCF (n = 346) and LCF (n = 351) (p = 0.398). Patients who were less fit (<8 METs) were more likely to drop out of the LCF group, while younger patients (<60 years) were more likely to drop out of the HCF group. Both groups experienced similar reductions in weight (-2.3 vs. -2.4 kg; p = 0.779) and improvements in cardiorespiratory fitness (+1.5 vs. +1.4 METs; p = 0.418). Patients in the LCF programme achieved equivalent results to those in the HCF programme. Certain subgroups of patients, however, may benefit from participation in an HCF programme, including patients who are predisposed to prematurely discontinuing the programme and those who would benefit from increased monitoring. The LCF model can be employed as an alternative option to widen access and participation for patients who are unable to attend HCF programmes due to distance or time limitations. © The Author(s) 2013

  8. On Measuring Process Model Similarity based on High-level Change Operations

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, Andreas


    For various applications there is the need to compare the similarity between two process models. For example, given the as-is and to-be models of a particular business process, we would like to know how much they differ from each other and how we can efficiently transform the as-is to the to-be

  9. On Measuring Process Model Similarity Based on High-Level Change Operations

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, Andreas; Li, Qing


    For various applications there is the need to compare the similarity between two process models. For example, given the as-is and to-be models of a particular business process, we would like to know how much they differ from each other and how we can efficiently transform the as-is to the to-be

  10. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing. (United States)

    Leong, Siow Hoo; Ong, Seng Huat


    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering, with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF suggests domain adaptation, that is, changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index.

  11. Detecting Local Residue Environment Similarity for Recognizing Near-Native Structure Models (United States)

    Kim, Hyungrae; Kihara, Daisuke


    We developed a new representation of local amino acid environments in protein structures called the Side-chain Depth Environment (SDE). An SDE defines a local structural environment of a residue considering the coordinates and the depth of amino acids that locate in the vicinity of the side-chain centroid of the residue. SDEs are general enough that similar SDEs are found in protein structures with globally different folds. Using SDEs, we developed a procedure called PRESCO (Protein Residue Environment SCOre) for selecting native or near-native models from a pool of computational models. The procedure searches similar residue environments observed in a query model against a set of representative native protein structures to quantify how native-like SDEs in the model are. When benchmarked on commonly used computational model datasets, our PRESCO compared favorably with the other existing scoring functions in selecting native and near-native models. PMID:25132526

  12. On two-layer models and the similarity functions for the PBL (United States)

    Brown, R. A.


    An operational Planetary Boundary Layer model which employs similarity principles and two-layer patching to provide state-of-the-art parameterization for the PBL flow is used to study the popularly used similarity functions, A and B. The expected trends with stratification are shown. The effects of baroclinicity, secondary flow, humidity, latitude, surface roughness variation and choice of characteristic height scale are discussed.
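
The surface-layer similarity relations such models patch onto the outer layer can be illustrated with the Monin-Obukhov wind profile for stable stratification (the parameter values are illustrative):

```python
import math

def wind_profile(z, u_star=0.4, z0=0.05, L=200.0, kappa=0.4):
    """Mean wind speed from Monin-Obukhov similarity theory under stable
    stratification (L > 0), using the standard log-linear form
    u(z) = (u*/kappa) * (ln(z/z0) + 5*z/L)."""
    return (u_star / kappa) * (math.log(z / z0) + 5.0 * z / L)

for z in (10, 50, 100):
    print(f"z = {z:3d} m: u = {wind_profile(z):.2f} m/s")
```

The stability correction term 5z/L steepens the profile relative to the neutral logarithmic law; the similarity functions A and B studied in the entry arise when this surface-layer form is matched to the geostrophic wind above.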

  13. Quality assessment of protein model-structures based on structural and functional similarities. (United States)

    Konopka, Bogumil M; Nebel, Jean-Christophe; Kotulska, Malgorzata


    Experimental determination of protein 3D structures is expensive, time-consuming and sometimes impossible. The gap between the number of protein structures deposited in the World Wide Protein Data Bank and the number of sequenced proteins constantly broadens. Computational modeling is deemed to be one of the ways to deal with the problem. Although protein 3D structure prediction is a difficult task, many tools are available. These tools can model a structure from a sequence or from partial structural information, e.g. contact maps. Consequently, biologists have the ability to generate automatically a putative 3D structure model of any protein. However, the main issue becomes evaluation of the model quality, which is one of the most important challenges of structural biology. GOBA (Gene Ontology-Based Assessment) is a novel protein model quality assessment program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high quality model is expected to be structurally similar to proteins functionally similar to the prediction target. Whereas DALI is used to measure structure similarity, protein functional similarity is quantified using the standardized and hierarchical description of proteins provided by Gene Ontology, combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures. One is a single-model quality assessment method; the other is a modification of it that provides a relative measure of model quality. Exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. The validation shows that the method is able to discriminate between good and bad model-structures. The best of the tested GOBA scores achieved 0.74 and 0.8 as a mean Pearson correlation to the observed quality of models in our CASP8 and CASP9-based validation sets.
GOBA also obtained the best result for two targets of CASP8, and

  14. Improving practical atmospheric dispersion models

    International Nuclear Information System (INIS)

    Hunt, J.C.R.; Hudson, B.; Thomson, D.J.


    The new generation of practical atmospheric dispersion models (for short range, ≤ 30 km) is based on dispersion science and boundary layer meteorology that have widespread international acceptance. In addition, recent improvements in computing power and the widespread availability of small powerful computers make it possible to have new regulatory models which are more complex than the previous generation based on charts and simple formulae. This paper describes the basis of these models and how they have developed. Such models are needed to satisfy the urgent public demand for sound, justifiable and consistent environmental decisions. For example, it is preferable that the same models be used to simulate dispersion in different industries; in many countries at present, different models are used for emissions from nuclear and fossil fuel power stations. The models should not be so simple as to be suspect, but neither should they be too complex for widespread use; for example, at public inquiries in Germany, where simple models are mandatory, it is becoming usual to cite results from highly complex computational models because the simple models are not credible. This paper is written in a schematic style with an emphasis on tables and diagrams. (au) (22 refs.)
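
The physical core shared by most practical short-range dispersion models is the Gaussian plume solution, which can be sketched as follows (the linear growth of the dispersion coefficients with distance is a simplifying assumption, not one of the regulatory parameterizations discussed):

```python
import math

def gaussian_plume(x, y, z, Q=1.0, u=5.0, H=50.0, a=0.08, b=0.06):
    """Gaussian plume concentration (g/m^3) for a continuous point source of
    strength Q (g/s) at effective height H (m), wind speed u (m/s), at a
    point x m downwind, y m crosswind, z m above ground. Ground reflection
    is handled with an image source."""
    sy, sz = a * x, b * x            # illustrative sigma_y, sigma_z growth
    lateral = math.exp(-y ** 2 / (2 * sy ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2 * sz ** 2))
                + math.exp(-(z + H) ** 2 / (2 * sz ** 2)))  # image source
    return Q / (2 * math.pi * u * sy * sz) * lateral * vertical

# Centerline ground-level concentration at increasing downwind distances.
for x in (200, 500, 1000, 2000):
    print(f"x = {x:4d} m: {gaussian_plume(x, 0.0, 0.0):.2e} g/m^3")
```

The ground-level centerline concentration rises to a maximum some distance downwind of an elevated source and then decays, the qualitative behavior any practical model must reproduce; modern models refine this picture with boundary-layer-dependent dispersion coefficients.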

  15. Similarities and differences in gastrointestinal physiology between neonates and adults: a physiologically based pharmacokinetic modeling perspective. (United States)

    Yu, Guo; Zheng, Qing-Shan; Li, Guo-Fu


    Physiologically based pharmacokinetic (PBPK) modeling holds great promise for anticipating quantitative changes in pharmacokinetics in pediatric populations relative to adults, and has served as a useful tool in regulatory reviews. Although the availability of specialized software for PBPK modeling has facilitated the widespread application of this approach in regulatory submissions, challenges in the implementation and interpretation of pediatric PBPK models remain great, and controversies and knowledge gaps persist regarding neonatal development of the gastrointestinal tract. This commentary highlights the similarities and differences in gastrointestinal pH and transit time between neonates and adults from a PBPK modeling perspective. Understanding the similarities and differences in these physiological parameters governing oral absorption would promote good practice in the use of pediatric PBPK modeling to assess oral exposure and pharmacokinetics in neonates.

  16. LDA-Based Unified Topic Modeling for Similar TV User Grouping and TV Program Recommendation. (United States)

    Pyo, Shinjee; Kim, Eunhui; Kim, Munchurl


    Social TV is a social media service via TV and social networks through which TV users exchange their experiences about TV programs that they are viewing. For social TV service, two technical aspects are envisioned: grouping of similar TV users to create social TV communities and recommending TV programs based on group and personal interests for personalizing TV. In this paper, we propose a unified topic model based on grouping of similar TV users and recommending TV programs as a social TV service. The proposed unified topic model employs two latent Dirichlet allocation (LDA) models. One is a topic model of TV users, and the other is a topic model of the description words for viewed TV programs. The two LDA models are then integrated via a topic proportion parameter for TV programs, which enforces the grouping of similar TV users and associated description words for watched TV programs at the same time in a unified topic modeling framework. The unified model identifies the semantic relation between TV user groups and TV program description word groups so that more meaningful TV program recommendations can be made. The unified topic model also overcomes an item ramp-up problem such that new TV programs can be reliably recommended to TV users. Furthermore, from the topic model of TV users, TV users with similar tastes can be grouped as topics, which can then be recommended as social TV communities. To verify our proposed method of unified topic-modeling-based TV user grouping and TV program recommendation for social TV services, in our experiments, we used real TV viewing history data and electronic program guide data from a seven-month period collected by a TV poll agency. The experimental results show that the proposed unified topic model yields an average 81.4% precision for 50 topics in TV program recommendation and its performance is an average of 6.5% higher than that of the topic model of TV users only. 
For TV user prediction with new TV programs, the average

  17. Genome-Wide Expression Profiling of Five Mouse Models Identifies Similarities and Differences with Human Psoriasis (United States)

    Swindell, William R.; Johnston, Andrew; Carbajal, Steve; Han, Gangwen; Wohn, Christian; Lu, Jun; Xing, Xianying; Nair, Rajan P.; Voorhees, John J.; Elder, James T.; Wang, Xiao-Jing; Sano, Shigetoshi; Prens, Errol P.; DiGiovanni, John; Pittelkow, Mark R.; Ward, Nicole L.; Gudjonsson, Johann E.


    Development of a suitable mouse model would facilitate the investigation of pathomechanisms underlying human psoriasis and would also assist in development of therapeutic treatments. However, while many psoriasis mouse models have been proposed, no single model recapitulates all features of the human disease, and standardized validation criteria for psoriasis mouse models have not been widely applied. In this study, whole-genome transcriptional profiling is used to compare gene expression patterns manifested by human psoriatic skin lesions with those that occur in five psoriasis mouse models (K5-Tie2, imiquimod, K14-AREG, K5-Stat3C and K5-TGFbeta1). While the cutaneous gene expression profiles associated with each mouse phenotype exhibited statistically significant similarity to the expression profile of psoriasis in humans, each model displayed distinctive sets of similarities and differences in comparison to human psoriasis. For all five models, correspondence to the human disease was strong with respect to genes involved in epidermal development and keratinization. Immune and inflammation-associated gene expression, in contrast, was more variable between models as compared to the human disease. These findings support the value of all five models as research tools, each with identifiable areas of convergence to and divergence from the human disease. Additionally, the approach used in this paper provides an objective and quantitative method for evaluation of proposed mouse models of psoriasis, which can be strategically applied in future studies to score strengths of mouse phenotypes relative to specific aspects of human psoriasis. PMID:21483750

  18. A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure

    Directory of Open Access Journals (Sweden)

    Yang Zhou


    Full Text Available Generating visual place cells (VPCs) is an important task in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and the existing methods for generating VPCs, a model of generating visual place cells based on environment perception and a similarity measure is abstracted in this paper. The VPC generation process is divided into three phases: environment perception, similarity measurement, and recruitment of a new place cell. Following this process, a specific method for generating VPCs is presented. External reference landmarks are obtained from local invariant characteristics of images, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and the VPCs’ firing fields can be adjusted flexibly by changing the adjustment factor of the firing field (AFFF) and the firing rate’s threshold (FRT).
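The abstract names the ingredients of the measure (Euclidean distance fed through a Gaussian) without giving its exact form. Below is a minimal sketch of such a measure and of the recruitment test it drives; the function names, the `sigma` width, and the threshold value stand in for the paper's AFFF and FRT parameters and are assumptions, not taken from the paper.

```python
import math

def similarity(feature_a, feature_b, sigma=1.0):
    """Gaussian of the Euclidean distance between two landmark feature
    vectors: identical inputs score 1.0, and the score decays smoothly
    toward 0 as the distance grows (sigma controls the field width)."""
    d = math.dist(feature_a, feature_b)
    return math.exp(-d * d / (2.0 * sigma ** 2))

def should_recruit(observation, place_cells, threshold=0.6, sigma=1.0):
    """Recruit a new visual place cell when no existing cell's stored
    feature vector is similar enough to the current observation."""
    return all(similarity(observation, c, sigma) < threshold for c in place_cells)
```

Raising `sigma` widens each cell's firing field, while raising `threshold` forces denser recruitment of cells, mirroring the adjustable AFFF and FRT described in the abstract.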

  19. Spatio-Temporal Evaluation and Comparison of MM5 Model using Similarity Algorithm

    Directory of Open Access Journals (Sweden)

    N. Siabi


    Full Text Available Introduction: The temporal and spatial variation of meteorological and environmental variables is very important. These variations can be predicted by numerical prediction models over time and at different locations, and can be presented as spatial zoning maps using interpolation methods such as geostatistics (16, 6). However, such maps can only be compared visually, qualitatively, and one variable at a time, for a limited number of maps (15). To resolve this problem the similarity algorithm is used. This algorithm is a method for the simultaneous comparison of large numbers of data (18). Numerical prediction models such as MM5 have been used in various studies (10, 22, 23), but little research has been done to quantitatively compare the spatio-temporal similarity of model output with real data. The purpose of this paper is to integrate geostatistical techniques with the similarity algorithm in order to compare the spatial and temporal MM5 model predictions with real data. Materials and Methods: The study area is the north east of Iran, spanning 55 to 61 degrees of longitude and 30 to 38 degrees of latitude. Monthly and annual observed temperature and precipitation data for the period 1990-2010 were obtained from the Meteorological Agency and the Department of Energy. MM5 model data, with a spatial resolution of 0.5 × 0.5 degrees, were downloaded from the NASA website (5). GS+ and ArcGIS software were used to produce a map of each variable. We used the multivariate methods of co-kriging and kriging with an external drift, applying topography and height as secondary variables via a Digital Elevation Model (6, 12, 14). The standardization and similarity algorithms (9, 11) were then applied, via programs written in MATLAB, to each map grid point. The spatial and temporal similarities between the data collections and the model results were expressed as F values, where a value below 0.2 indicates good similarity and a value above 0.5 indicates very poor similarity. The results were plotted on maps in MATLAB.

  20. Experimental Study of Dowel Bar Alternatives Based on Similarity Model Test

    Directory of Open Access Journals (Sweden)

    Chichun Hu


    Full Text Available In this study, a small-scale accelerated loading test based on similarity theory and the Accelerated Pavement Analyzer was developed to evaluate dowel bars with different materials and cross-sections. A jointed concrete specimen containing a single dowel was designed as the scaled model for the test, and each specimen was subjected to 864 thousand loading cycles. Deflections between the jointed slabs were measured with dial indicators, and strains in the dowel bars were monitored with strain gauges. The load transfer efficiency, differential deflection, and dowel-concrete bearing stress for each case were calculated from these measurements. The test results indicated that the effect of the dowel modulus on load transfer efficiency can be characterized by the similarity model test developed in this study. Moreover, the round steel dowel was found to perform similarly to the larger FRP dowel, and the elliptical dowel can be preferentially considered in practice.

  1. Bianchi VI{sub 0} and III models: self-similar approach

    Energy Technology Data Exchange (ETDEWEB)

    Belinchon, Jose Antonio, E-mail: abelcal@ciccp.e [Departamento de Fisica, ETS Arquitectura, UPM, Av. Juan de Herrera 4, Madrid 28040 (Spain)


    We study several cosmological models with Bianchi VI{sub 0} and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are quite restrictive with respect to the equation of state. We also study a perfect fluid model with time-varying constants G and LAMBDA. As in other models studied, we find that the behaviours of G and LAMBDA are related: if G behaves as a growing function of time then LAMBDA is a positive decreasing function of time, but if G is decreasing then LAMBDA{sub 0} is negative. We end by studying a massive cosmic string model, with special emphasis on calculating the numerical values of the equations of state. We show that there is no self-similar solution for a string model with time-varying constants.

  2. Recent improvements in CRACUK modelling

    International Nuclear Information System (INIS)

    Egan, M.J.; Nixon, W.


    The CRACUK computer code is a revised version of the US consequence modelling code CRAC2, adapted to suit UK applications. This report describes in detail the modifications to the dosimetric models contained within the code, and assesses their influence upon the predicted consequences of postulated atmospheric releases following severe light water and fast reactor accidents. The impact for such source terms is not marked, when compared with existing uncertainty bands, although, for the fast reactor case, the distribution of predicted cancers among the different types is significantly affected. Nevertheless, the improvements lend confidence to the use of CRACUK for the assessment of accidents for a wider range of nuclear plant. (author)

  3. Numerical model of a non-steady atmospheric planetary boundary layer, based on similarity theory

    DEFF Research Database (Denmark)

    Zilitinkevich, S.S.; Fedorovich, E.E.; Shabalova, M.V.


    A numerical model of a non-stationary atmospheric planetary boundary layer (PBL) over a horizontally homogeneous flat surface is derived on the basis of similarity theory. The two most typical turbulence regimes are reproduced: one corresponding to a convectively growing PBL and another correspon...

  4. Environmental niche models for riverine desert fishes and their similarity according to phylogeny and functionality (United States)

    Whitney, James E.; Whittier, Joanna B.; Paukert, Craig


    Environmental filtering and competitive exclusion are hypotheses frequently invoked in explaining species' environmental niches (i.e., geographic distributions). A key assumption in both hypotheses is that the functional niche (i.e., species traits) governs the environmental niche, but few studies have rigorously evaluated this assumption. Furthermore, phylogeny could be associated with these hypotheses if it is predictive of functional niche similarity via phylogenetic signal or convergent evolution, or of environmental niche similarity through phylogenetic attraction or repulsion. The objectives of this study were to investigate relationships between environmental niches, functional niches, and phylogenies of fishes of the Upper (UCRB) and Lower (LCRB) Colorado River Basins of southwestern North America. We predicted that functionally similar species would have similar environmental niches (i.e., environmental filtering) and that closely related species would be functionally similar (i.e., phylogenetic signal) and possess similar environmental niches (i.e., phylogenetic attraction). Environmental niches were quantified using environmental niche modeling, and functional similarity was determined using functional trait data. Nonnatives in the UCRB provided the only support for environmental filtering, which resulted from several warmwater nonnatives having dam number as a common predictor of their distributions, whereas several cool- and coldwater nonnatives shared mean annual air temperature as an important distributional predictor. Phylogenetic signal was supported for both natives and nonnatives in both basins. Lastly, phylogenetic attraction was only supported for native fishes in the LCRB and for nonnative fishes in the UCRB. Our results indicated that functional similarity was heavily influenced by evolutionary history, but that phylogenetic relationships and functional traits may not always predict the environmental distribution of species. However, the

  5. Predictive modeling of human perception subjectivity: feasibility study of mammographic lesion similarity (United States)

    Xu, Songhua; Hudson, Kathleen; Bradley, Yong; Daley, Brian J.; Frederick-Dyer, Katherine; Tourassi, Georgia


    The majority of clinical content-based image retrieval (CBIR) studies disregard human perception subjectivity, aiming to duplicate the consensus expert assessment of visual similarity on example cases. The purpose of our study is twofold: (i) to better discern the extent of human perception subjectivity when assessing the visual similarity of two images with similar semantic content, and (ii) to explore the feasibility of personalized predictive modeling of visual similarity. We conducted a human observer study in which five observers of various expertise were shown ninety-nine triplets of mammographic masses with similar BI-RADS descriptors and were asked to select the two masses with the highest visual relevance. Pairwise agreement ranged between poor and fair among the five observers, as assessed by the kappa statistic. The observers' self-consistency rate was remarkably low, based on repeated questions where either the orientation or the presentation order of a mass was changed. Various machine learning algorithms were explored to determine whether they can predict each observer's personalized selection using textural features. Many algorithms performed with accuracy that exceeded each observer's self-consistency rate, as determined using a cross-validation scheme. This accuracy was statistically significantly higher than would be expected by chance alone (two-tailed p-values ranged between 0.001 and 0.01 for all five personalized models). The study confirmed that human perception subjectivity should be taken into account when developing CBIR-based medical applications.
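The kappa statistic used above corrects raw inter-observer agreement for the agreement expected by chance. A self-contained sketch of Cohen's kappa for two raters (the function name and any example labels are illustrative, not from the study):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: observed agreement between two raters on the same
    items, corrected for the agreement expected by chance from each
    rater's label frequencies.  1.0 = perfect, ~0 = chance-level."""
    n = len(ratings_a)
    labels = set(ratings_a) | set(ratings_b)
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_chance = sum((ratings_a.count(lab) / n) * (ratings_b.count(lab) / n)
                   for lab in labels)
    return (p_observed - p_chance) / (1.0 - p_chance)
```

Values between roughly 0.21 and 0.40 are conventionally read as "fair" agreement, the upper end of the range reported for the five observers.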

  6. Improvements in ECN Wake Model

    Energy Technology Data Exchange (ETDEWEB)

    Versteeg, M.C. [University of Twente, Enschede (Netherlands); Ozdemir, H.; Brand, A.J. [ECN Wind Energy, Petten (Netherlands)


    Wind turbines extract energy from the flow field, so that the flow in the wake of a wind turbine contains less energy and more turbulence than the undisturbed flow, leading to less energy extraction by the downstream turbines. In large wind farms, most turbines are located in the wake of one or more other turbines, causing the flow characteristics felt by these turbines to differ considerably from the free-stream flow conditions. The most important wake effect is generally considered to be the lower wind speed behind the turbine(s), since this decreases the energy production and as such the economic performance of a wind farm. The overall loss of a wind farm depends strongly on the conditions and the lay-out of the farm, but it can be in the order of 5-10%. Apart from the loss in energy production, an additional wake effect is the increase in turbulence intensity, which leads to higher fatigue loads. In this sense it becomes important to understand the details of wake behavior in order to improve and/or optimize a wind farm layout. Within this study, improvements are presented for the existing ECN wake model, which forms the fundamental basis of ECN's FarmFlow wind farm wake simulation tool. The outline of this paper is as follows: first, the governing equations of the ECN wake farm model are presented. Then the near-wake modeling is discussed, the results are compared with the original near-wake modeling and with EWTW (ECN Wind Turbine Test Site Wieringermeer) data, and the results obtained for various near-wake implementation cases are shown. The details of the atmospheric stability model are given, and the comparison with the solution obtained with the original surface layer model and with the available EWTW measurements is presented. Finally, the conclusions are summarized.

  7. An optimization model for improving highway safety

    Directory of Open Access Journals (Sweden)

    Promothes Saha


    Full Text Available This paper developed a traffic safety management system (TSMS) for improving safety on county paved roads in Wyoming. A TSMS is a strategic and systematic process for improving the safety of a roadway network. When funding is limited, it is important to identify the combination of safety improvement projects that provides the most benefit to society in terms of crash reduction. The factors included in the proposed optimization model are the annual safety budget, roadway inventory, roadway functional classification, historical crashes, safety improvement countermeasures, the costs and crash reduction factors (CRFs) associated with those countermeasures, and average daily traffic (ADT). This paper demonstrated how the proposed model can identify the combination of safety improvement projects that maximizes the safety benefit in terms of reducing overall crash frequency. Although the proposed methodology was implemented on the county paved road network of Wyoming, it could easily be modified for potential implementation on the Wyoming state highway system. Other states can also benefit by implementing a similar program within their jurisdictions.
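Selecting the best combination of projects under an annual budget is a 0/1 knapsack-style problem. The sketch below is an illustrative brute-force version (the project names, costs, and crash-reduction benefits are invented), not the optimization formulation actually used in the paper:

```python
from itertools import combinations

def best_project_mix(projects, budget):
    """Exhaustively search subsets of candidate safety projects and return
    the subset maximizing total crash reduction within the budget.
    Each project is a (name, cost, crash_reduction) tuple.  Fine for a
    handful of candidates; a real TSMS would use integer programming."""
    best, best_benefit = (), 0.0
    for r in range(1, len(projects) + 1):
        for combo in combinations(projects, r):
            cost = sum(p[1] for p in combo)
            benefit = sum(p[2] for p in combo)
            if cost <= budget and benefit > best_benefit:
                best, best_benefit = combo, benefit
    return best, best_benefit

# Hypothetical candidates: (countermeasure, cost, expected crash reduction)
projects = [("guardrail", 40, 12.0), ("signage", 10, 4.0), ("resurface", 60, 15.0)]
mix, benefit = best_project_mix(projects, budget=70)
```

With these invented numbers, the cheapest project is not automatically chosen; the search trades cost against benefit across whole combinations, which is the point of optimizing the mix rather than ranking projects individually.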

  8. Model-free aftershock forecasts constructed from similar sequences in the past (United States)

    van der Elst, N.; Page, M. T.


    The basic premise behind aftershock forecasting is that sequences in the future will be similar to those in the past. Forecast models typically use empirically tuned parametric distributions to approximate past sequences, and project those distributions into the future to make a forecast. While parametric models do a good job of describing average outcomes, they are not explicitly designed to capture the full range of variability between sequences, and can suffer from over-tuning of the parameters. In particular, parametric forecasts may produce a high rate of "surprises" - sequences that land outside the forecast range. Here we present a non-parametric forecast method that cuts out the parametric "middleman" between training data and forecast. The method is based on finding past sequences that are similar to the target sequence, and evaluating their outcomes. We quantify similarity as the Poisson probability that the observed event count in a past sequence reflects the same underlying intensity as the observed event count in the target sequence. Event counts are defined in terms of differential magnitude relative to the mainshock. The forecast is then constructed from the distribution of past sequences outcomes, weighted by their similarity. We compare the similarity forecast with the Reasenberg and Jones (RJ95) method, for a set of 2807 global aftershock sequences of M≥6 mainshocks. We implement a sequence-specific RJ95 forecast using a global average prior and Bayesian updating, but do not propagate epistemic uncertainty. The RJ95 forecast is somewhat more precise than the similarity forecast: 90% of observed sequences fall within a factor of two of the median RJ95 forecast value, whereas the fraction is 85% for the similarity forecast. However, the surprise rate is much higher for the RJ95 forecast; 10% of observed sequences fall in the upper 2.5% of the (Poissonian) forecast range. The surprise rate is less than 3% for the similarity forecast. 
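The similarity weighting described above can be sketched concretely. The reading below (the function names and the exact weighting scheme are our assumptions) scores a past sequence by the Poisson probability of its event count when the target sequence's observed count is taken as the underlying rate, then forecasts the similarity-weighted average of past outcomes:

```python
import math

def poisson_pmf(k, lam):
    """Probability of observing k events under a Poisson rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def sequence_weight(target_count, past_count):
    """Similarity of a past sequence to the target: the Poisson probability
    of the past count if the target's observed count is the true rate."""
    return poisson_pmf(past_count, max(target_count, 1e-9))

def similarity_forecast(target_count, past_sequences):
    """Forecast = similarity-weighted average of past sequence outcomes.
    past_sequences is a list of (event_count, outcome) pairs."""
    weights = [sequence_weight(target_count, c) for c, _ in past_sequences]
    total = sum(weights)
    return sum(w * o for w, (_, o) in zip(weights, past_sequences)) / total
```

Because the forecast is drawn directly from the empirical distribution of past outcomes, it inherits their full variability, which is what keeps the surprise rate low compared to a tuned parametric model.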

  9. Fast structure similarity searches among protein models: efficient clustering of protein fragments

    Directory of Open Access Journals (Sweden)

    Fogolari Federico


    Full Text Available Abstract Background For many predictive applications a large number of models is generated and later clustered into subsets based on structure similarity. In most clustering algorithms an all-vs-all root mean square deviation (RMSD) comparison is performed. Most of the time is typically spent on comparisons of non-similar structures. For sets with more than, say, 10,000 models this procedure is very time-consuming, and alternative faster algorithms, restricting comparisons only to the most similar structures, would be useful. Results We exploit the inverse triangle inequality on the RMSD between two structures, given their RMSDs with a third structure. The resulting lower bound on the RMSD may be used, when restricting the search for similarity to a reasonably low RMSD threshold value, to speed up similarity searches significantly. Tests are performed on large sets of decoys which are widely used as test cases for predictive methods, with a speed-up of up to 100 times with respect to all-vs-all comparison, depending on the set and parameters used. Sample applications are shown. Conclusions The algorithm presented here allows fast comparison of large data sets of structures with limited memory requirements. As an example of application we present the clustering of more than 100,000 fragments of length 5 from the top500H dataset into a few hundred representative fragments. A more realistic scenario is provided by the search for similarity within the very large decoy sets used for the tests. Other applications include filtering nearly-identical conformations in selected CASP9 datasets and clustering molecular dynamics snapshots. Availability A Linux executable and a Perl script with examples are given in the supplementary material (Additional file 1). The source code is available upon request from the authors.
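The bound exploited by the paper follows from the triangle inequality: for any reference structure r, |RMSD(a, r) − RMSD(b, r)| ≤ RMSD(a, b). A minimal sketch of using precomputed distances to a reference to prune pairs before any expensive superposition (the function names are ours):

```python
def rmsd_lower_bound(rmsd_a_ref, rmsd_b_ref):
    """Triangle inequality on RMSD: |d(a,r) - d(b,r)| <= d(a,b), so the
    distances to a shared reference structure give a cheap lower bound
    on the pairwise RMSD without superposing a and b."""
    return abs(rmsd_a_ref - rmsd_b_ref)

def candidate_pairs(rmsds_to_ref, threshold):
    """Keep only the pairs whose lower bound is below the clustering
    threshold; only these need the full RMSD computation."""
    n = len(rmsds_to_ref)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rmsd_lower_bound(rmsds_to_ref[i], rmsds_to_ref[j]) < threshold]
```

Structures whose precomputed distances to the reference differ by more than the threshold can never be within the threshold of each other, which is where the reported speed-ups come from.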

  10. Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models

    Directory of Open Access Journals (Sweden)

    Jin Dai


    Full Text Available The similarity between objects is a core research area of data mining. In order to reduce the interference of the uncertainty of natural language, a similarity measurement between normal cloud models is adopted for text classification research. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed. It can efficiently accomplish conversion between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the VSM model, the text qualitative concepts, extracted from texts of the same category, are jumped up to a whole-category concept. According to the cloud similarity between the test text and each category concept, the test text is assigned to the most similar category. Comparison among different text classifiers over different feature selection sets fully demonstrates that not only does CCJU-TC have a strong ability to adapt to different text features, but its classification performance is also better than that of the traditional classifiers.

  11. The Similar Structure Method for Solving the Model of Fractal Dual-Porosity Reservoir

    Directory of Open Access Journals (Sweden)

    Li Xu


    Full Text Available This paper proposes a similar structure method (SSM) to solve the boundary value problem of the extended modified Bessel equation. The method can efficiently solve a second-order linear homogeneous differential equation's boundary value problem and obtain the similar structure of its solutions. A mathematical model is set up for dual-porosity media, in which the influence of fractal dimension, spherical flow, wellbore storage, and skin factor is taken into consideration. Analysis of the model shows that in Laplace space it reduces to a special case of the extended modified Bessel equation. The formation pressure and wellbore pressure under three types of outer boundaries (infinite, constant pressure, and closed) are then obtained via the SSM in Laplace space. Combining the SSM with the Stehfest algorithm, we propose the similar structure method algorithm (SSMA), which can be used to calculate the wellbore pressure and pressure derivative of reservoir seepage models clearly. Type curves of fractal dual-porosity spherical flow are plotted by the SSMA. The presented algorithm promotes the development of well test analysis software.
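The Stehfest algorithm used above inverts a Laplace-space solution numerically. A compact sketch of the standard Gaver-Stehfest formula (the choice of N and the test transform are illustrative, not values from the paper):

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace-space function F(s)
    at time t.  N must be even; 8-16 is typical for smooth functions."""
    ln2 = math.log(2.0)
    half = N // 2
    total = 0.0
    for i in range(1, N + 1):
        # Stehfest weight V_i, built from factorials
        v = 0.0
        for k in range((i + 1) // 2, min(i, half) + 1):
            v += (k ** half * math.factorial(2 * k)) / (
                math.factorial(half - k) * math.factorial(k)
                * math.factorial(k - 1) * math.factorial(i - k)
                * math.factorial(2 * k - i))
        total += (-1) ** (half + i) * v * F(i * ln2 / t)
    return total * ln2 / t

# Sanity check: F(s) = 1/(s + 1) is the transform of f(t) = exp(-t)
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
```

In the SSMA, `F` would be the wellbore pressure solution in Laplace space, evaluated over a range of `t` to produce the type curves.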

  12. Hierarchical Model for the Similarity Measurement of a Complex Holed-Region Entity Scene

    Directory of Open Access Journals (Sweden)

    Zhanlong Chen


    Full Text Available Complex multi-holed-region entity scenes (i.e., sets of random regions with holes) are common in spatial database systems, spatial query languages, and Geographic Information Systems (GIS). A multi-holed-region (a region with an arbitrary number of holes) is an abstraction of the real world that primarily represents geographic objects that have more than one interior boundary, such as areas that contain several lakes or lakes that contain islands. When the similarity of two complex holed-region entity scenes is measured, the number of regions in the scenes and the number of holes in the regions usually differ between the two scenes, which complicates the matching relationships of holed-regions and holes. The aim of this research is to develop several holed-region similarity metrics and to propose a hierarchical model that comprehensively measures the similarity between two complex holed-region entity scenes. The procedure first divides a complex entity scene into three layers: a complex scene, a micro-spatial-scene, and a simple entity (hole). The relationships between adjacent layers are treated as sets of relationships, and each level of similarity measurement is nested with the adjacent one. Next, entity matching is performed from top to bottom, while the similarity results are calculated from local to global. In addition, we utilize position graphs to describe the distribution of the holed-regions and subsequently describe the directions between the holes using a feature matrix. A case study that uses the Great Lakes in North America in 1986 and 2015 as experimental data illustrates the entire similarity measurement process between two complex holed-region entity scenes. The experimental results show that the hierarchical model accounts for the relationships of the different layers in the entire complex holed-region entity scene. The model can effectively calculate the similarity of complex holed-region entity scenes, even if the

  13. Traditional and robust vector selection methods for use with similarity based models

    International Nuclear Information System (INIS)

    Hines, J. W.; Garvey, D. R.


    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
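The AAKR model described above can be sketched in a few lines: a query vector is "corrected" toward the training data by a kernel-weighted average of the (possibly sub-sampled) prototype vectors. The bandwidth and the example data below are illustrative, not from the paper:

```python
import numpy as np

def aakr_correct(query, train, bandwidth=1.0):
    """Auto-associative kernel regression: return the kernel-weighted
    average of historical training vectors, with weights from a Gaussian
    kernel on the Euclidean distance to the query vector."""
    d = np.linalg.norm(train - query, axis=1)
    w = np.exp(-d ** 2 / (2.0 * bandwidth ** 2))
    w = w / w.sum()
    return w @ train

# Two prototypes near the query dominate; the distant one gets ~zero weight.
train = np.array([[1.0, 2.0], [1.1, 2.1], [5.0, 9.0]])
estimate = aakr_correct(np.array([1.05, 2.05]), train, bandwidth=0.5)
```

Because every training vector enters the distance computation for every query, the per-query cost grows with the training set, which is exactly why the prototype (vector) selection methods compared in the paper matter.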

  14. Video compressed sensing using iterative self-similarity modeling and residual reconstruction (United States)

    Kim, Yookyung; Oh, Han; Bilgin, Ali


    Compressed sensing (CS) has great potential for use in video data acquisition and storage because it makes it unnecessary to collect an enormous amount of data and to perform the computationally demanding compression process. We propose an effective CS algorithm for video that consists of two iterative stages. In the first stage, frames containing the dominant structure are estimated. These frames are obtained by thresholding the coefficients of similar blocks. In the second stage, refined residual frames are reconstructed from the original measurements and the measurements corresponding to the frames estimated in the first stage. These two stages are iterated until convergence. The proposed algorithm exhibits superior subjective image quality and significantly improves the peak-signal-to-noise ratio and the structural similarity index measure compared to other state-of-the-art CS algorithms.

  15. State impulsive control strategies for a two-languages competitive model with bilingualism and interlinguistic similarity (United States)

    Nie, Lin-Fei; Teng, Zhi-Dong; Nieto, Juan J.; Jung, Il Hyo


    For reasons of preserving endangered languages, we propose, in this paper, a novel two-language competitive model with bilingualism and interlinguistic similarity, where state-dependent impulsive control strategies are introduced. The novel control model includes two control threshold values, which distinguishes it from previous state-dependent impulsive differential equations. By using qualitative analysis methods, we show that the control model exhibits two stable positive order-1 periodic solutions under some general conditions. Moreover, numerical simulations clearly illustrate the main theoretical results and the feasibility of the state-dependent impulsive control strategies; they also show that the strategy can be applied to other general two-language competitive models with the desired result. The results indicate that the fractions of the two competing languages can be kept within a reasonable level under almost any circumstances. A theoretical basis for finding a new control measure to protect endangered languages is offered.

  16. Towards predictive resistance models for agrochemicals by combining chemical and protein similarity via proteochemometric modelling. (United States)

    van Westen, Gerard J P; Bender, Andreas; Overington, John P


    Resistance to pesticides is an increasing problem in agriculture. Despite practices such as phased use and cycling of 'orthogonally resistant' agents, resistance remains a major risk to national and global food security. To combat this problem, there is a need for both new approaches for pesticide design, as well as for novel chemical entities themselves. As summarized in this opinion article, a technique termed 'proteochemometric modelling' (PCM), from the field of chemoinformatics, could aid in the quantification and prediction of resistance that acts via point mutations in the target proteins of an agent. The technique combines information from both the chemical and biological domain to generate bioactivity models across large numbers of ligands as well as protein targets. PCM has previously been validated in prospective, experimental work in the medicinal chemistry area, and it draws on the growing amount of bioactivity information available in the public domain. Here, two potential applications of proteochemometric modelling to agrochemical data are described, based on previously published examples from the medicinal chemistry literature.

  17. Hierarchical modeling of systems with similar components: A framework for adaptive monitoring and control

    International Nuclear Information System (INIS)

    Memarzadeh, Milad; Pozzi, Matteo; Kolter, J. Zico


    System management includes the selection of maintenance actions depending on the available observations: when a system is made up by components known to be similar, data collected on one is also relevant for the management of others. This is typically the case of wind farms, which are made up by similar turbines. Optimal management of wind farms is an important task due to high cost of turbines' operation and maintenance: in this context, we recently proposed a method for planning and learning at system-level, called PLUS, built upon the Partially Observable Markov Decision Process (POMDP) framework, which treats transition and emission probabilities as random variables, and is therefore suitable for including model uncertainty. PLUS models the components as independent or identical. In this paper, we extend that formulation, allowing for a weaker similarity among components. The proposed approach, called Multiple Uncertain POMDP (MU-POMDP), models the components as POMDPs, and assumes the corresponding parameters as dependent random variables. Through this framework, we can calibrate specific degradation and emission models for each component while, at the same time, process observations at system-level. We compare the performance of the proposed MU-POMDP with PLUS, and discuss its potential and computational complexity. - Highlights: • A computational framework is proposed for adaptive monitoring and control. • It adopts a scheme based on Markov Chain Monte Carlo for inference and learning. • Hierarchical Bayesian modeling is used to allow a system-level flow of information. • Results show potential of significant savings in management of wind farms.

  18. Generalization of landslide susceptibility models in geologic-geomorphologic similar context (United States)

    Piedade, Aldina; Zêzere, José Luis; António Tenedório, José; Garcia, Ricardo A. C.; Oliveira, Sérgio C.; Rocha, Jorge


    The study area is the region north of Lisbon, which is affected by several forms of slope instability. Two sample areas with similar geological and geomorphological conditions were chosen to assess susceptibility to the occurrence of shallow translational slides. Landslide susceptibility was assessed using a bivariate statistical method (the Information Value method), and the developed methodology focuses on the exportation of susceptibility scores obtained in one sample area (the modelling area of Fanhões-Trancão) to another area (the validation area of Lousa-Loures) having similar geological and geomorphological features. The rationale is that similar environments should have identical landslide susceptibility, i.e., the same causes are likely to generate the same effects. Thus, scores of Information Value obtained in the modelling area of Fanhões-Trancão (20 km2) are used to evaluate the susceptibility in the validation area of Lousa-Loures (17 km2). The susceptibility scores were obtained for the modelling area by crossing the landslide layer (the dependent variable) with a set of 7 classified predisposing factors for slope instability (assumed as independent variables): slope, aspect, transverse slope profile, lithology, geomorphology, superficial deposits, and land use. The same set of landslide predisposing factors was prepared for the validation area, using the same criteria to define classes within each theme. Field work and aerial-photo interpretation were performed in the validation area, and a landslide database was constructed and subsequently used to validate the landslide susceptibility model. In addition, new scores of Information Value were calculated for the validation area by crossing the existing shallow translational slides with the predisposing factors of slope instability. Validation of the predictive models is carried out by comparison of success-rate and prediction-rate curves. Furthermore, sensitivity analysis of the variables is performed in
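The Information Value score underlying the method is simple to state: for each class of a predisposing factor, it is the log of the landslide density inside the class relative to the density over the whole area, and a terrain unit's susceptibility is the sum of the scores of the classes it falls in. A sketch (the function names and any figures are illustrative, not from the study):

```python
import math

def information_value(landslides_in_class, area_of_class,
                      landslides_total, area_total):
    """Information Value of one class of a predisposing factor: the log
    ratio of the landslide density in the class to the overall density.
    Positive values mark classes that favour slope instability."""
    class_density = landslides_in_class / area_of_class
    overall_density = landslides_total / area_total
    return math.log(class_density / overall_density)

def susceptibility(iv_scores):
    """A terrain unit's susceptibility: the sum of the IV scores of the
    classes it belongs to, one per factor (slope, aspect, lithology, ...)."""
    return sum(iv_scores)
```

Exporting the model, as done here, amounts to computing these scores in Fanhões-Trancão and summing them over the corresponding classes mapped in Lousa-Loures.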

  19. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation (United States)

    Quan, Lulin; Yang, Zhixin


    To address issues in the area of design customization, this paper describes the specification and application of constrained surface deformation and reports an experimental performance comparison of three prevailing similarity assessment algorithms in the constrained surface deformation domain. Constrained surface deformation is becoming a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design: it measures the difference between the deformed new design and the initial sample model and indicates whether that difference is within the allowed limit. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape histogram based method, the skeleton based method, and the U system moment based method. We analyze their basic functions and implementation methodologies in detail, and conduct a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as the industrial example for the experiments. The results show that the shape histogram based method achieved the best performance in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, which emphasizes the role of the constraints during the assessment. Limited initial experimental results demonstrate that our algorithm outperforms the other three algorithms. A clear direction for future development is also drawn at the end of the paper.

  20. Self-similar measures in multi-sector endogenous growth models

    International Nuclear Information System (INIS)

    La Torre, Davide; Marsiglio, Simone; Mendivil, Franklin; Privileggi, Fabio


    We analyze two types of stochastic discrete time multi-sector endogenous growth models, namely a basic Uzawa–Lucas (1965, 1988) model and an extended three-sector version as in La Torre and Marsiglio (2010). Since, in the case of sustained growth, the optimal dynamics of the state variables are not stationary, we focus on the dynamics of the capital ratio variables, and we show that, through appropriate log-transformations, they can be converted into affine iterated function systems converging to an invariant distribution supported on some (possibly fractal) compact set. This proves that the steady state of endogenous growth models—i.e., the stochastic balanced growth path equilibrium—might also have a fractal nature. We also provide some sufficient conditions under which the associated self-similar measures turn out to be either singular or absolutely continuous (for the three-sector model we only consider singularity).

  1. Initial virtual flight test for a dynamically similar aircraft model with control augmentation system

    Directory of Open Access Journals (Sweden)

    Linliang Guo


    Full Text Available To satisfy the validation requirements of flight control laws for advanced aircraft, wind tunnel based virtual flight testing has been implemented in a low speed wind tunnel. A 3-degree-of-freedom gimbal, ventrally installed in the model, was used in conjunction with an actively controlled, dynamically similar aircraft model equipped with an inertial measurement unit, an attitude and heading reference system, an embedded computer and servo-actuators. The model, which could be rotated freely around its center of gravity by the aerodynamic moments, together with the flow field, the operator and the real time control system, made up the closed-loop testing circuit. The model is statically unstable in the longitudinal direction, yet it can fly stably in the wind tunnel with the control augmentation function of the flight control laws. The experimental results indicate that the model responds well to the operator’s instructions, and its response in the tests shows reasonable agreement with the simulation results: the difference in the angle-of-attack response is less than 0.5°. The effect of the stability augmentation and attitude control laws was validated in the tests, and the feasibility of the virtual flight test technique as a preliminary evaluation tool for advanced flight vehicle configuration research was also verified.

  2. Assessing intrinsic and specific vulnerability models ability to indicate groundwater vulnerability to groups of similar pesticides: A comparative study (United States)

    Douglas, Steven; Dixon, Barnali; Griffin, Dale W.


    With continued population growth and increasing use of fresh groundwater resources, protection of this valuable resource is critical. A cost-effective means to assess groundwater contamination potential would provide a useful tool to protect these resources, and integrating geospatial methods offers a way to quantify contamination potential in a cost-effective and spatially explicit manner. This research was designed to compare the ability of intrinsic (DRASTIC) and specific (Attenuation Factor; AF) vulnerability models to indicate groundwater vulnerability areas by comparing model results to the presence of pesticides in groundwater sample datasets. A logistic regression was used to assess the relationship between the environmental variables and the presence or absence of pesticides within regions of varying vulnerability. According to the DRASTIC model, more than 20% of the study area is very highly vulnerable; approximately 30% is very highly vulnerable according to the AF model. When groundwater concentrations of individual pesticides were compared to model predictions, the results were mixed. Model predictability improved when concentrations of the group of similar pesticides were compared to model results. Compared to the DRASTIC model, the AF model more accurately predicts the distribution of the number of contaminated wells within each vulnerability class.
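The logistic-regression step linking vulnerability to observed pesticide detections can be sketched as follows; the well data and the single-predictor form are hypothetical simplifications of the study's analysis, fitted here with plain gradient descent rather than any particular statistics package.

```python
import numpy as np

def fit_logistic(x, y, lr=0.1, steps=5000):
    """One-predictor logistic regression fitted by gradient descent:
    P(pesticide detected) = sigmoid(b0 + b1 * vulnerability_score)."""
    b0, b1 = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        b0 -= lr * np.mean(p - y)          # gradient of mean log-loss w.r.t. b0
        b1 -= lr * np.mean((p - y) * x)    # gradient w.r.t. b1
    return b0, b1

# Hypothetical wells: vulnerability class (1 = low ... 4 = very high)
# and whether any pesticide of the group was detected there
x = np.array([1, 1, 2, 2, 3, 3, 4, 4], dtype=float)
y = np.array([0, 0, 0, 1, 0, 1, 1, 1], dtype=float)
b0, b1 = fit_logistic(x, y)   # b1 > 0: detection odds rise with vulnerability
```

A positive fitted slope would indicate that wells in higher vulnerability classes are more likely to show detections, which is the kind of agreement between model and observations the study evaluates.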

  3. A Deep Similarity Metric Learning Model for Matching Text Chunks to Spatial Entities (United States)

    Ma, K.; Wu, L.; Tao, L.; Li, W.; Xie, Z.


    The matching of spatial entities with related text is a long-standing research topic that has received considerable attention over the years. The task aims to enrich the content of spatial entities and to attach spatial location information to text chunks. In the data fusion field, matching spatial entities with the corresponding descriptive text chunks is of broad significance. However, most traditional matching methods rely entirely on manually designed, task-specific linguistic features. This work proposes a Deep Similarity Metric Learning Model (DSMLM) based on a Siamese Neural Network that learns the similarity metric directly from the textual attributes of the spatial entity and the text chunk. Low-dimensional feature representations of the spatial entity and the text chunk are learned separately. By employing the cosine distance to measure the matching degree between the vectors, the model makes the vectors of matching pairs as close as possible while, through supervised learning, pushing those of mismatching pairs as far apart as possible. Extensive experiments and analysis on geological survey data sets show that our DSMLM can effectively capture the matching characteristics between text chunks and spatial entities, and achieves state-of-the-art performance.
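The cosine-distance matching step can be sketched as below; the embedding vectors and the decision threshold are hypothetical stand-ins for the outputs of the two Siamese branches, not values from the paper.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors (1 = identical direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical low-dimensional embeddings produced by the two Siamese branches
text_vec   = np.array([0.9, 0.1, 0.3])    # embedding of the text chunk
entity_vec = np.array([0.8, 0.2, 0.25])   # embedding of the spatial entity

sim = cosine_similarity(text_vec, entity_vec)
is_match = sim > 0.8   # assumed decision threshold for declaring a match
```

During training, a contrastive-style objective would push `sim` toward 1 for labelled matching pairs and toward lower values for mismatching pairs.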

  4. Modeling of locally self-similar processes using multifractional Brownian motion of Riemann-Liouville type (United States)

    Muniandy, S. V.; Lim, S. C.


    Fractional Brownian motion (FBM) is widely used in the modeling of phenomena with a power spectral density of power-law type. However, FBM has its limitations, since it can only describe phenomena with a monofractal structure, i.e., a uniform degree of irregularity characterized by a constant Hölder exponent. For more realistic modeling, it is necessary to take into consideration the local variation of irregularity, with the Hölder exponent allowed to vary with time (or space). One way to achieve such a generalization is to extend the standard FBM to multifractional Brownian motion (MBM), indexed by a Hölder exponent that is a function of time. This paper proposes an alternative generalization to MBM based on the FBM defined by the Riemann-Liouville type of fractional integral. The local properties of the Riemann-Liouville MBM (RLMBM) are studied and found to be similar to those of the standard MBM. A numerical scheme to simulate the locally self-similar sample paths of the RLMBM for various types of time-varying Hölder exponents is given. The local scaling exponents are estimated based on the local growth of the variance and the wavelet scalogram methods. Finally, an example of a possible application of RLMBM to the modeling of multifractal time series is illustrated.
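A crude numerical sketch of the RLMBM construction is given below: a Riemann-sum discretization of the Riemann-Liouville fractional integral of white noise, with the Hölder exponent supplied as a function of time. The paper's actual simulation scheme may differ in its discretization details.

```python
import numpy as np
from math import gamma

def rl_mbm(holder, n=1000, T=1.0, seed=0):
    """Multifractional Brownian motion of Riemann-Liouville type via a
    crude Riemann sum of B_H(t) = 1/Gamma(H(t)+1/2) * int_0^t (t-s)^(H(t)-1/2) dW(s),
    where holder(t) returns the time-varying Hölder exponent H(t)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), n)      # Brownian increments
    t = np.arange(1, n + 1) * dt
    B = np.zeros(n)
    for i in range(n):
        h = holder(t[i])
        weights = (t[i] - t[:i]) ** (h - 0.5)  # kernel (t - s)^{H(t)-1/2}
        B[i] = weights @ dW[:i] / gamma(h + 0.5)
    return t, B

# Example: Hölder exponent drifting linearly from 0.3 (rough) to 0.8 (smooth)
t, path = rl_mbm(lambda s: 0.3 + 0.5 * s)
```

The resulting path starts rough and becomes visibly smoother, reflecting the local variation of irregularity that motivates MBM.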

  5. Vertex labeling and routing in self-similar outerplanar unclustered graphs modeling complex networks

    International Nuclear Information System (INIS)

    Comellas, Francesc; Miralles, Alicia


    This paper introduces a labeling and optimal routing algorithm for a family of modular, self-similar, small-world graphs with zero clustering. Many properties of this family are comparable to those of networks associated with technological and biological systems with low clustering, such as the power grid, some electronic circuits and protein networks. For these systems, the existence of models with an efficient routing protocol is of interest for designing practical communication algorithms in relation to dynamical processes (including synchronization) and also for understanding the underlying mechanisms that have shaped their particular structure.

  6. Self-similarities of periodic structures for a discrete model of a two-gene system

    Energy Technology Data Exchange (ETDEWEB)

    Souza, S.L.T. de, E-mail: [Departamento de Física e Matemática, Universidade Federal de São João del-Rei, Ouro Branco, MG (Brazil); Lima, A.A. [Escola de Farmácia, Universidade Federal de Ouro Preto, Ouro Preto, MG (Brazil); Caldas, I.L. [Instituto de Física, Universidade de São Paulo, São Paulo, SP (Brazil); Medrano-T, R.O. [Departamento de Ciências Exatas e da Terra, Universidade Federal de São Paulo, Diadema, SP (Brazil); Guimarães-Filho, Z.O. [Aix-Marseille Univ., CNRS PIIM UMR6633, International Institute for Fusion Science, Marseille (France)


    We report self-similar properties of periodic structures remarkably organized in the two-parameter space of a two-gene system described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system using Lyapunov exponents and isoperiodic diagrams, identifying periodic windows known as Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both types of periodic windows. We also identify Fibonacci-type series and the golden ratio for the Arnold tongues, and period multiple-of-three windows for the shrimps. -- Highlights: ► The existence of noticeable periodic windows has been reported recently for several nonlinear systems. ► The periodic window distributions appear highly organized in two-parameter space. ► We characterize self-similar properties of Arnold tongues and shrimps for a two-gene model. ► We determine the period of the Arnold tongues recognizing a Fibonacci-type sequence. ► We explore self-similar features of the shrimps identifying multiple period-three structures.

  7. A Multi-Model Stereo Similarity Function Based on Monogenic Signal Analysis in Poisson Scale Space

    Directory of Open Access Journals (Sweden)

    Jinjun Li


    Full Text Available A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD) is proposed to match interest points and estimate the disparity map for stereo images. The local multi-model monogenic image features include the local orientation and instantaneous phase of the gray monogenic signal, the local color phase of the color monogenic signal, and the local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which extends the analytic signal to gray-level images using the Dirac operator and the Laplace equation, consists of the local amplitude, local orientation, and instantaneous phase of a 2D image signal. The color monogenic signal extends the monogenic signal to color images based on Clifford algebras. The local color phase can be estimated by computing the geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experimental results on synthetic and natural stereo images demonstrate the performance of the proposed approach.

  8. Improved transition models for cepstral trajectories

    CSIR Research Space (South Africa)

    Badenhorst, J


    Full Text Available is ideal for the investigation of contextual effects on cepstral trajectories. We show that modelling improvements, such as continuity constraints on parameter values and more flexible transition models, systematically improve the robustness of our...

  9. Feasibility of similarity coefficient map for improving morphological evaluation of T2* weighted MRI for renal cancer

    International Nuclear Information System (INIS)

    Wang Hao-Yu; Bao Shang-Lian; Jiani Hu; Meng Li; Haacke, E. M.; Xie Yao-Qin; Chen Jie; Amy Yu; Wei Xin-Hua; Dai Yong-Ming


    The purpose of this paper is to investigate the feasibility of using a similarity coefficient map (SCM) to improve the morphological evaluation of T2*-weighted (T2*W) magnetic resonance imaging (MRI) for renal cancer. Simulation studies and in vivo 12-echo T2*W experiments on renal cancers were performed for this purpose. The results of the first simulation study suggest that an SCM can reveal small structures which are hard to distinguish from the background tissue in T2*W images and the corresponding T2* map. The improved morphological evaluation is likely due to the improvement in the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR) achieved by the SCM technique. Compared with T2*W images, an SCM can improve the SNR by a factor ranging from 1.87 to 2.47; compared with T2* maps, by a factor ranging from 3.85 to 33.31. Compared with T2*W images, an SCM can improve the CNR by a factor ranging from 2.09 to 2.43; compared with T2* maps, by a factor ranging from 1.94 to 8.14. For a given noise level, the improvements in SNR and CNR depend mainly on the original SNRs and CNRs of the T2*W images, respectively. In vivo experiments confirmed the results of the first simulation study. The results of the second simulation study suggest that the more echoes are used to generate the SCM, the higher the SNRs and CNRs that can be achieved. In conclusion, an SCM can provide improved morphological evaluation of T2*W MR images for renal cancer by unveiling fine structures which are ambiguous or invisible in the corresponding T2*W MR images and T2* maps. Furthermore, in practical applications, for a fixed total sampling time, one should increase the number of echoes as much as possible to achieve SCMs with better SNRs and CNRs.
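The SCM idea of scoring each pixel by how closely its multi-echo decay follows a reference curve can be sketched as below. The uncentered-correlation definition, the echo times, and the synthetic data are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def similarity_coefficient_map(echoes, reference):
    """Per-pixel similarity coefficient between the multi-echo signal course
    and a reference decay curve (here an uncentered correlation in [-1, 1]).

    echoes:    array (n_echoes, H, W) of T2*-weighted images
    reference: array (n_echoes,), e.g. mean signal of a reference region
    """
    n, h, w = echoes.shape
    sig = echoes.reshape(n, -1)                          # (n_echoes, H*W)
    num = (sig * reference[:, None]).sum(axis=0)
    den = np.linalg.norm(sig, axis=0) * np.linalg.norm(reference)
    scm = np.where(den > 0, num / den, 0.0)
    return scm.reshape(h, w)

# Hypothetical 12-echo data: mono-exponential decay plus noise
rng = np.random.default_rng(1)
te = np.linspace(5, 60, 12)                 # echo times in ms (assumed)
truth = np.exp(-te / 30.0)                  # reference decay, T2* ~ 30 ms
echoes = truth[:, None, None] + 0.05 * rng.normal(size=(12, 8, 8))
scm = similarity_coefficient_map(echoes, truth)
```

Because the correlation pools information across all echoes, noise partially averages out, which is consistent with the SNR and CNR gains the paper reports growing with the number of echoes.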

  10. An analytic solution of a model of language competition with bilingualism and interlinguistic similarity (United States)

    Otero-Espinar, M. V.; Seoane, L. F.; Nieto, J. J.; Mira, J.


    An in-depth analytic study of a model of language dynamics is presented: a model which tackles the problem of the coexistence of two languages within a closed community of speakers, taking into account bilingualism and incorporating a parameter to measure the distance between languages. Previous numerical simulations of the model showed that, depending on the parameters, coexistence might lead either to the survival of both languages, with monolingual speakers of each alongside a bilingual community, or to the extinction of the weaker tongue. In this paper, that study is completed with thorough analytical calculations that settle the results in a robust way, and previous results are refined with some modifications. The present analysis makes it possible to almost completely characterize the number and nature of the equilibrium points of the model, which depend on its parameters, and to build a phase space based on them. We also obtain conclusions on the way the languages evolve with time. Our rigorous considerations also suggest ways to further improve the model and facilitate the comparison of its consequences with those from other approaches or with real data.

  11. The synaptonemal complex of basal metazoan hydra: more similarities to vertebrate than invertebrate meiosis model organisms. (United States)

    Fraune, Johanna; Wiesner, Miriam; Benavente, Ricardo


    The synaptonemal complex (SC) is an evolutionarily well-conserved structure that mediates chromosome synapsis during prophase of the first meiotic division. Although its structure is conserved, the characterized protein components in the current metazoan meiosis model systems (Drosophila melanogaster, Caenorhabditis elegans, and Mus musculus) show no sequence homology, challenging the notion of a single evolutionary origin of the SC. However, our recent studies revealed the monophyletic origin of the mammalian SC protein components, many of which are ancient in Metazoa and already present in the cnidarian Hydra. Remarkably, a comparison between different model systems disclosed a great similarity between the SC components of Hydra and mammals, while the proteins of the ecdysozoan systems (D. melanogaster and C. elegans) differ significantly. In this review, we introduce the basal-branching metazoan species Hydra as a potential novel invertebrate model system for meiosis research, particularly for the investigation of SC evolution, function and assembly. Available methods for SC research in Hydra are also summarized. Copyright © 2014. Published by Elsevier Ltd.

  12. Stereotype content model across cultures: Towards universal similarities and some differences (United States)

    Cuddy, Amy J. C.; Fiske, Susan T.; Kwan, Virginia S. Y.; Glick, Peter; Demoulin, Stéphanie; Leyens, Jacques-Philippe; Bond, Michael Harris; Croizet, Jean-Claude; Ellemers, Naomi; Sleebos, Ed; Htun, Tin Tin; Kim, Hyun-Jeong; Maio, Greg; Perry, Judi; Petkova, Kristina; Todorov, Valery; Rodríguez-Bailón, Rosa; Morales, Elena; Moya, Miguel; Palacios, Marisol; Smith, Vanessa; Perez, Rolando; Vala, Jorge; Ziegler, Rene


    The stereotype content model (SCM) proposes potentially universal principles of societal stereotypes and their relation to social structure. Here, the SCM reveals theoretically grounded, cross-cultural, cross-groups similarities and one difference across 10 non-US nations. Seven European (individualist) and three East Asian (collectivist) nations (N = 1, 028) support three hypothesized cross-cultural similarities: (a) perceived warmth and competence reliably differentiate societal group stereotypes; (b) many out-groups receive ambivalent stereotypes (high on one dimension; low on the other); and (c) high status groups stereotypically are competent, whereas competitive groups stereotypically lack warmth. Data uncover one consequential cross-cultural difference: (d) the more collectivist cultures do not locate reference groups (in-groups and societal prototype groups) in the most positive cluster (high-competence/high-warmth), unlike individualist cultures. This demonstrates out-group derogation without obvious reference-group favouritism. The SCM can serve as a pancultural tool for predicting group stereotypes from structural relations with other groups in society, and comparing across societies. PMID:19178758


  14. Improving the measurement of semantic similarity between gene ontology terms and gene products: insights from an edge- and IC-based hybrid method.

    Directory of Open Access Journals (Sweden)

    Xiaomei Wu

    Full Text Available BACKGROUND: Explicit comparisons based on the semantic similarity of Gene Ontology terms provide a quantitative way to measure the functional similarity between gene products and are widely applied in large-scale genomic research via integration with other models. Previously, we presented an edge-based method, Relative Specificity Similarity (RSS), which takes the global position of the relevant terms into account. However, edge-based semantic similarity metrics are sensitive to the intrinsic structure of GO and simply consider terms at the same level in the ontology to be equally specific nodes, a weakness that can be complemented using information content (IC). RESULTS AND CONCLUSIONS: Here, we used IC-based node specificity to improve RSS and propose a new method, Hybrid Relative Specificity Similarity (HRSS). HRSS outperformed other methods in distinguishing true protein-protein interactions from false ones. HRSS values were divided into four different levels of confidence for protein interactions. In addition, HRSS was statistically the best at obtaining the highest average functional similarity among human-mouse orthologs. Both HRSS and the groupwise measure simGIC are superior in correlation with sequence and Pfam similarities. Because different measures are best suited to different circumstances, we compared two pairwise strategies, the maximum and the best-match average, in the evaluation. The former was more effective at inferring physical protein-protein interactions, and the latter at estimating the functional conservation of orthologs and analyzing the CESSM datasets. In conclusion, HRSS can be applied to different biological problems by quantifying the functional similarity between gene products. The HRSS algorithm was implemented in the C programming language and is freely available from
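The two pairwise aggregation strategies compared in the abstract can be illustrated with a small sketch; the term-term similarity values below are hypothetical, standing in for scores that a metric such as HRSS would produce for pairs of GO annotations.

```python
import numpy as np

def pairwise_max(sim):
    """Maximum strategy: the single best-scoring term pair."""
    return float(sim.max())

def best_match_average(sim):
    """Best-match average: average each row's and each column's best match."""
    return float((sim.max(axis=1).mean() + sim.max(axis=0).mean()) / 2.0)

# Hypothetical term-term semantic similarity matrix between the GO annotations
# of two gene products (rows: terms of product A, columns: terms of product B)
sim = np.array([[0.9, 0.2],
                [0.1, 0.6],
                [0.3, 0.4]])

mx  = pairwise_max(sim)          # driven by the single strongest pair
bma = best_match_average(sim)    # balances best matches in both directions
```

The maximum strategy rewards one strong shared function (useful for inferring physical interactions), while the best-match average reflects overall functional overlap (useful for ortholog conservation).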

  15. Improving disease gene prioritization by comparing the semantic similarity of phenotypes in mice with those of human diseases.

    Directory of Open Access Journals (Sweden)

    Anika Oellrich

    Full Text Available Despite considerable progress in understanding the molecular origins of hereditary human diseases, the molecular basis of several thousand genetic diseases still remains unknown. High-throughput phenotype studies are underway to systematically assess the phenotype outcomes of targeted mutations in model organisms. Comparing the similarity between experimentally identified phenotypes and the phenotypes associated with human diseases can thus be used to suggest causal genes underlying a disease. In this manuscript, we present a method for disease gene prioritization based on comparing the phenotypes of mouse models with those of human diseases. For this purpose, either human disease phenotypes are "translated" into a mouse-based representation (using the Mammalian Phenotype Ontology), or mouse phenotypes are "translated" into a human-based representation (using the Human Phenotype Ontology). We apply a measure of semantic similarity and rank experimentally identified phenotypes in mice with respect to their phenotypic similarity to human diseases. Our method is evaluated on manually curated and experimentally verified gene-disease associations for human and for mouse. We evaluate our approach using a Receiver Operating Characteristic (ROC) analysis and obtain an area under the ROC curve of up to . Furthermore, we are able to confirm previous results that the Vax1 gene is involved in Septo-Optic Dysplasia, and we suggest Gdf6 and Marcks as further potential candidates. Our method significantly outperforms previous phenotype-based approaches to prioritizing gene-disease associations. To enable the adaptation of our method to the analysis of other phenotype data, our software and prioritization results are freely available under a BSD licence at . Furthermore, our method has been integrated into PhenomeNET, and the results can be explored using the PhenomeBrowser at

  16. Similar Biophysical Abnormalities in Glomeruli and Podocytes from Two Distinct Models. (United States)

    Embry, Addie E; Liu, Zhenan; Henderson, Joel M; Byfield, F Jefferson; Liu, Liping; Yoon, Joonho; Wu, Zhenzhen; Cruz, Katrina; Moradi, Sara; Gillombardo, C Barton; Hussain, Rihanna Z; Doelger, Richard; Stuve, Olaf; Chang, Audrey N; Janmey, Paul A; Bruggeman, Leslie A; Miller, R Tyler


    Background FSGS is a pattern of podocyte injury that leads to loss of glomerular function. Podocytes support other podocytes and glomerular capillary structure, oppose hemodynamic forces, form the slit diaphragm, and have mechanical properties that permit these functions. However, the biophysical characteristics of glomeruli and podocytes in disease remain unclear. Methods Using microindentation, atomic force microscopy, immunofluorescence microscopy, quantitative RT-PCR, and a three-dimensional collagen gel contraction assay, we studied the biophysical and structural properties of glomeruli and podocytes in chronic (Tg26 mice [HIV protein expression]) and acute (protamine administration [cytoskeletal rearrangement]) models of podocyte injury. Results Compared with wild-type glomeruli, Tg26 glomeruli became progressively more deformable with disease progression, despite increased collagen content. Tg26 podocytes had disordered cytoskeletons, markedly abnormal focal adhesions, and weaker adhesion; they failed to respond to mechanical signals and exerted minimal traction force in three-dimensional collagen gels. Protamine treatment had similar but milder effects on glomeruli and podocytes. Conclusions Reduced structural integrity of Tg26 podocytes causes increased deformability of glomerular capillaries and limits the ability of capillaries to counter hemodynamic force, possibly leading to further podocyte injury. Loss of normal podocyte mechanical integrity could injure neighboring podocytes due to the absence of normal biophysical signals required for podocyte maintenance. The severe defects in podocyte mechanical behavior in the Tg26 model may explain why Tg26 glomeruli soften progressively, despite increased collagen deposition, and may be the basis for the rapid course of glomerular diseases associated with severe podocyte injury. In milder injury (protamine), similar processes occur but over a longer time. 
Copyright © 2018 by the American Society of Nephrology.

  17. Measuring similarity and improving stability in biomarker identification methods applied to Fourier-transform infrared (FTIR) spectroscopy. (United States)

    Trevisan, Júlio; Park, Juhyun; Angelov, Plamen P; Ahmadzai, Abdullah A; Gajjar, Ketan; Scott, Andrew D; Carmichael, Paul L; Martin, Francis L


    FTIR spectroscopy is a powerful diagnostic tool that can also derive biochemical signatures of a wide range of cellular materials, such as cytology, histology, live cells, and biofluids. However, while classification is a well-established subject, biomarker identification lacks standards and validation of its methods. Validation of biomarker identification methods is difficult because, unlike classification, there is usually no reference biomarker against which to test the biomarkers extracted by a method. In this paper, we propose a framework to assess and improve the stability of biomarkers derived by a method, and to compare biomarkers derived by different method set-ups and between different methods by means of a proposed "biomarkers similarity index". Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Low- and high-volume strength training induces similar neuromuscular improvements in muscle quality in elderly women. (United States)

    Radaelli, Regis; Botton, Cíntia E; Wilhelm, Eurico N; Bottaro, Martim; Lacerda, Fabiano; Gaya, Anelise; Moraes, Kelly; Peruzzolo, Amanda; Brown, Lee E; Pinto, Ronei Silveira


    The aim of this study was to compare the effects of low- and high-volume strength training on strength, muscle activation and muscle thickness (MT) of the lower- and upper-body, and on muscle quality (MQ) of the lower-body in older women. Twenty apparently healthy elderly women were randomly assigned into two groups: low-volume (LV, n=11) and high-volume (HV, n=9). The LV group performed one-set of each exercise, while the HV group performed three-sets of each exercise, twice weekly for 13 weeks. MQ was measured by echo intensity obtained by ultrasonography (MQEI), strength per unit of muscle mass (MQST), and strength per unit of muscle mass adjusted with an allometric scale (MQAS). Following training, there was a significant increase (p≤0.001) in knee extension 1-RM (31.8±20.5% for LV and 38.3±7.3% for HV) and in elbow flexion 1-RM (25.1±9.5% for LV and 26.6±8.9% for HV) and in isometric maximal strength of the lower-body (p≤0.05) and upper-body (p≤0.001), with no difference between groups. The maximal electromyographic activation for both groups increased significantly (p≤0.05) in the vastus medialis and biceps brachii, with no difference between groups. All MT measurements of the lower- and upper-body increased similarly in both groups (p≤0.001). Similar improvements were also observed in MQEI (p≤0.01), MQST, and MQAS (p≤0.001) for both groups. These results demonstrate that low- and high-volume strength training promote similar increases in neuromuscular adaptations of the lower- and upper-body, and in MQ of the lower-body in elderly women. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Levy flights and self-similar exploratory behaviour of termite workers: beyond model fitting.

    Directory of Open Access Journals (Sweden)

    Octavio Miramontes

    Full Text Available Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties--including Lévy flights--in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale.
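The move the abstract advocates, from fitting power-law models alone to estimating scaling exponents from displacement statistics, can be illustrated by extracting the anomalous-diffusion exponent from the mean squared displacement. The trajectory below is synthetic ordinary Brownian motion, not termite data; Lévy-like superdiffusive movement would yield an exponent above 1.

```python
import numpy as np

def msd_exponent(x, lags):
    """Estimate the anomalous-diffusion exponent alpha in MSD(tau) ~ tau^alpha
    via a least-squares fit in log-log coordinates."""
    msd = [np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags]
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return alpha

# Synthetic 1-D trajectory: ordinary Brownian motion, for which alpha ~ 1
rng = np.random.default_rng(42)
x = np.cumsum(rng.normal(size=100000))
alpha = msd_exponent(x, lags=np.arange(1, 50))
```

Applied to real displacement series, the same structure-function approach recovers the scaling exponents that characterize fractal, self-similar exploration.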

  20. More Similar than Different? Exploring Cultural Models of Depression among Latino Immigrants in Florida

    Directory of Open Access Journals (Sweden)

    Dinorah (Dina) Martinez Tyson


    Full Text Available The Surgeon General's report, “Culture, Race, and Ethnicity: A Supplement to Mental Health,” points to the need for subgroup-specific mental health research that explores the cultural variation and heterogeneity of the Latino population. Guided by cognitive anthropological theories of culture, we utilized ethnographic interviewing techniques to explore cultural models of depression among foreign-born Mexican (n=30), Cuban (n=30), Colombian (n=30), and island-born Puerto Rican (n=30) participants, who represent the largest Latino groups in Florida. Results indicate that Colombian, Cuban, Mexican, and Puerto Rican immigrants showed strong intragroup consensus in their models of depression causality, symptoms, and treatment. We found more agreement than disagreement among all four groups regarding core descriptions of depression, which was largely unexpected but can potentially be explained by their common immigrant experiences. Findings expand our understanding of Latino subgroup similarities and differences in their conceptualization of depression and can be used to inform the adaptation of culturally relevant interventions in order to better serve Latino immigrant communities.

  1. Propriedades termofísicas de soluções-modelo similares a sucos: parte II Thermophysical properties of model solutions similar to juice: part II

    Directory of Open Access Journals (Sweden)

    Sílvia Cristina Sobottka Rolim de Moura


    Full Text Available Thermophysical properties, density and viscosity of model solutions similar to juices were determined experimentally. The results were compared with values predicted by mathematical models (STATISTICA 6.0) and with values from the literature, as a function of chemical composition. To define the model solutions, a star experimental design was used, keeping the acid content fixed at 1.5% and varying water (82-98.5%), carbohydrate (0-15%) and fat (0-1.5%). Density was determined with a pycnometer. Viscosity was determined with a Brookfield LVF viscometer. Thermal conductivity was calculated from the thermal diffusivity and specific heat values (presented in Part I of this work, MOURA [7]) and from the density. The results for each property were analyzed using response surface methodology. Significant results were found for all properties, showing that the fitted models represent the changes in the thermal and physical properties of the juices with changes in composition and temperature.

  2. Modeling a Sensor to Improve Its Efficacy

    Directory of Open Access Journals (Sweden)

    Nabin K. Malakar


    Full Text Available Robots rely on sensors to provide them with information about their surroundings. However, high-quality sensors can be extremely expensive and cost-prohibitive, so many robotic systems must make do with lower-quality sensors. Here we demonstrate via a case study how modeling a sensor can improve its efficacy when employed within a Bayesian inferential framework. As a test bed we employ a robotic arm that is designed to autonomously take its own measurements using an inexpensive LEGO light sensor to estimate the position and radius of a white circle on a black field. The light sensor integrates the light arriving from a spatially distributed region within its field of view, weighted by its spatial sensitivity function (SSF). We demonstrate that by incorporating an accurate model of the light sensor's SSF into the likelihood function of a Bayesian inference engine, an autonomous system can make improved inferences about its surroundings. The method presented here is data-based, fairly general, and designed with plug-and-play in mind so that it could be implemented in similar problems.
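The core idea, a likelihood built around the sensor's spatial response rather than a point measurement, can be sketched in one dimension. Everything below (the uniform SSF, the grid of edge hypotheses, the noise level) is an illustrative assumption, not the paper's setup:

```python
# 1D sketch: localize the edge of a white patch on a black line using a
# sensor that averages brightness over a window (its SSF), via a
# grid-based Bayesian update whose likelihood models that window.
import math

def scene(x, edge):
    """Ground truth: black (0) left of the edge, white (1) from it on."""
    return 1.0 if x >= edge else 0.0

def sensor_reading(x, edge, half_width=2):
    """Sensor output: uniform SSF averaging over [x-hw, x+hw]."""
    window = range(x - half_width, x + half_width + 1)
    return sum(scene(i, edge) for i in window) / len(window)

def likelihood(measured, x, edge_hypo, half_width=2, noise=0.1):
    """Unnormalised Gaussian likelihood around the SSF-modelled reading."""
    predicted = sensor_reading(x, edge_hypo, half_width)
    return math.exp(-((measured - predicted) ** 2) / (2 * noise ** 2))

true_edge = 12
positions = [5, 10, 11, 12, 13, 20]   # where the sensor samples
hypotheses = list(range(25))          # candidate edge locations
posterior = [1.0] * len(hypotheses)   # flat prior
for x in positions:
    z = sensor_reading(x, true_edge)  # noiseless measurement here
    posterior = [p * likelihood(z, x, h) for p, h in zip(posterior, hypotheses)]
total = sum(posterior)
posterior = [p / total for p in posterior]
best = hypotheses[posterior.index(max(posterior))]
print(best)  # the MAP estimate recovers the true edge, 12
```

A point-sensor likelihood would treat the fractional readings near the edge as noise; modelling the SSF turns them into the most informative measurements.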

  3. The Rigorous Model for Similarity Transformation under Intra-frame and Inter-frame Covariance

    Directory of Open Access Journals (Sweden)

    ZENG Anmin


    Full Text Available Coordinates are obtained from observations by the least-squares method, so their precision is contaminated by observation errors, and covariance also exists between common points and non-common points. Coordinate errors exist not only in the initial frame but also in the target frame. However, the classical stepwise approach to coordinate frame transformation usually takes only the coordinate errors of the initial frame into account and overlooks the stochastic correlation between common points and non-common points. A new rigorous unified model is proposed for coordinate frame transformation that accounts both for the errors of all coordinates in both frames and for the inter-frame coordinate covariance between common points and non-common points. The corresponding estimator for the transformed coordinates is derived and involves appropriate corrections to the standard approach, in which the transformation parameters and the transformed coordinates for all points are computed in a single-step least-squares adjustment. The inter-frame coordinate covariance should be consistent with the coordinate uncertainties, but in practice the two are not consistent. To balance the covariance matrices of both frames, an adaptive estimator for the unified model is derived, in which the adaptive factor is constructed from the ratio computed by Helmert covariance component estimation; reasonable and consistent covariance matrices are obtained through adjustment of the adaptive factor. Finally, an actual experiment with 2000 points from the Crustal Movement Observation Network of China (CMONOC) is carried out to verify the new model; the results show that the proposed model can significantly improve the precision of the coordinate transformation.
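For orientation, here is the classical building block the paper refines: a 2D similarity (Helmert) transformation fitted by least squares to common points. Complex numbers encode scale and rotation in a single coefficient a, so the model is w = a*z + t. This sketch deliberately omits the paper's contribution (the full intra- and inter-frame covariance treatment); the synthetic points are illustrative.

```python
# Closed-form least-squares fit of a 2D similarity transformation.
import cmath

def fit_similarity(src, dst):
    """Least-squares a, t such that dst ~= a*src + t (lists of complex)."""
    n = len(src)
    mz = sum(src) / n
    mw = sum(dst) / n
    num = sum((w - mw) * (z - mz).conjugate() for z, w in zip(src, dst))
    den = sum(abs(z - mz) ** 2 for z in src)
    a = num / den          # encodes scale (|a|) and rotation (arg a)
    t = mw - a * mz        # translation
    return a, t

# Synthetic common points: scale 2, rotation 30 degrees, shift (5, -3).
a_true = 2 * cmath.exp(1j * cmath.pi / 6)
t_true = complex(5, -3)
src = [complex(0, 0), complex(10, 0), complex(0, 10), complex(7, 4)]
dst = [a_true * z + t_true for z in src]

a, t = fit_similarity(src, dst)
scale = abs(a)
rotation_deg = cmath.phase(a) * 180 / cmath.pi
print(round(scale, 6), round(rotation_deg, 6))  # 2.0 30.0
```

The rigorous model in the abstract replaces this unweighted fit with one whose normal equations carry the covariance of both frames and of the non-common points.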

  4. An application of superpositions of two-state Markovian sources to the modelling of self-similar behaviour

    DEFF Research Database (Denmark)

    Andersen, Allan T.; Nielsen, Bo Friis


    We present a modelling framework and a fitting method for modelling second order self-similar behaviour with the Markovian arrival process (MAP). The fitting method is based on fitting to the autocorrelation function of counts of a second order self-similar process. It is shown that with this fittin...... method seems to work well over the entire range of the Hurst (1951) parameter...

  5. Using self-similarity compensation for improving inter-layer prediction in scalable 3D holoscopic video coding (United States)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.


    Holoscopic imaging, also known as integral imaging, has recently been attracting the attention of the research community as a promising glassless 3D technology, due to its ability to create a more realistic depth illusion than the current stereoscopic or multiview solutions. However, in order to gradually introduce this technology into the consumer market and to efficiently deliver 3D holoscopic content to end-users, backward compatibility with legacy displays is essential. Consequently, to enable 3D holoscopic content to be delivered and presented on legacy displays, a display scalable 3D holoscopic coding approach is required. Hence, this paper presents a display scalable architecture for 3D holoscopic video coding with a three-layer approach, where each layer represents a different level of display scalability: Layer 0 - a single 2D view; Layer 1 - 3D stereo or multiview; and Layer 2 - the full 3D holoscopic content. In this context, a prediction method is proposed, which combines inter-layer prediction, aiming to exploit the existing redundancy between the multiview and the 3D holoscopic layers, with self-similarity compensated prediction (previously proposed by the authors for non-scalable 3D holoscopic video coding), aiming to exploit the spatial redundancy inherent to the 3D holoscopic enhancement layer. Experimental results show that the proposed combined prediction can significantly improve the rate-distortion performance of scalable 3D holoscopic video coding with respect to the authors' previously proposed solutions, where only inter-layer or only self-similarity prediction is used.

  6. Improved Trailing Edge Noise Model

    DEFF Research Database (Denmark)

    Bertagnolio, Franck


    The modeling of the surface pressure spectrum under a turbulent boundary layer is investigated in the presence of an adverse pressure gradient along the flow direction. It is shown that discrepancies between measurements and results from a well-known model increase as the pressure gradient increa...

  7. Improved model for statistical alignment

    Energy Technology Data Exchange (ETDEWEB)

    Miklos, I.; Toroczkai, Z. (Zoltan)


    The statistical approach to molecular sequence evolution involves the stochastic modeling of the substitution, insertion and deletion processes. Substitution has been modeled in a reliable way for more than three decades by using finite Markov processes. Insertion and deletion, however, seem to be more difficult to model, and the recent approaches cannot acceptably deal with multiple insertions and deletions. A new method based on a generating function approach is introduced to describe the multiple insertion process. The presented algorithm computes the approximate joint probability of two sequences in O(l³) running time, where l is the geometric mean of the sequence lengths.

  8. Model-based software process improvement (United States)

    Zettervall, Brenda T.


    The activities of a field test site for the Software Engineering Institute's software process definition project are discussed. Products tested included the improvement model itself, descriptive modeling techniques, the CMM level 2 framework document, and the use of process definition guidelines and templates. The software process improvement model represents a five-stage cyclic approach to organizational process improvement. The cycles consist of the initiating, diagnosing, establishing, acting, and leveraging phases.

  9. Pulmonary parenchyma segmentation in thin CT image sequences with spectral clustering and geodesic active contour model based on similarity (United States)

    He, Nana; Zhang, Xiaolong; Zhao, Juanjuan; Zhao, Huilan; Qiang, Yan


    While the popular thin layer scanning technology of spiral CT has helped to improve diagnoses of lung diseases, the large volumes of scanning images produced by the technology also dramatically increase the load of physicians in lesion detection. Computer-aided diagnosis techniques such as lesion segmentation in thin CT sequences have been developed to address this issue, but it remains a challenge to achieve high segmentation efficiency and accuracy without much human manual intervention. In this paper, we present our research on automated segmentation of lung parenchyma with an improved geodesic active contour model, the geodesic active contour model based on similarity (GACBS). Combining the spectral clustering algorithm based on the Nystrom method (SCN) with GACBS, our algorithm first extracts key image slices, then uses these slices to generate an initial contour of the pulmonary parenchyma of un-segmented slices with an interpolation algorithm, and finally segments the lung parenchyma of the un-segmented slices. Experimental results show that the segmentation results generated by our method are close to what manual segmentation can produce, with an average volume overlap ratio of 91.48%.

  10. Molecular Basis of LFER Modelling of Electronic Substituent Effect Using Fragment Quantum Self-Similarity Measures

    Czech Academy of Sciences Publication Activity Database

    Girónes, X.; Carbó-Dorca, R.; Ponec, Robert


    Roč. 43, č. 6 (2003), s. 2033-2039 ISSN 0095-2338 R&D Projects: GA MŠk OC D9.20 Institutional research plan: CEZ:AV0Z4072921 Keywords : hammett sigma constants * molecular similarity * fragment self-similarity measures Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.078, year: 2003

  11. A Unified Framework for Systematic Model Improvement

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay


    A unified framework for improving the quality of continuous time models of dynamic systems based on experimental data is presented. The framework is based on an interplay between stochastic differential equation (SDE) modelling, statistical tests and multivariate nonparametric regression...

  12. Olympic weightlifting and plyometric training with children provides similar or greater performance improvements than traditional resistance training. (United States)

    Chaouachi, Anis; Hammami, Raouf; Kaabi, Sofiene; Chamari, Karim; Drinkwater, Eric J; Behm, David G


    A number of organizations recommend that advanced resistance training (RT) techniques can be implemented with children. The objective of this study was to evaluate the effectiveness of Olympic-style weightlifting (OWL), plyometrics, and traditional RT programs with children. Sixty-three children (10-12 years) were randomly allocated to a 12-week control, OWL, plyometric, or traditional RT program. Pre- and post-training tests included body mass index (BMI), sum of skinfolds, countermovement jump (CMJ), horizontal jump, balance, 5- and 20-m sprint times, and isokinetic force and power at 60 and 300°·s⁻¹. Magnitude-based inferences were used to analyze the likelihood of an effect having a standardized (Cohen's) effect size exceeding 0.20. All interventions were generally superior to the control group. Olympic weightlifting was >80% likely to provide substantially better improvements than plyometric training for CMJ, horizontal jump, and 5- and 20-m sprint times, and >75% likely to substantially exceed traditional RT for balance and isokinetic power at 300°·s⁻¹. Plyometric training was >78% likely to elicit substantially better training adaptations than traditional RT for balance, isokinetic force at 60 and 300°·s⁻¹, isokinetic power at 300°·s⁻¹, and 5- and 20-m sprints. Traditional RT only exceeded plyometric training for BMI and isokinetic power at 60°·s⁻¹. Hence, OWL and plyometrics can provide similar or greater performance adaptations for children. It is recommended that any of the 3 training modalities can be implemented under professional supervision with proper training progressions to enhance training adaptations in children.
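The effect-size screen used in the analysis above rests on a standardized (Cohen's) effect size compared against a 0.20 threshold. A minimal sketch using the standard pooled-SD formula; the group means, SDs, and sample sizes are hypothetical, not the study's data:

```python
# Cohen's d with pooled standard deviation, flagged as "substantial"
# when it exceeds the 0.20 threshold named in the abstract.
import math

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Standardized mean difference with pooled SD."""
    pooled_var = ((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

def substantial(d, threshold=0.20):
    return abs(d) > threshold

# Hypothetical CMJ gains (cm) for two training groups of 21 children each:
d = cohens_d(4.1, 2.0, 21, 2.9, 2.2, 21)
print(round(d, 2), substantial(d))  # 0.57 True
```

Magnitude-based inference then attaches a probability (e.g. ">80% likely") to the true effect exceeding that threshold, rather than reporting only a p-value.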

  13. Ethnic differences in the effects of media on body image: the effects of priming with ethnically different or similar models. (United States)

    Bruns, Gina L; Carter, Michele M


    Media exposure has been positively correlated with body dissatisfaction. While body image concerns are common, being African American has been found to be a protective factor in the development of body dissatisfaction. Participants either viewed ten advertisements showing 1) ethnically-similar thin models; 2) ethnically-different thin models; 3) ethnically-similar plus-sized models; and 4) ethnically-diverse plus-sized models. Following exposure, body image was measured. African American women had less body dissatisfaction than Caucasian women. Ethnically-similar thin-model conditions did not elicit greater body dissatisfaction scores than ethnically-different thin or plus-sized models nor did the ethnicity of the model impact ratings of body dissatisfaction for women of either race. There were no differences among the African American women exposed to plus-sized versus thin models. Among Caucasian women exposure to plus-sized models resulted in greater body dissatisfaction than exposure to thin models. Results support existing literature that African American women experience less body dissatisfaction than Caucasian women even following exposure to an ethnically-similar thin model. Additionally, women exposed to plus-sized model conditions experienced greater body dissatisfaction than those shown thin models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Testing the Effects of Team Processes on Team Member Schema Similarity and Team Performance: Examination of the Team Member Schema Similarity Model

    National Research Council Canada - National Science Library

    Rentsch, Joan


    .... Team membership influences and team interaction processes were examined as antecedents to team member teamwork schema similarity, which was conceptualized as team member teamwork schema agreement and accuracy...

  15. A Model for Comparative Analysis of the Similarity between Android and iOS Operating Systems

    Directory of Open Access Journals (Sweden)

    Lixandroiu R.


    Full Text Available Due to the recent expansion of mobile devices, in this article we attempt an analysis of two of the most widely used mobile operating systems (OSs). The analysis is based on the calculation of Jaccard's similarity coefficient. To complete the analysis, we developed a hierarchy of factors for evaluating OSs. The analysis has shown that the two OSs are similar in terms of functionality, but there are a number of factors that, when weighted, make a difference.
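The Jaccard similarity coefficient the comparison method is built on is simply |A ∩ B| / |A ∪ B| over sets of features present in each system. A minimal sketch; the feature lists are illustrative placeholders, not the article's actual factor hierarchy:

```python
# Jaccard similarity between two feature sets.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical feature sets for the two operating systems:
android = {"multitasking", "notifications", "nfc", "widgets", "sideloading"}
ios = {"multitasking", "notifications", "nfc", "facetime"}

print(round(jaccard(android, ios), 2))  # 3 shared / 6 total = 0.5
```

The article's weighting of factors would correspond to replacing the raw set counts with weighted sums over the factor hierarchy.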


    Directory of Open Access Journals (Sweden)

    Sílvia Cristina Sobottka Rolim de Moura


    Full Text Available The demand for UHT cream has increased significantly. Several companies have diversified and increased their production, since increasingly demanding consumers want creams with a wide range of fat contents. The objective of the present work was to determine the density, apparent viscosity and thermal diffusivity of model solutions similar to cream in the temperature range of 30 to 70°C, studying the influence of fat content and temperature on the physical properties of the products. The statistical design applied was a 3x5 factorial plan, with fat content and temperature levels fixed at 15%, 25% and 35%, and at 30°C, 40°C, 50°C, 60°C and 70°C, respectively (STATISTICA 6.0). The carbohydrate and protein contents were kept constant, both at 3%. Density was determined by the fluid displacement method in a pycnometer; thermal diffusivity was based on the Dickerson method; and apparent viscosity was determined with a Rheotest 2.1 rheometer. The results for each property were analyzed by the response surface method. The data obtained gave significant results, indicating that the model reliably represented the variation of these properties with fat content (%) and temperature (°C).

  17. The Sustainable Improvement and Innovation Model


    Clark, Richard A.; Timms, Janice; Parnell, Peter F.; Griffith, Garry R.


    The Beef CRC's 'Sustainable Beef Profit Partnerships' (BPP) project is built around the Sustainable Improvement and Innovation (SI&I) Model – a model for the design, leadership and management of projects to achieve rapid and sustained improvement and innovation, and accelerated adoption. The model is implemented through a systemic approach to project design, and the development of a number of integrated strategies to guide the targeting of priority outcomes and work plans. The emphasis is o...

  18. Approximating a similarity matrix by a latent class model: A reappraisal of additive fuzzy clustering

    NARCIS (Netherlands)

    Braak, ter C.J.F.; Kourmpetis, Y.I.A.; Kiers, H.A.L.; Bink, M.C.A.M.


    Let Q be a given n×n square symmetric matrix of nonnegative elements between 0 and 1, e.g. similarities. Fuzzy clustering results in fuzzy assignment of individuals to K clusters. In additive fuzzy clustering, the n×K fuzzy memberships matrix P is found by least-squares approximation of the off-diagonal

  19. Approximating a similarity matrix by a latent class model : A reappraisal of additive fuzzy clustering

    NARCIS (Netherlands)

    ter Braak, Cajo J. F.; Kourmpetis, Yiannis; Kiers, Henk A. L.; Bink, Marco C. A. M.


    Let Q be a given n x n square symmetric matrix of nonnegative elements between 0 and 1, e.g. similarities. Fuzzy clustering results in fuzzy assignment of individuals to K clusters. In additive fuzzy clustering, the n x K fuzzy memberships matrix P is found by least-squares approximation of the
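The least-squares objective described above (fit the off-diagonal part of Q by P Pᵀ with nonnegative memberships P) can be sketched numerically. This rough illustration uses plain projected gradient descent, which is not the estimation method of the paper; the similarity matrix and cluster count are made up:

```python
# Approximate the off-diagonal of a similarity matrix Q by P P^T,
# where P holds nonnegative fuzzy memberships in K clusters.
import random

random.seed(1)
Q = [[1.0, 0.9, 0.1, 0.2],
     [0.9, 1.0, 0.2, 0.1],
     [0.1, 0.2, 1.0, 0.8],
     [0.2, 0.1, 0.8, 1.0]]
n, K = 4, 2
P = [[random.random() for _ in range(K)] for _ in range(n)]

def loss(P):
    """Off-diagonal least-squares misfit between Q and P P^T."""
    s = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                fit = sum(P[i][k] * P[j][k] for k in range(K))
                s += (Q[i][j] - fit) ** 2
    return s

lr = 0.01
start = loss(P)
for _ in range(2000):
    grad = [[0.0] * K for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                r = sum(P[i][k] * P[j][k] for k in range(K)) - Q[i][j]
                for k in range(K):
                    grad[i][k] += 2 * r * P[j][k]
                    grad[j][k] += 2 * r * P[i][k]
    for i in range(n):
        for k in range(K):
            P[i][k] = max(0.0, P[i][k] - lr * grad[i][k])  # keep nonnegative
print(loss(P) < start)  # True: the approximation improved
```

With this block-structured Q, the recovered memberships concentrate individuals 1-2 and 3-4 into the two clusters, which is exactly the fuzzy assignment the latent class view describes.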

  20. Molecular Quantum Similarity Measures from Fermi hole Densities: Modeling Hammett Sigma Constants

    Czech Academy of Sciences Publication Activity Database

    Girónes, X.; Ponec, Robert


    Roč. 46, č. 3 (2006), s. 1388-1393 ISSN 1549-9596 Grant - others:SMCT(ES) SAF2000/0223/C03/01 Institutional research plan: CEZ:AV0Z40720504 Keywords : molecula quantum similarity measures * fermi hole densities * substituent effect Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.423, year: 2006

  1. Consequences of team charter quality: Teamwork mental model similarity and team viability in engineering design student teams (United States)

    Conway Hughston, Veronica

    Since 1996 ABET has mandated that undergraduate engineering degree granting institutions focus on learning outcomes such as professional skills (i.e. solving unstructured problems and working in teams). As a result, engineering curricula were restructured to include team based learning---including team charters. Team charters were diffused into engineering education as one of many instructional activities to meet the ABET accreditation mandates. However, the implementation and execution of team charters into engineering team based classes has been inconsistent and accepted without empirical evidence of the consequences. The purpose of the current study was to investigate team effectiveness, operationalized as team viability, as an outcome of team charter implementation in an undergraduate engineering team based design course. Two research questions were the focus of the study: a) What is the relationship between team charter quality and viability in engineering student teams, and b) What is the relationship among team charter quality, teamwork mental model similarity, and viability in engineering student teams? Thirty-eight intact teams, 23 treatment and 15 comparison, participated in the investigation. Treatment teams attended a team charter lecture, and completed a team charter homework assignment. Each team charter was assessed and assigned a quality score. Comparison teams did not join the lecture, and were not asked to create a team charter. All teams completed each data collection phase: a) similarity rating pretest; b) similarity posttest; and c) team viability survey. Findings indicate that team viability was higher in teams that attended the lecture and completed the charter assignment. Teams with higher quality team charter scores reported higher levels of team viability than teams with lower quality charter scores. 
Lastly, no evidence was found to support teamwork mental model similarity as a partial mediator of the effect of team charter quality on team viability.

  2. Complete description of all self-similar models driven by Lévy stable noise (United States)

    Weron, Aleksander; Burnecki, Krzysztof; Mercik, Szymon; Weron, Karina


    A canonical decomposition of H-self-similar Lévy symmetric α-stable processes is presented. The resulting components, completely described by both deterministic kernels and the corresponding stochastic integral with respect to the Lévy symmetric α-stable motion, are shown to be related to the dissipative and conservative parts of the dynamics. This result provides stochastic analysis tools for studying anomalous diffusion phenomena in the Langevin equation framework. For example, a simple computer test for determining the origins of self-similarity is implemented for four real empirical time series recorded from different physical systems: an ionic current flow through a single channel in a biological membrane, the energy of solar flares, a seismic electric signal recorded during seismic Earth activity, and foreign exchange rate daily returns.

  3. Patient-centred management in idiopathic pulmonary fibrosis: similar themes in three communication models. (United States)

    Wuyts, Wim A; Peccatori, Fedro A; Russell, Anne-Marie


    The progressive and highly variable course of idiopathic pulmonary fibrosis (IPF) can present patients and their families with various challenges at different points of the disease. Structured communication between the healthcare professional and the patient is vital to ensure the best possible support and treatment for the patient. While research in this area has been limited, an increasing number of studies are emerging that support the role of communication in patients with debilitating and fatal lung diseases. Communication models used in other conditions that share many challenges with IPF, such as cancer, provide important insights for developing specifically designed patient support and communications models in IPF. Three communication models will be described: 1) the patient-centred care model (for oncology); 2) the three pillars of care model (for IPF); and 3) the Brompton model of care (for interstitial lung disease). Themes common to all three models include comprehensive patient education, encouraged patient participation and an accessible healthcare system, all supported by a collaborative provider-patient relationship. The development of effective communication skills is an on-going process and it is recommended to examine communication models used in other chronic diseases. ©ERS 2014.

  4. Deep Convolutional Neural Networks Outperform Feature-Based But Not Categorical Models in Explaining Object Similarity Judgments

    Directory of Open Access Journals (Sweden)

    Kamila M. Jozwik


    Full Text Available Recent advances in Deep convolutional Neural Networks (DNNs) have enabled unprecedentedly accurate computational models of brain representations, and present an exciting opportunity to model diverse cognitive functions. State-of-the-art DNNs achieve human-level performance on object categorisation, but it is unclear how well they capture human behavior on complex cognitive tasks. Recent reports suggest that DNNs can explain significant variance in one such task, judging object similarity. Here, we extend these findings by replicating them for a rich set of object images, comparing performance across layers within two DNNs of different depths, and examining how the DNNs’ performance compares to that of non-computational “conceptual” models. Human observers performed similarity judgments for a set of 92 images of real-world objects. Representations of the same images were obtained in each of the layers of two DNNs of different depths (8-layer AlexNet and 16-layer VGG-16). To create conceptual models, other human observers generated visual-feature labels (e.g., “eye”) and category labels (e.g., “animal”) for the same image set. Feature labels were divided into parts, colors, textures and contours, while category labels were divided into subordinate, basic, and superordinate categories. We fitted models derived from the features, categories, and from each layer of each DNN to the similarity judgments, using representational similarity analysis to evaluate model performance. In both DNNs, similarity within the last layer explains most of the explainable variance in human similarity judgments. The last layer outperforms almost all feature-based models. Late and mid-level layers outperform some but not all feature-based models. Importantly, categorical models predict similarity judgments significantly better than any DNN layer. Our results provide further evidence for commonalities between DNNs and brain representations. Models derived from
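The representational similarity analysis (RSA) used to score the models above compares the geometry of two representations: build a dissimilarity matrix for each, then correlate their upper triangles. A toy sketch; the four stimulus vectors and the use of Euclidean distance with a Pearson correlation are simplifying assumptions:

```python
# Toy RSA: correlate the pairwise-dissimilarity structure of a "model
# layer" representation with that of a "human" representation.
import math

def rdm(vectors):
    """Pairwise Euclidean dissimilarities, upper triangle as a flat list."""
    out = []
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            out.append(math.dist(vectors[i], vectors[j]))
    return out

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical representations of four stimuli in two systems:
layer = [(0.0, 0.1), (0.1, 0.0), (2.0, 2.1), (2.1, 2.0)]
human = [(0.0, 0.0), (0.2, 0.1), (1.9, 2.0), (2.0, 1.9)]
score = pearson(rdm(layer), rdm(human))
print(score > 0.9)  # the two representational geometries agree closely
```

Because only the dissimilarity structure is compared, RSA lets a DNN layer, a feature model, and human judgments be scored on a common footing, which is what permits the layer-vs-category comparison in the abstract.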

  5. Possible Implications of a Vortex Gas Model and Self-Similarity for Tornadogenesis and Maintenance


    Dokken, Douglas P.; Scholz, Kurt; Shvartsman, Mikhail M.; Bělík, Pavel; Potvin, Corey; Dahl, Brittany; McGovern, Amy


    We describe tornadogenesis and maintenance using the 3-dimensional vortex gas model presented in Chorin (1994) and developed further in Flandoli and Gubinelli (2002). We suggest that high-energy, super-critical vortices in the sense of Benjamin (1962), which have been studied by Fiedler and Rotunno (1986) and have negative temperature in the sense of Onsager (1949), play an important role in the model. We speculate that the formation of high-temperature vortices is related to the helicity inherite...

  6. Study for the design method of multi-agent diagnostic system to improve diagnostic performance for similar abnormality

    International Nuclear Information System (INIS)

    Minowa, Hirotsugu; Gofuku, Akio


    Accidents at industrial plants cause large human, economic, and social-credibility losses. Recently, studies of diagnostic methods using machine learning techniques such as support vector machines have been expected to detect the occurrence of abnormality in a plant early and correctly. It has been reported that these diagnostic machines have high accuracy in diagnosing the operating state of an industrial plant when a single abnormality occurs. However, each diagnostic machine in a multi-agent diagnostic system may misdiagnose similar abnormalities as the same abnormality as the number of abnormalities to diagnose increases. As a consequence, a single diagnostic machine may show higher diagnostic performance than a multi-agent diagnostic system, because decision-making that accounts for possible misdiagnoses is difficult. Therefore, we study a design method for multi-agent diagnostic systems that diagnose similar abnormalities correctly. The method aims to realize the automatic generation of a diagnostic system in which the generation process and the location of diagnostic machines are optimized to correctly diagnose similar abnormalities, which are identified from the similarity of process signals by a statistical method. This paper explains our design method and reports the results of evaluating our method as applied to the process data of the fast-breeder reactor Monju

  7. Improved models of dense anharmonic lattices

    Energy Technology Data Exchange (ETDEWEB)

    Rosenau, P., E-mail:; Zilburg, A.


    We present two improved quasi-continuous models of dense, strictly anharmonic chains. The direct expansion, which includes the leading effect due to lattice dispersion, results in a Boussinesq-type PDE with a compacton as its basic solitary mode. Without increasing its complexity we improve the model by including additional terms in the expanded interparticle potential, with the resulting compacton having a milder singularity at its edges. Particular care is applied to the Hertz potential due to its non-analyticity. Since, however, the PDEs of both the basic and the improved model are ill posed, they are unsuitable for a study of chain dynamics. Using the bond length as a state variable we manipulate its dispersion and derive a well posed fourth order PDE. - Highlights: • An improved PDE model of a Newtonian lattice renders compacton solutions. • Compactons are classical solutions of the improved model and hence amenable to standard analysis. • An alternative well posed model enables the study of head-on interactions of lattices' solitary waves. • Well posed modeling of the Hertz potential.

  8. Modeling the angular motion dynamics of spacecraft with a magnetic attitude control system based on experimental studies and dynamic similarity (United States)

    Kulkov, V. M.; Medvedskii, A. L.; Terentyev, V. V.; Firsyuk, S. O.; Shemyakov, A. O.


    The problem of spacecraft attitude control using electromagnetic systems interacting with the Earth's magnetic field is considered. A set of dimensionless parameters has been formed to investigate the spacecraft orientation regimes based on dynamically similar models. The results of experimental studies of small spacecraft with a magnetic attitude control system can be extrapolated to the in-orbit spacecraft motion control regimes by using the methods of the dimensional and similarity theory.

  9. A general model for metabolic scaling in self-similar asymmetric networks.

    Directory of Open Access Journals (Sweden)

    Alexander Byers Brummer


Full Text Available How a particular attribute of an organism changes or scales with its body size is known as an allometry. Biological allometries, such as metabolic scaling, have been hypothesized to result from selection to maximize how vascular networks fill space yet minimize internal transport distances and resistances. The West, Brown, Enquist (WBE) model argues that these two principles (space-filling and energy minimization) (i) are general principles underlying the evolution of the diversity of biological networks across plants and animals and (ii) can be used to predict how the resulting geometry of biological networks governs their allometric scaling. Perhaps the most central biological allometry is how metabolic rate scales with body size. A core assumption of the WBE model is that networks are symmetric with respect to their geometric properties; that is, any two given branches within the same generation in the network are assumed to have identical lengths and radii. However, biological networks are rarely if ever symmetric. An open question is: does incorporating asymmetric branching change or influence the predictions of the WBE model? We derive a general network model that relaxes the symmetry assumption and define two classes of asymmetrically bifurcating networks. We show that asymmetric branching can be incorporated into the WBE model. This asymmetric version of the WBE model yields several theoretical predictions for the structure, physiology, and metabolism of organisms, specifically for the cardiovascular system. We show how network asymmetry can now be incorporated into many allometric scaling relationships via the total network volume. Most importantly, we show that the 3/4 metabolic scaling exponent of Kleiber's law can still be attained within many asymmetric networks.
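The symmetric special case of the WBE prediction quoted above can be reproduced in a few lines: with branching ratio n, an area-preserving radius ratio β = n^(−1/2) and a space-filling length ratio γ = n^(−1/3), the metabolic exponent is a = −ln n / ln(γβ²), which gives Kleiber's 3/4 for any n. A minimal sketch of the symmetric case only (the paper's asymmetric generalization is not reproduced here):

```python
import math

def wbe_exponent(n: int) -> float:
    """Metabolic scaling exponent for a symmetric, self-similar WBE network.

    beta:  radius scale factor r_{k+1}/r_k (area-preserving branching)
    gamma: length scale factor l_{k+1}/l_k (space-filling branching)
    """
    beta = n ** (-1.0 / 2.0)
    gamma = n ** (-1.0 / 3.0)
    return -math.log(n) / math.log(gamma * beta ** 2)

print(wbe_exponent(2))  # -> 0.75 regardless of branching ratio n > 1
```

Since γβ² = n^(−4/3), the n-dependence cancels and the exponent is exactly 3/4 for every branching ratio, which is why the symmetric model reproduces Kleiber's law.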

  10. Improved ionic model of liquid uranium dioxide

    NARCIS (Netherlands)

Gryaznov; Iosilevski; Yakub, E.; Fortov; Hyland, G. J.; Ronchi, C.

    The paper presents a model for liquid uranium dioxide, obtained by improving a simplified ionic model, previously adopted to describe the equation of state of this substance [1]. A "chemical picture" is used for liquid UO2 of stoichiometric and non-stoichiometric composition. Several ionic species

  11. Understanding catchment behaviour through model concept improvement

    NARCIS (Netherlands)

    Fenicia, F.


    This thesis describes an approach to model development based on the concept of iterative model improvement, which is a process where by trial and error different hypotheses of catchment behaviour are progressively tested, and the understanding of the system proceeds through a combined process of

  12. A Continuous Improvement Capital Funding Model. (United States)

    Adams, Matt


    Describes a capital funding model that helps assess facility renewal needs in a way that minimizes resources while maximizing results. The article explains the sub-components of a continuous improvement capital funding model, including budgeting processes for finish renewal, building performance renewal, and critical outcome. (GR)

  13. Efficient Adoption and Assessment of Multiple Process Improvement Reference Models

    Directory of Open Access Journals (Sweden)

    Simona Jeners


Full Text Available A variety of reference models such as CMMI, COBIT or ITIL support IT organizations in improving their processes. These process improvement reference models (IRMs) cover different domains such as IT development, IT services or IT governance, but also share some similarities. As there are organizations that address multiple domains and need to coordinate their processes in their improvement, we present MoSaIC, an approach to support organizations in efficiently adopting and conforming to multiple IRMs. Our solution realizes a semantic integration of IRMs based on common meta-models. The resulting IRM integration model enables organizations to efficiently implement and assess multiple IRMs and to benefit from synergy effects.

  14. Testing the model-observer similarity hypothesis with text-based worked examples

    NARCIS (Netherlands)

    Hoogerheide, V.; Loyens, S.M.M.; Jadi, Fedora; Vrins, Anna; van Gog, T.


    Example-based learning is a very effective and efficient instructional strategy for novices. It can be implemented using text-based worked examples that provide a written demonstration of how to perform a task, or (video) modelling examples in which an instructor (the ‘model’) provides a

  15. Improved Nonequilibrium Algebraic Model Of Turbulence (United States)

    Johnson, D. A.; Coakley, T. J.


A blend of previous models predicts pressure distributions more accurately. The improved algebraic model represents some of the time-averaged effects of turbulence in transonic flow of air over an airfoil. It is based partly on comparisons among various eddy-viscosity formulations for turbulence and partly on the premise that the law of the wall is more universally valid in the immediate region of the surface in the presence of an adverse pressure gradient than are mixing-length theory and the original Johnson and King model.

  16. Can better modelling improve tokamak control?

    International Nuclear Information System (INIS)

    Lister, J.B.; Vyas, P.; Ward, D.J.; Albanese, R.; Ambrosino, G.; Ariola, M.; Villone, F.; Coutlis, A.; Limebeer, D.J.N.; Wainwright, J.P.


    The control of present day tokamaks usually relies upon primitive modelling and TCV is used to illustrate this. A counter example is provided by the successful implementation of high order SISO controllers on COMPASS-D. Suitable models of tokamaks are required to exploit the potential of modern control techniques. A physics based MIMO model of TCV is presented and validated with experimental closed loop responses. A system identified open loop model is also presented. An enhanced controller based on these models is designed and the performance improvements discussed. (author) 5 figs., 9 refs
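The gap between "primitive" modelling and model-based control can be illustrated with the simplest possible SISO loop: a first-order plant stabilized by a discrete PI controller, whose integral action removes steady-state offset. This is a generic textbook sketch, not the COMPASS-D or TCV controller; the plant constants and gains are invented:

```python
def simulate_pi(setpoint=1.0, a=1.0, b=1.0, kp=2.0, ki=1.0,
                dt=0.01, steps=5000):
    """Explicit-Euler simulation of dx/dt = -a*x + b*u under PI control."""
    x, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - x
        integral += error * dt          # accumulate tracking error
        u = kp * error + ki * integral  # PI control law
        x += (-a * x + b * u) * dt      # plant update
    return x

print(round(simulate_pi(), 3))  # settles at the setpoint 1.0
```

With these gains the closed-loop eigenvalues are both negative real, so the response converges without oscillation; a pure proportional controller would leave a steady-state offset of b·kp/(a + b·kp) below the setpoint.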

  17. Estradiol and progesterone exhibit similar patterns of hepatic gene expression regulation in the bovine model.

    Directory of Open Access Journals (Sweden)

    Carla A Piccinato

Full Text Available Female sex steroid hormones, estradiol-17β (E2-17β) and progesterone (P4), regulate reproductive function and gene expression in a broad range of tissues. Given the central role of the liver in regulating homeostasis, including steroid hormone metabolism, we sought to understand how E2-17β and P4 interact to affect global gene expression in liver. Ovariectomized cows (n = 8) were randomly assigned to 4 treatment groups applied in a replicated Latin Square design: (1) no hormone supplementation, (2) E2-17β treatment (ear implant), (3) P4 treatment (intravaginal inserts), and (4) E2-17β combined with P4. After 14 d of treatment, liver biopsies were collected, allowing 28 d intervals between periods. Changes in gene expression in the liver biopsies were monitored using bovine-specific arrays. Treatment with E2-17β altered expression of 479 genes, P4 472 genes, and combined treatment significantly altered expression of 468 genes. In total, 578 genes exhibited altered expression, including a remarkable number (346 genes) that responded similarly to E2-17β, P4, or combined treatment. Additional evidence for similar gene expression actions of E2-17β and/or P4 were: principal component analysis placed almost every treatment array at a substantial distance from controls; Venn diagrams indicated overall treatment effects for most regulated genes; clustering analysis indicated the two major clusters had all treatments up-regulating (172 genes) or down-regulating (173 genes) expression. Thus, unexpectedly, common biological pathways were regulated by E2-17β and/or P4 in liver. This indicates that the mechanism of action of these steroid hormones in the liver might be either indirect or might occur through non-genomic pathways. This unusual pattern of gene expression in response to steroid hormones is consistent with the idea that there are classical and non-classical tissue-specific responses to steroid hormone actions. Future studies are needed to elucidate

  18. Quality of life and sleep quality are similarly improved after aquatic or dry-land aerobic training in patients with type 2 diabetes: A randomized clinical trial. (United States)

    S Delevatti, Rodrigo; Schuch, Felipe Barreto; Kanitz, Ana Carolina; Alberton, Cristine L; Marson, Elisa Corrêa; Lisboa, Salime Chedid; Pinho, Carolina Dertzbocher Feil; Bregagnol, Luciana Peruchena; Becker, Maríndia Teixeira; Kruel, Luiz Fernando M


To compare the effects of two aerobic training models, in water and on dry land, on quality of life, depressive symptoms and sleep quality in patients with type 2 diabetes. Randomized clinical trial. Thirty-five patients with type 2 diabetes were randomly assigned to an aquatic aerobic training group (n=17) or a dry-land aerobic training group (n=18). Exercise training lasted 12 weeks, performed in three weekly sessions (45 min/session), with intensity progressing from 85% to 100% of the heart rate at the anaerobic threshold during the interventions. All outcomes were evaluated at baseline and 12 weeks later. In the per-protocol analysis, the physical and psychological domains of quality of life improved in both groups (p<0.05) without between-group differences. Overall quality of life and sleep quality improved in both groups (p<0.05), without between-group differences, in both per-protocol and intention-to-treat analyses. No changes in depressive symptoms were observed in either group at follow-up. Aerobic training in an aquatic environment provides effects similar to aerobic training in a dry-land environment on quality of life, depressive symptoms and sleep quality in patients with type 2 diabetes. Clinical trial reg. no. NCT01956357. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  19. Improving the physiological realism of experimental models. (United States)

    Vinnakota, Kalyan C; Cha, Chae Y; Rorsman, Patrik; Balaban, Robert S; La Gerche, Andre; Wade-Martins, Richard; Beard, Daniel A; Jeneson, Jeroen A L


    The Virtual Physiological Human (VPH) project aims to develop integrative, explanatory and predictive computational models (C-Models) as numerical investigational tools to study disease, identify and design effective therapies and provide an in silico platform for drug screening. Ultimately, these models rely on the analysis and integration of experimental data. As such, the success of VPH depends on the availability of physiologically realistic experimental models (E-Models) of human organ function that can be parametrized to test the numerical models. Here, the current state of suitable E-models, ranging from in vitro non-human cell organelles to in vivo human organ systems, is discussed. Specifically, challenges and recent progress in improving the physiological realism of E-models that may benefit the VPH project are highlighted and discussed using examples from the field of research on cardiovascular disease, musculoskeletal disorders, diabetes and Parkinson's disease.

  20. Similarities between pinch analysis and classical blast furnace analysis methods. Possible improvement by synthesis. Paper no. IGEC-1-004

    International Nuclear Information System (INIS)

    Ryman, C.; Grip, C.-E.; Franck, P.-A.; Wikstrom, J.-O.


Pinch analysis originated at UMIST in the 1970s. It has since been used as a method for energy analysis and optimisation of industrial systems. The blast furnace process for reducing iron oxide to molten iron is a very important process unit in the metallurgical industry. It is a counter-current shaft process with a wide temperature range and gaseous, solid and liquid phases present in different zones. Because of this, the blast furnace acts as a system of different sub-processes rather than a single process. The analysis tools developed to describe the process are in some respects similar to the tools of pinch analysis, yet the exchange between the two fields of knowledge has so far been negligible. In this paper the methods are described and compared. Problems, possibilities and advantages of an exchange and synthesis of knowledge are discussed. (author)

  1. An Improved Valuation Model for Technology Companies

    Directory of Open Access Journals (Sweden)

    Ako Doffou


Full Text Available This paper estimates some of the parameters of the Schwartz and Moon (2001) model using cross-sectional data. Stochastic costs, future financing, capital expenditures and depreciation are taken into account. Some special conditions are also set: the speed of adjustment parameters are equal; the implied half-life of the sales growth process is linked to analyst forecasts; and the risk-adjustment parameter is inferred from the company’s observed stock price beta. The model is illustrated in the valuation of Google, Amazon, eBay, Facebook and Yahoo. The improved model is far superior to the Schwartz and Moon (2001) model.
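One of the special conditions mentioned, linking the implied half-life of the sales-growth process to analyst forecasts, rests on the standard mean-reversion identity κ = ln 2 / half-life. A hedged sketch of that single ingredient, with illustrative numbers rather than the paper's calibration:

```python
import math

def growth_path(g0, g_bar, half_life, years):
    """Deterministic mean-reverting growth path.

    kappa = ln(2) / half_life, so the excess growth (g0 - g_bar)
    is halved after exactly `half_life` years.
    """
    kappa = math.log(2) / half_life
    return [g_bar + (g0 - g_bar) * math.exp(-kappa * t)
            for t in range(years + 1)]

# A startup growing 40%/yr reverting toward a 5%/yr long-run rate
path = growth_path(g0=0.40, g_bar=0.05, half_life=3.0, years=12)
# After one half-life (t = 3), excess growth has halved: 0.05 + 0.35/2
print(round(path[3], 4))  # -> 0.225
```

This is only the deterministic drift of the sales-growth process; the Schwartz-Moon model adds stochastic shocks and risk adjustment around it.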

  2. Linking accretion flow and particle acceleration in jets - II. Self-similar jet models with full relativistic MHD gravitational mass

    NARCIS (Netherlands)

    Polko, P.; Meier, D.L.; Markoff, S.


    We present a new, semi-analytic formalism to model the acceleration and collimation of relativistic jets in a gravitational potential. The gravitational energy density includes the kinetic, thermal and electromagnetic mass contributions. The solutions are close to self-similar throughout the

  3. Improving toxicity extrapolation using molecular sequence similarity: A case study of pyrethroids and the sodium ion channel (United States)

    A significant challenge in ecotoxicology has been determining chemical hazards to species with limited or no toxicity data. Currently, extrapolation tools like U.S. EPA’s Web-based Interspecies Correlation Estimation (Web-ICE; models categorize toxicity...
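Interspecies correlation estimation models of the Web-ICE type are, at bottom, least-squares regressions of log-transformed toxicity for a predicted taxon against log-transformed toxicity for a surrogate taxon. A generic sketch of such a regression; the data and coefficients are invented for illustration and are not EPA's fitted models:

```python
import math

def fit_loglog(surrogate, predicted):
    """Ordinary least squares on log10-transformed toxicity values."""
    xs = [math.log10(v) for v in surrogate]
    ys = [math.log10(v) for v in predicted]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return intercept, slope

def extrapolate(intercept, slope, surrogate_tox):
    """Predicted-species toxicity from a surrogate-species value."""
    return 10 ** (intercept + slope * math.log10(surrogate_tox))

# Synthetic data generated from log10(y) = 0.2 + 0.9 * log10(x)
surr = [1.0, 10.0, 100.0, 1000.0]
pred = [10 ** (0.2 + 0.9 * math.log10(x)) for x in surr]
a, b = fit_loglog(surr, pred)
print(round(a, 3), round(b, 3))  # recovers the generating coefficients
```

Because the synthetic points lie exactly on the generating line, the fit recovers the intercept and slope; with real toxicity data the regression's uncertainty bounds matter as much as the point estimate.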

  4. A process improvement model for software verification and validation (United States)

    Callahan, John; Sabolish, George


    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and space station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.


    Directory of Open Access Journals (Sweden)



Full Text Available ABSTRACT Contemporaneously, and with the successive paradigmatic revolutions inherent to management since the XVII century, we are witnessing a new era marked by a structural rupture in the way organizations are perceived. Market globalization, cemented by quick technological evolutions and associated with economic, cultural, political and social transformations, characterizes a reality where uncertainty is the only certainty for organizations and managers. Knowledge management has been interpreted by managers and academics as a viable alternative in a logic of creation and conservation of sustainable competitive advantages. However, there are several barriers to the implementation and development of knowledge management programs in organizations, with organizational culture being one of the most preponderant. In this sense, in this article we analyze and compare the Knowledge Creation and Conversion Model proposed by Nonaka and Takeuchi (1995) and Quinn and Rohrbaugh's Competing Values Framework (1983), since both have convergent conceptual lines that can assist managers in different sectors to guide their organization in a perspective of productivity, quality and market competitiveness.

  6. Improvement of MARS code reflood model

    International Nuclear Information System (INIS)

    Hwang, Moonkyu; Chung, Bub-Dong


A specifically designed heat transfer model for the reflood process, which normally occurs at low flow and low pressure, was originally incorporated in the MARS code. The model is essentially identical to that of the RELAP5/MOD3.3 code. The model, however, is known to have under-estimated the peak cladding temperature (PCT), with an earlier turn-over. In this study, the original MARS code reflood model is improved. Based on extensive sensitivity studies of both the hydraulic and wall heat transfer models, it is found that the dispersed flow film boiling (DFFB) wall heat transfer is the most influential process determining the PCT, whereas the interfacial drag model most affects the quenching time through the liquid carryover phenomenon. The model proposed by Bajorek and Young is incorporated for the DFFB wall heat transfer. Both space grid and droplet enhancement models are incorporated. Inverted annular film boiling (IAFB) is modeled by using the original PSI model of the code. The flow transition between the DFFB and IAFB is modeled using the TRACE code interpolation. A gas velocity threshold is also added to limit the top-down quenching effect. Assessment calculations are performed for the original and modified MARS codes for the Flecht-Seaset test and the RBHT test. Improvements are observed in terms of the PCT and quenching time predictions in the Flecht-Seaset assessment. In the case of the RBHT assessment, the improvement over the original MARS code is found to be marginal. A space grid effect, however, is clearly seen with the modified version of the MARS code. (author)

  7. An Improved Model for the Turbulent PBL (United States)

    Cheng, Y.; Canuto, V. M.; Howard, A. M.; Hansen, James E. (Technical Monitor)


Second-order turbulence models of the Mellor and Yamada type have been widely used to simulate the PBL. It is, however, known that these models have several deficiencies. For example, they all predict a critical Richardson number which is about four times smaller than the Large Eddy Simulation (LES) data, they are unable to match the surface data, and they predict a boundary layer height lower than expected. In the present model, we show that these difficulties are all overcome by a single new physical input: the use of the most complete expressions for both the pressure-velocity and the pressure-temperature correlations presently available. Each of the new terms represents a physical process that was not accounted for by previous models. The new model is presented in three different levels according to Mellor and Yamada's terminology, with new, ready-to-use expressions for the turbulent moments. We show that the new model reproduces several experimental and LES data sets better than previous models. As far as the PBL is concerned, we show that the model reproduces both the Kansas data as analyzed by Businger et al. in the context of Monin-Obukhov similarity theory for smaller Richardson numbers, as well as the LES and laboratory data up to Richardson numbers of order unity. We also show that the model yields a higher PBL height than previous models.
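The law of the wall invoked in the preceding record (no. 15) is the logarithmic law u⁺ = (1/κ) ln y⁺ + B, with the von Kármán constant κ ≈ 0.41 and intercept B ≈ 5.0; the premise there is that this relation survives adverse pressure gradients near the surface better than mixing-length closures. A quick evaluation sketch:

```python
import math

KAPPA, B = 0.41, 5.0  # von Karman constant and log-law intercept

def u_plus(y_plus: float) -> float:
    """Mean velocity in wall units from the logarithmic law of the wall."""
    return math.log(y_plus) / KAPPA + B

# Velocity across the log layer (y+ roughly 30 to 300)
for yp in (30, 100, 300):
    print(yp, round(u_plus(yp), 2))
```

The logarithmic growth of u⁺ with y⁺ is the "universal" behaviour the algebraic model leans on; closures differ mainly in how they blend this inner-layer law into the outer flow.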

  8. Improved reference models for middle atmosphere ozone (United States)

    Keating, G. M.; Pitts, M. C.; Chen, C.

This paper describes the improvements introduced into the original version of the ozone reference model of Keating and Young (1985, 1987), which is to be incorporated in the next COSPAR International Reference Atmosphere (CIRA). The ozone reference model will provide information on the global ozone distribution (including the ozone vertical structure as a function of month and latitude from 25 to 90 km), combining data from five recent satellite experiments: the Nimbus 7 LIMS, Nimbus 7 SBUV, AE-2 Stratospheric Aerosol and Gas Experiment (SAGE), Solar Mesosphere Explorer (SME) UV Spectrometer, and SME 1.27 Micron Airglow. The improved version of the reference model uses reprocessed AE-2 SAGE data (sunset) and extends the use of SAGE data from 1981 to the 1981-1983 period. Comparisons are presented between the results of this ozone model and various non-satellite measurements at different levels in the middle atmosphere.

  9. Two Echelon Supply Chain Integrated Inventory Model for Similar Products: A Case Study (United States)

    Parjane, Manoj Baburao; Dabade, Balaji Marutirao; Gulve, Milind Bhaskar


The purpose of this paper is to develop a mathematical model towards minimization of total cost across echelons in a multi-product supply chain environment. The scenario under consideration is a two-echelon supply chain system with one manufacturer, one retailer and M products. The retailer faces independent Poisson demand for each product. The retailer and the manufacturer are closely coupled, in the sense that information about any depletion in the inventory of a product at the retailer's end is immediately available to the manufacturer. Further, stock-outs are backordered at the retailer's end. Thus the costs incurred at the retailer's end are the holding costs and the backorder costs. The manufacturer has only one processor, which is time-shared among the M products. Production changeover from one product to another entails a fixed setup cost and a fixed setup time. Each unit of a product has a production time. Considering the cost components, and assuming transportation time and cost to be negligible, the objective of the study is to minimize the expected total cost considering both the manufacturer and the retailer. In the process, two aspects are to be defined. Firstly, every time a product is taken up for production, how much of it (the production batch size q) should be produced: a large value of q favors the manufacturer, while a small value of q suits the retailer. Secondly, for a given batch size q, at what level S of the retailer's inventory (the production queuing point) should a batch of the product be taken up for production by the manufacturer. A higher value of S incurs more holding cost, whereas a lower value of S increases the chance of backorder. A tradeoff between the holding and backorder costs must be taken into consideration while choosing an optimal value of S. It may be noted that due to multiple products and a single processor, a product taken up for production may not get the processor immediately, and may have to wait in a queue. The `S
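The S tradeoff described above, more holding cost at high S versus more backorders at low S, can be illustrated with a toy base-stock experiment: sample Poisson lead-time demand once, then compare average on-hand inventory and backorders for several candidate levels of S. This is a deliberately simplified single-product sketch, not the paper's multi-product queueing model:

```python
import math
import random

def holding_and_backorder(S, demands):
    """Average on-hand inventory and backorders for base-stock level S."""
    on_hand = sum(max(S - d, 0) for d in demands) / len(demands)
    backorder = sum(max(d - S, 0) for d in demands) / len(demands)
    return on_hand, backorder

rng = random.Random(42)

def poisson(lam):
    """Crude Poisson sampler via Knuth's product-of-uniforms method."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

demands = [poisson(5) for _ in range(10_000)]  # lead-time demand samples
for S in (3, 5, 8):
    h, b = holding_and_backorder(S, demands)
    print(S, round(h, 2), round(b, 2))  # holding rises, backorders fall with S
```

Because the same demand stream is reused for every S, the monotone tradeoff is exact: raising S can never reduce on-hand stock or increase backorders, which is the tension the paper's optimal S resolves against cost rates.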

  10. Circumplex models for the similarity relationships between higher-order factors of personality and personality disorders: an empirical analysis. (United States)

    Pukrop, R; Sass, H; Steinmeyer, E M


    Similarity relationships between personality factors and personality disorders (PDs) are usually described within the conceptual framework of the "big five" model. Recently, two-dimensional circumplex models have been suggested as alternatives, such as the interpersonal circle, the multifacet circumplex, and the circumplex of premorbid personality types. The present study is an empirical investigation of the similarity relationships between the big five, the 11 DSM-III-R personality disorders and four subaffective disorders. This was performed in a sample of 165 psychiatric inpatients. We tested the extent to which the relationships could be adequately represented in two dimensions and which circumplex model can be supported by the empirical configuration. Results obtained by principal-components analysis (PCA) strongly confirm the circumplex of premorbid personality, and to some extent the multifacet circumplex. However, the interpersonal circle cannot be confirmed.

  11. Similarity reduction of a three-dimensional model of the far turbulent wake behind a towed body (United States)

    Schmidt, Alexey


A semi-empirical three-dimensional model of turbulence in the far-turbulent-wake approximation behind a towed body in a passively stratified medium is considered. The sought-for quantities of the model are the velocity defect, the kinetic turbulent energy, the kinetic energy dissipation rate, the averaged density defect and the density fluctuation variance. The full group of transformations admitted by this model is found. The governing equations are reduced to ordinary differential equations by similarity reduction and the method of B-determining equations (BDE method). The system of ordinary differential equations was solved numerically. The obtained solutions agree with experimental data.

  12. An improved second-order continuum traffic model (United States)

    Marques, W., Jr.; Velasco, R. M.


    We construct a second-order continuum traffic model by using an iterative procedure in order to derive a constitutive relation for the traffic pressure which is similar to the Navier-Stokes equation for ordinary fluids. Our second-order traffic model represents an improvement on the traffic model suggested by Kerner and Konhäuser since the iterative procedure introduces, in the constitutive relation for the traffic pressure, a density-dependent viscosity coefficient. By using a finite-difference scheme based on the Steger-Warming flux splitting, we investigate the solution of our improved second-order traffic model for specific problems like shock fronts in traffic and freeway-lane drop.

  13. An improved second-order continuum traffic model

    International Nuclear Information System (INIS)

    Marques, W Jr; Velasco, R M


    We construct a second-order continuum traffic model by using an iterative procedure in order to derive a constitutive relation for the traffic pressure which is similar to the Navier–Stokes equation for ordinary fluids. Our second-order traffic model represents an improvement on the traffic model suggested by Kerner and Konhäuser since the iterative procedure introduces, in the constitutive relation for the traffic pressure, a density-dependent viscosity coefficient. By using a finite-difference scheme based on the Steger–Warming flux splitting, we investigate the solution of our improved second-order traffic model for specific problems like shock fronts in traffic and freeway-lane drop

  14. School Improvement Model to Foster Student Learning (United States)

    Rulloda, Rudolfo Barcena


Many classroom teachers are still using traditional teaching methods. These methods follow a one-way learning process, in which teachers introduce subject contents such as language arts, English, mathematics, science, and reading separately. However, the school improvement model takes into account that all students have…

  15. Improving Representational Competence with Concrete Models (United States)

    Stieff, Mike; Scopelitis, Stephanie; Lira, Matthew E.; DeSutter, Dane


    Representational competence is a primary contributor to student learning in science, technology, engineering, and math (STEM) disciplines and an optimal target for instruction at all educational levels. We describe the design and implementation of a learning activity that uses concrete models to improve students' representational competence and…

  16. Similar effect of sodium nitroprusside and acetylsalicylic acid on antioxidant system improvement in mouse liver but not in the brain. (United States)

    Wróbel, Maria; Góralska, Joanna; Jurkowska, Halina; Sura, Piotr


of H2S, a molecule with antioxidant properties. A similar effect was not observed in the brain. In the case of both sodium nitroprusside and aspirin administration, homeostasis of the sulfane sulfur level was noted in both the liver and the brain. Copyright © 2017 Elsevier B.V. and Société Française de Biochimie et Biologie Moléculaire (SFBBM). All rights reserved.

  17. Improved double Q2 rescaling model

    International Nuclear Information System (INIS)

    Gao Yonghua


The authors present an improved double Q2 rescaling model. Based on the condition of nuclear momentum conservation, the authors have found a formula for the model's Q2 rescaling parameters, establishing a connection between the Q2 rescaling parameter ζi (i = v, s, g) and the mean binding energy in the nucleus. Using this model, the authors could explain the experimental data on the EMC effect in the whole x region, the nuclear Drell-Yan process and the J/Ψ photoproduction process

  18. An improved model of equatorial scintillation (United States)

    Secan, J. A.; Bussey, R. M.; Fremouw, E. J.; Basu, Sa.


One of the main limitations of the modeling work that went into the equatorial section of the Wideband ionospheric scintillation model (WBMOD) was that the data set used in the modeling was limited to two stations near the dip equator (Ancon, Peru, and Kwajalein Island, in the North Pacific Ocean) at two fixed local times (nominally 1000 and 2200). Over the past year this section of the WBMOD model has been replaced by a model developed using data from three additional stations (Ascension Island, in the South Atlantic Ocean; Huancayo, Peru; and Manila, Philippines; data collected under the auspices of the USAF Phillips Laboratory Geophysics Directorate), which provide greater diversity in both latitude and longitude, as well as coverage of the entire day. The new model includes variations with latitude, local time, longitude, season, solar epoch, and geomagnetic activity level. The way in which the irregularity strength parameter CkL is modeled has also been changed. The new model provides the variation of the full probability distribution function (PDF) of log(CkL) rather than simply the average of log(CkL). This permits the user to specify a threshold on scintillation level, and the model will calculate the percentage of time that scintillation will exceed that level in the user-specified scenario. It will also permit calculation of scintillation levels at a user-specified percentile. A final improvement to the WBMOD model is the implementation of a new theory for calculating S4 on a two-way channel.
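The new capability described, turning a full PDF of log(CkL) into a percent-of-time exceedance, is a one-line computation once log(CkL) is modeled as normal at a given location and time: P(CkL > t) = 1 − Φ((log t − μ)/σ). A hedged sketch of that step, assuming normality, with invented μ and σ:

```python
import math

def exceedance_probability(log_threshold, mu, sigma):
    """P(log CkL > log_threshold) for normally distributed log(CkL)."""
    z = (log_threshold - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))  # 1 - Phi(z)

mu, sigma = 33.0, 1.5  # hypothetical mean and spread of log(CkL)
print(exceedance_probability(33.0, mu, sigma))  # 0.5 exactly at the median
print(exceedance_probability(36.0, mu, sigma))  # high levels are rarer
```

Inverting the same relation gives the percentile capability mentioned above: the level exceeded a fraction p of the time is μ + σ·Φ⁻¹(1 − p).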

  19. Analysis and improvement of Brinkman lattice Boltzmann schemes: bulk, boundary, interface. Similarity and distinctness with finite elements in heterogeneous porous media. (United States)

    Ginzburg, Irina; Silva, Goncalo; Talon, Laurent


This work focuses on the numerical solution of the Stokes-Brinkman equation for a voxel-type porous-media grid, resolved by one to eight spacings per permeability contrast of 1 to 10 orders of magnitude. It is first analytically demonstrated that the lattice Boltzmann method (LBM) and the linear-finite-element method (FEM) both suffer from a viscosity correction induced by the linear variation of the resistance with the velocity. This numerical artefact may lead to an apparent negative viscosity in low-permeable blocks, inducing spurious velocity oscillations. The two-relaxation-times (TRT) LBM may control this effect thanks to its free-tunable two-rate combination Λ. Moreover, the Brinkman-force-based BF-TRT schemes may maintain the nondimensional Darcy group and produce viscosity-independent permeability provided that the spatial distribution of Λ is fixed independently of the kinematic viscosity. Such a property is lost not only in the BF-BGK scheme but also in "partial bounce-back" TRT gray models, as shown in this work. Further, we propose a consistent and improved IBF-TRT model which removes the viscosity correction via a simple adjustment of the viscous-mode relaxation rate to the local permeability value. This protects the model from velocity fluctuations and, in parallel, improves effective permeability measurements, from porous channels to multiple dimensions. The framework of our exact analysis employs a symbolic approach developed for both LBM and FEM in single and stratified, unconfined, and bounded channels. It shows that even with similar bulk discretization, BF, IBF, and FEM may manifest quite different velocity profiles on coarse grids due to their intrinsic contrasts in the setting of interface continuity and no-slip conditions. While FEM enforces them on the grid vertexes, the LBM prescribes them implicitly. We derive effective LBM continuity conditions and show that the heterogeneous viscosity correction impacts them, a property also shared

  20. Improvements in continuum modeling for biomolecular systems (United States)

    Yu, Qiao; Ben-Zhuo, Lu


    Modeling of biomolecular systems plays an essential role in understanding biological processes, such as ionic flow across channels, protein modification or interaction, and cell signaling. The continuum model described by the Poisson-Boltzmann (PB)/Poisson-Nernst-Planck (PNP) equations has made great contributions towards simulation of these processes. However, the model has shortcomings in its commonly used form and cannot capture (or cannot accurately capture) some important physical properties of biological systems. Considerable efforts have been made to improve the continuum model to account for discrete particle interactions and to make progress in numerical methods to provide accurate and efficient simulations. This review will summarize recent main improvements in continuum modeling for biomolecular systems, with a focus on the size-modified models, the coupling of the classical density functional theory and the PNP equations, the coupling of polar and nonpolar interactions, and numerical progress. Project supported by the National Natural Science Foundation of China (Grant No. 91230106) and the Chinese Academy of Sciences Program for Cross & Cooperative Team of the Science & Technology Innovation.

  1. Improved Inference of Heteroscedastic Fixed Effects Models

    Directory of Open Access Journals (Sweden)

    Afshan Saeed


    Full Text Available Heteroscedasticity is a severe problem that distorts estimation and testing of panel data models (PDM). Arellano (1987) proposed the White (1980) estimator for PDMs with heteroscedastic errors, but it provides erroneous inference for data sets that include high leverage points. In this paper, our attempt is to improve the heteroscedasticity-consistent covariance matrix estimator (HCCME) for panel data sets with high leverage points. To draw robust inference for the PDM, our focus is on improving the kernel bootstrap estimators proposed by Racine and MacKinnon (2007). A Monte Carlo scheme is used to assess the results.
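    The sandwich-type correction at the heart of an HCCME can be illustrated in miniature. The sketch below computes White's (1980) heteroscedasticity-consistent standard error for a no-intercept simple regression; the scalar setting and the function name are illustrative assumptions, as the paper itself works with full panel-data matrices and bootstrap refinements.

    ```python
    def white_hc0_se(x, y):
        """White (1980) heteroscedasticity-consistent slope estimate and
        standard error (HC0) for a no-intercept regression y = b*x + e.
        A minimal scalar sketch of the sandwich idea, not the paper's
        panel-data estimator."""
        sxx = sum(xi * xi for xi in x)
        b = sum(xi * yi for xi, yi in zip(x, y)) / sxx
        resid = [yi - b * xi for xi, yi in zip(x, y)]
        # HC0 sandwich: sum of squared (regressor * residual), normalized
        # by the squared sum of squared regressors.
        var_b = sum((xi * ei) ** 2 for xi, ei in zip(x, resid)) / sxx ** 2
        return b, var_b ** 0.5

    # A perfect linear relation leaves zero residuals, hence a zero HC0 error.
    slope, se = white_hc0_se([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
    ```

    Unlike the classical OLS variance, this estimate lets each observation carry its own error variance, which is precisely what makes it consistent under heteroscedasticity.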

  2. Similarity-based multi-model ensemble approach for 1-15-day advance prediction of monsoon rainfall over India (United States)

    Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati


    The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on a similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. This approach is based on the assumption that training on similar conditions may provide better forecasts than the sequential training used in conventional MME approaches. In this approach, the training dataset is selected by matching the present-day conditions against the archived dataset; the days with the most similar conditions are identified and used to train the model. The coefficients thus generated are used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. the European Centre for Medium-Range Weather Forecasts (ECMWF), the United Kingdom Meteorological Office (UKMO), the National Centre for Environment Prediction (NCEP) and the China Meteorological Administration (CMA), have been used for developing the SMME forecasts. Forecasts for days 1-5, 6-10 and 11-15 were generated using the newly developed approach for each pentad of June-September during the years 2008-2013, and the skill of the model was analysed using verification scores, viz. the equitable threat score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and the Nash-Sutcliffe model efficiency index. Statistical analysis of the SMME forecasts shows superior forecast skill compared to the conventional MME and the individual models for all the pentads, viz. days 1-5, 6-10 and 11-15.
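    The core of the similarity criterion is a nearest-neighbour search over archived conditions. The sketch below ranks archived days by Euclidean distance to the current day's feature vector; the feature representation and the distance measure are illustrative assumptions, not the paper's exact criterion.

    ```python
    import math

    def most_similar_days(current, archive, k=3):
        """Return the k archived days whose condition vectors lie closest
        (Euclidean distance) to the current day's conditions. In a
        similarity-based MME, these days would form the training set for
        the ensemble weights instead of the most recent days."""
        def dist(vec):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(current, vec)))
        return sorted(archive, key=lambda day: dist(archive[day]))[:k]

    # Hypothetical archive: day label -> condition feature vector.
    archive = {"2008-06-01": [1.0, 2.0],
               "2008-07-15": [5.0, 5.0],
               "2009-06-03": [1.1, 2.1]}
    # The two days with conditions nearest to [1.0, 2.0] are selected.
    selected = most_similar_days([1.0, 2.0], archive, k=2)
    ```

    The regression coefficients would then be fitted only on the selected days, replacing the sequential training window of a conventional MME.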


  4. An Improved Network Security Situation Awareness Model

    Directory of Open Access Journals (Sweden)

    Li Fangwei


    Full Text Available In order to reflect the performance of network security assessment fully and accurately, a new network security situation awareness model based on information fusion was proposed. The network security situation is the result of fusing three aspects of evaluation. In terms of attack, to improve the accuracy of evaluation, a situation assessment method for DDoS attacks based on data packet information was proposed. In terms of vulnerability, an improved Common Vulnerability Scoring System (CVSS) was proposed, making the assessment more comprehensive. In terms of node weights, a method of calculating combined weights and optimizing the result with the Sequence Quadratic Program (SQP) algorithm, which reduces the uncertainty of fusion, was proposed. To verify the validity and necessity of the method, a testing platform was built and used for evaluation on the DARPA 2000 data sets. Experiments show that the method can improve the accuracy of evaluation results.

  5. A study of the predictive model on the user reaction time using the information amount and similarity

    International Nuclear Information System (INIS)

    Lee, Sungjin; Heo, Gyunyoung; Chang, S.H.


    Human operations through a user interface are divided into two types. The first is the single operation, performed on a static interface. The second is the sequential operation, which achieves a goal by handling several displays through the operator's navigation in a CRT-based console. The sequential operation has a similar meaning to a continuous task. Most operations in recently developed computer applications correspond to the sequential operation, and the single operation can be considered a part of the sequential operation. In the area of HCI (human-computer interaction) evaluation, the Hick-Hyman law counts as the most powerful theory. The most important factor in the Hick-Hyman equation for choice reaction time is the quantified amount of information conveyed by a statement, stimulus, or event. Generally, we can expect that if there are similarities between a series of interfaces, the human operator is able to use his attention resource effectively; that is, the performance of the human operator is increased by the similarity. The similarity may affect the allocation of the attention resource based on the separate STSS (short-term sensory store) and long-term memory. There are theories related to this concept, namely the task-switching paradigm and the law of practice. However, it is not easy to explain human operator performance with only the similarity or the information amount, and there are few theories that explain performance with a combination of the two. The objective of this paper is to propose and validate a quantitative, predictive model of user reaction time in CRT-based displays. Another objective is to validate various theories related to human cognition and perception, with the Hick-Hyman law and the law of practice as representative theories. (author)
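    The Hick-Hyman relation itself is compact enough to state as code. In the sketch below, the coefficients a and b are illustrative placeholders, not values fitted in this study.

    ```python
    import math

    def hick_hyman_rt(n_alternatives, a=0.2, b=0.15):
        """Mean choice reaction time under the Hick-Hyman law:
        RT = a + b * H, where H = log2(n) bits is the information
        conveyed by n equally likely alternatives. a (base time, in
        seconds) and b (seconds per bit) are hypothetical example
        coefficients."""
        h = math.log2(n_alternatives)  # information amount in bits
        return a + b * h
    ```

    With a single alternative there is no uncertainty (H = 0), so the predicted reaction time reduces to the base time a; each doubling of the number of alternatives adds one bit, hence b seconds.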

  6. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono


    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  7. A compressible wall-adapting similarity mixed model for large-eddy simulation of the impinging round jet (United States)

    Lodato, Guido; Vervisch, Luc; Domingo, Pascale


    Wall-jet interaction is studied with large-eddy simulation (LES) in which a mixed-similarity subgrid scale (SGS) closure is combined with the wall-adapting local eddy-viscosity (WALE) model for the eddy-viscosity term. The macrotemperature and macropressure are introduced to deduce a weakly compressible form of the mixed-similarity model, and the relevant formulation for the energy equation is deduced accordingly. LES prediction capabilities are assessed by comparing flow statistical properties against experiments on an unconfined impinging round jet at Reynolds numbers of 23 000 and 70 000. To quantify the benefit of the proposed WALE-similarity mixed model, the lower Reynolds number simulations are also performed using the standard WALE and Lagrangian dynamic Smagorinsky approaches. The unsteady compressible Navier-Stokes equations are integrated over 2.9 M, 3.5 M, and 5.5 M node Cartesian grids with an explicit fourth-order finite volume solver. Nonreflecting boundary conditions are enforced using a methodology accounting for the three-dimensional character of the turbulent flow at boundaries. A correct wall scaling is achieved from the combination of similarity and WALE approaches; for this wall-jet interaction, the SGS closure terms can be computed in the near-wall region without resorting to additional specific treatments. The possible impact of turbulent energy backscatter in such flow configurations is also addressed. It is found that, for the present configuration, the correct reproduction of reverse energy transfer plays a key role in the estimation of near-wall statistics, especially when the viscous sublayer is not properly resolved.

  8. Improving the transferability of hydrological model parameters under changing conditions (United States)

    Huang, Yingchun; Bárdossy, András


    Hydrological models are widely utilized to describe catchment behavior with observed hydro-meteorological data. Hydrological processes may be considered non-stationary under changing climate and land use conditions. An applicable hydrological model should be able to capture the essential features of the target catchment and therefore be transferable to different conditions. At present, many model applications based on stationarity assumptions are not sufficient for predicting further changes or time variability. The aim of this study is to explore new model calibration methods in order to improve the transferability of model parameters. To cope with the instability of model parameters calibrated on catchments in non-stationary conditions, we investigate the idea of simultaneous calibration on streamflow records for periods with dissimilar climate characteristics. In addition, a weather-based weighting function is implemented to adjust the calibration period to future trends. For regions with limited data and for ungauged basins, common calibration was applied by using information from similar catchments. Results show that model performance and the quality of parameter transfer could be well improved via common calibration. This model calibration approach will be used to enhance regional water management and flood forecasting capabilities.

  9. Visual Similarity of Words Alone Can Modulate Hemispheric Lateralization in Visual Word Recognition: Evidence From Modeling Chinese Character Recognition. (United States)

    Hsiao, Janet H; Cheung, Kit


    In Chinese orthography, the most common character structure consists of a semantic radical on the left and a phonetic radical on the right (SP characters); the minority, opposite arrangement also exists (PS characters). Recent studies showed that SP character processing is more left hemisphere (LH) lateralized than PS character processing. Nevertheless, it remains unclear whether this is due to phonetic radical position or character type frequency. Through computational modeling with artificial lexicons, in which we implement a theory of hemispheric asymmetry in perception but do not assume phonological processing being LH lateralized, we show that the difference in character type frequency alone is sufficient to exhibit the effect that the dominant type has a stronger LH lateralization than the minority type. This effect is due to higher visual similarity among characters in the dominant type than the minority type, demonstrating the modulation of visual similarity of words on hemispheric lateralization. Copyright © 2015 Cognitive Science Society, Inc.

  10. General relativistic self-similar waves that induce an anomalous acceleration into the standard model of cosmology

    CERN Document Server

    Smoller, Joel


    We prove that the Einstein equations in Standard Schwarzschild Coordinates close to form a system of three ordinary differential equations for a family of spherically symmetric, self-similar expansion waves, and the critical ($k=0$) Friedmann universe associated with the pure radiation phase of the Standard Model of Cosmology (FRW), is embedded as a single point in this family. Removing a scaling law and imposing regularity at the center, we prove that the family reduces to an implicitly defined one parameter family of distinct spacetimes determined by the value of a new {\\it acceleration parameter} $a$, such that $a=1$ corresponds to FRW. We prove that all self-similar spacetimes in the family are distinct from the non-critical $k\

  11. Improving PSA quality of KSNP PSA model

    International Nuclear Information System (INIS)

    Yang, Joon Eon; Ha, Jae Joo


    In the RIR (Risk-informed Regulation), PSA (Probabilistic Safety Assessment) plays a major role because it provides overall risk insights for the regulatory body and utility. Therefore, the scope, the level of details and the technical adequacy of PSA, i.e. the quality of PSA is to be ensured for the successful RIR. To improve the quality of Korean PSA, we evaluate the quality of the KSNP (Korean Standard Nuclear Power Plant) internal full-power PSA model based on the 'ASME PRA Standard' and the 'NEI PRA Peer Review Process Guidance.' As a working group, PSA experts of the regulatory body and industry also participated in the evaluation process. It is finally judged that the overall quality of the KSNP PSA is between the ASME Standard Capability Category I and II. We also derive some items to be improved for upgrading the quality of the PSA up to the ASME Standard Capability Category II. In this paper, we show the result of quality evaluation, and the activities to improve the quality of the KSNP PSA model

  12. Improving Bioenergy Crops through Dynamic Metabolic Modeling

    Directory of Open Access Journals (Sweden)

    Mojdeh Faraji


    Full Text Available Enormous advances in genetics and metabolic engineering have made it possible, in principle, to create new plants and crops with improved yield through targeted molecular alterations. However, while the potential is beyond doubt, the actual implementation of envisioned new strains is often difficult, due to the diverse and complex nature of plants. Indeed, the intrinsic complexity of plants makes intuitive predictions difficult and often unreliable. The hope for overcoming this challenge is that methods of data mining and computational systems biology may become powerful enough to serve as beneficial tools for guiding future experimentation. In the first part of this article, we review the complexities of plants, as well as some of the mathematical and computational methods that have been used in the recent past to deepen our understanding of crops and their potential yield improvements. In the second part, we present a specific case study that indicates how robust models may be employed for crop improvements. This case study focuses on the biosynthesis of lignin in switchgrass (Panicum virgatum). Switchgrass is considered one of the most promising candidates for the second generation of bioenergy production, which does not use edible plant parts. Lignin is important in this context, because it impedes the use of cellulose in such inedible plant materials. The dynamic model offers a platform for investigating the pathway behavior in transgenic lines. In particular, it allows predictions of lignin content and composition in numerous genetic perturbation scenarios.

  13. Mathematical evaluation of similarity factor using various weighing approaches on aceclofenac marketed formulations by model-independent method. (United States)

    Soni, T G; Desai, J U; Nagda, C D; Gandhi, T R; Chotai, N P


    The US Food and Drug Administration's (FDA's) guidance for industry on dissolution testing of immediate-release solid oral dosage forms describes that drug dissolution may be the rate-limiting step for drug absorption in the case of low-solubility/high-permeability drugs (BCS class II drugs). The US FDA guidance describes the model-independent mathematical approach proposed by Moore and Flanner for calculating a similarity factor (f2) of dissolution across a suitable time interval. In the present study, the similarity factor was calculated on dissolution data of two marketed aceclofenac tablets (a BCS class II drug) using various weighing approaches proposed by Gohel et al. The proposed approaches were compared with a conventional approach (W = 1). On the basis of consideration of variability, preference is given in the order approach 3 > approach 2 > approach 1, as approach 3 considers batch-to-batch as well as within-sample variability and shows the best similarity profile. Approach 2 considers batch-to-batch variability with higher specificity than approach 1.
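    The Moore-Flanner similarity factor has a simple closed form. The sketch below implements the conventional unweighted case (W = 1); the weighting approaches compared in the study modify the squared-difference term, and the example profiles are hypothetical.

    ```python
    import math

    def f2_similarity(ref, test):
        """Moore-Flanner similarity factor for two dissolution profiles:
            f2 = 50 * log10(100 / sqrt(1 + mean squared difference)).
        ref and test are percent-dissolved values at matching time points;
        f2 >= 50 is conventionally read as 'similar' profiles."""
        n = len(ref)
        msd = sum((r - t) ** 2 for r, t in zip(ref, test)) / n
        return 50 * math.log10(100 / math.sqrt(1 + msd))

    # Identical profiles reach the maximum value of 100; small pointwise
    # differences (here at most 3%) still yield f2 well above 50.
    same = f2_similarity([20, 45, 75, 90], [20, 45, 75, 90])
    close = f2_similarity([20, 45, 75, 90], [18, 43, 78, 92])
    ```

    Because the mean squared difference sits inside a logarithm, f2 is far more sensitive to small deviations near 100 than near 50, which is one reason weighting schemes that account for variability are of interest.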

  14. An Improved MUSIC Model for Gibbsite Surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Scott C.; Bickmore, Barry R.; Tadanier, Christopher J.; Rosso, Kevin M.


    Here we use gibbsite as a model system with which to test a recently published bond-valence method for predicting intrinsic pKa values for surface functional groups on oxides. At issue is whether the method is adequate when valence parameters for the functional groups are derived from ab initio structure optimization of surfaces terminated by vacuum. If not, ab initio molecular dynamics (AIMD) simulations of solvated surfaces (which are much more computationally expensive) will have to be used. To do this, we had to evaluate extant gibbsite potentiometric titration data for which some estimate of edge and basal surface area was available. Applying BET and recently developed atomic force microscopy methods, we found that most of these data sets were flawed, in that their surface area estimates were probably wrong. Similarly, there may have been problems with many of the titration procedures. However, one data set was adequate on both counts, and we applied our method of intrinsic surface pKa prediction to fitting a MUSIC model to these data with considerable success: several features of the titration data were predicted well. However, the model fit was certainly not perfect, and we experienced some difficulties optimizing highly charged, vacuum-terminated surfaces. Therefore, we conclude that we probably need to do AIMD simulations of solvated surfaces to adequately predict intrinsic pKa values for surface functional groups.

  15. Does model performance improve with complexity? A case study with three hydrological models (United States)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano


    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models have become more sophisticated. At the same time, simple conceptual models have also advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) for runoff but not for soil moisture. Furthermore, the most sophisticated PREVAH model shows an added value compared to the HBV model only in the case of soil moisture. Focusing on extreme events, we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for the prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance at lower altitudes as opposed to (pre-)alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).
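    One widely used agreement metric in hydrological model evaluation (not necessarily the exact set used in this study) is the Nash-Sutcliffe model efficiency, which is simple to compute:

    ```python
    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe efficiency: 1 - (sum of squared model errors /
        variance of the observations about their mean). 1.0 is a perfect
        fit; 0.0 means the model predicts no better than the mean of the
        observations; negative values mean worse than the mean."""
        mean_obs = sum(obs) / len(obs)
        sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
        sst = sum((o - mean_obs) ** 2 for o in obs)
        return 1.0 - sse / sst

    perfect = nash_sutcliffe([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
    # Predicting the observed mean everywhere yields exactly 0.0.
    mean_only = nash_sutcliffe([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])
    ```

    Because the denominator is the observed variance, the metric weights high-flow periods heavily, which is one reason complementary metrics for anomalies and long-term means are worth reporting alongside it.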

  16. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B.; Fagan, J.R. Jr.


    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamics (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. This will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. The tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.

  17. A Highly Similar Mathematical Model for Cerebral Blood Flow Velocity in Geriatric Patients with Suspected Cerebrovascular Disease (United States)

    Liu, Bo; Li, Qi; Wang, Jisheng; Xiang, Hu; Ge, Hong; Wang, Hui; Xie, Peng


    Cerebral blood flow velocity (CBFV) is an important parameter for the study of cerebral hemodynamics. However, a simple and highly similar mathematical model has not yet been established for analyzing CBFV. To alleviate this issue, through TCD examination of 100 geriatric patients with suspected cerebrovascular disease (46 males and 54 females), we established a representative eighth-order Fourier function Vx(t) that simulates the CBFV. The measured TCD waveforms were compared to those derived from Vx(t), and a Kolmogorov-Smirnov test was employed to determine validity. The results showed that the TCD waves could be reconstructed for patients with different CBFVs by implementing their variable heart rates and the formulated maximum/minimum of Vx(t). Comparisons between derived and measured TCD waveforms suggest that the two waveforms are very similar. The results confirm that CBFV can be well modeled by an eighth-order Fourier function. This function Vx(t) can be used extensively for prospective studies of cerebral hemodynamics in geriatric patients with suspected cerebrovascular disease.
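    An eighth-order Fourier function of the kind used for Vx(t) is a truncated trigonometric series. The sketch below evaluates such a series for arbitrary coefficients; the coefficients themselves would come from fitting measured TCD waveforms and are not reproduced here, so the values in the example are hypothetical.

    ```python
    import math

    def fourier_series(t, a0, coeffs, period):
        """Evaluate a truncated Fourier series
            V(t) = a0 + sum_k [a_k cos(k*w*t) + b_k sin(k*w*t)],
        with angular frequency w = 2*pi/period. coeffs is a list of
        (a_k, b_k) pairs; eight pairs give the eighth-order form used to
        model a CBFV waveform over one cardiac cycle."""
        w = 2.0 * math.pi / period
        return a0 + sum(a * math.cos(k * w * t) + b * math.sin(k * w * t)
                        for k, (a, b) in enumerate(coeffs, start=1))

    # First-order example: mean level 1.0 plus a cosine of amplitude 0.5.
    v0 = fourier_series(0.0, 1.0, [(0.5, 0.0)], period=1.0)
    ```

    Tying the period to each patient's heart rate is what lets a single fitted coefficient set reconstruct waveforms for patients with different CBFVs.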

  18. Improving Marine Ecosystem Models with Biochemical Tracers (United States)

    Pethybridge, Heidi R.; Choy, C. Anela; Polovina, Jeffrey J.; Fulton, Elizabeth A.


    Empirical data on food web dynamics and predator-prey interactions underpin ecosystem models, which are increasingly used to support strategic management of marine resources. These data have traditionally derived from stomach content analysis, but new and complementary forms of ecological data are increasingly available from biochemical tracer techniques. Extensive opportunities exist to improve the empirical robustness of ecosystem models through the incorporation of biochemical tracer data and derived indices, an area that is rapidly expanding because of advances in analytical developments and sophisticated statistical techniques. Here, we explore the trophic information required by ecosystem model frameworks (species, individual, and size based) and match them to the most commonly used biochemical tracers (bulk tissue and compound-specific stable isotopes, fatty acids, and trace elements). Key quantitative parameters derived from biochemical tracers include estimates of diet composition, niche width, and trophic position. Biochemical tracers also provide powerful insight into the spatial and temporal variability of food web structure and the characterization of dominant basal and microbial food web groups. A major challenge in incorporating biochemical tracer data into ecosystem models is scale and data type mismatches, which can be overcome with greater knowledge exchange and numerical approaches that transform, integrate, and visualize data.

  19. Modeling soil water content for vegetation modeling improvement (United States)

    Cianfrani, Carmen; Buri, Aline; Zingg, Barbara; Vittoz, Pascal; Verrecchia, Eric; Guisan, Antoine


    Soil water content (SWC) is known to be important for plants as it affects the physiological processes regulating plant growth. Therefore, SWC controls plant distribution over the Earth's surface, ranging from deserts and grasslands to rain forests. Unfortunately, only a few data on SWC are available, as its measurement is very time consuming and costly and needs specific laboratory tools. The scarcity of SWC measurements in geographic space makes it difficult to model and spatially project SWC over larger areas. In particular, it prevents its inclusion as a predictor in plant species distribution models (SDMs). The aims of this study were, first, to test a new methodology for overcoming the scarcity of SWC measurements and, second, to model and spatially project SWC in order to improve plant SDMs by including SWC as a parameter. The study was developed in four steps. First, SWC was modeled by measuring it at 10 different pressures (expressed in pF and ranging from pF = 0 to pF = 4.2). The different pF values represent different degrees of soil water availability for plants. An ensemble of bivariate models was built to overcome the problem of having only a few SWC measurements (n = 24) but several predictors to include in the model. Soil texture (clay, silt, sand), organic matter (OM), topographic variables (elevation, aspect, convexity), climatic variables (precipitation) and hydrological variables (river distance, NDWI) were used as predictors. Weighted ensemble models were built using only bivariate models with adjusted R2 > 0.5 for each SWC at different pF. The second step consisted in running plant SDMs including the modeled SWC jointly with the conventional topo-climatic variables used for plant SDMs. Third, SDMs were run using only the conventional topo-climatic variables. Finally, comparing the models obtained in the second and third steps allowed assessing the additional predictive power of SWC in plant SDMs. SWC ensemble models remained very good, with

  20. Robust Multiscale Modelling Of Two-Phase Steels On Heterogeneous Hardware Infrastructures By Using Statistically Similar Representative Volume Element

    Directory of Open Access Journals (Sweden)

    Rauch Ł.


    Full Text Available The coupled finite element multiscale simulations (FE2) require costly numerical procedures in both macro and micro scales. Attempts to improve numerical efficiency are focused mainly on two areas of development, i.e. parallelization/distribution of numerical procedures and simplification of the virtual material representation. One representative of both mentioned areas is the idea of the Statistically Similar Representative Volume Element (SSRVE). It aims at the reduction of the number of finite elements in the micro scale as well as at parallelization of the calculations in the micro scale, which can be performed without barriers. The simplification of the computational domain is realized by transformation of sophisticated images of the material microstructure into artificially created simple objects characterized by features similar to those of their original equivalents. In existing solutions for two-phase steels, the SSRVE is created on the basis of the analysis of shape coefficients of the hard phase in the real microstructure and a search for a representative simple structure with similar shape coefficients. Optimization techniques were used to solve this task. In the present paper, local strains and stresses are added to the cost function in the optimization. Various forms of the objective function composed of different elements were investigated and used in the optimization procedure for the creation of the final SSRVE. The results are compared as far as the efficiency of the procedure and the uniqueness of the solution are considered. The best objective function, composed of shape coefficients as well as of strains and stresses, was proposed. Examples of SSRVEs determined for the investigated two-phase steel using that objective function are demonstrated in the paper. Each step of SSRVE creation is investigated from a computational efficiency point of view. The proposition of implementation of the whole computational procedure on modern High Performance Computing (HPC

  1. Comparing methods for single paragraph similarity analysis. (United States)

    Stone, Benjamin; Dennis, Simon; Kwantes, Peter J


    The focus of this paper is two-fold. First, similarities generated from six semantic models were compared to human ratings of paragraph similarity on two datasets: 23 World Entertainment News Network paragraphs and 50 ABC newswire paragraphs. Contrary to findings on smaller textual units such as word associations (Griffiths, Tenenbaum, & Steyvers, 2007), our results suggest that when single paragraphs are compared, simple nonreductive models (word overlap and vector space) can provide better similarity estimates than more complex models (LSA, Topic Model, SpNMF, and CSM). Second, various methods of corpus creation were explored to facilitate the semantic models' similarity estimates. Removing numeric and single characters, and also truncating document length, improved performance. Automated construction of smaller Wikipedia-based corpora proved to be very effective, even improving upon the performance of corpora that had been chosen for the domain. Model performance was further improved by augmenting corpora with dataset paragraphs. Copyright © 2010 Cognitive Science Society, Inc.
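The two nonreductive baselines that performed well in this study, word overlap and a vector-space model, are simple enough to sketch directly; the tokenisation below is deliberately naive and the example paragraphs are invented:

```python
import math
from collections import Counter

def tokens(text):
    """Naive tokeniser: lowercase, whitespace split, alphabetic words only."""
    return [w for w in text.lower().split() if w.isalpha()]

def word_overlap(a, b):
    """Word-overlap similarity: shared word types over the union (Jaccard)."""
    ta, tb = set(tokens(a)), set(tokens(b))
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def cosine(a, b):
    """Vector-space similarity: cosine between raw term-count vectors."""
    ca, cb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

p1 = "the band released a new album this week"
p2 = "the singer released an album and toured this week"
print(word_overlap(p1, p2), cosine(p1, p2))
```

A production version would add the preprocessing the paper found helpful (removing numeric and single characters, truncating document length), but the comparison logic stays this simple.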

  2. An improved squirmer model for Volvox locomotion (United States)

    Pedley, Timothy


    We recently used the Lighthill-Blake envelope (or `squirmer') model for ciliary propulsion to predict the mean swimming speed U and angular velocity Ω of spherical Volvox colonies. The input was the measured flagellar beating pattern (a symplectic metachronal wave) of Volvox colonies with different radii a. The predictions were compared with independent measurements of U and Ω as functions of a, and proved to be substantial underestimates of both U and Ω, by about 80%, probably because the envelope model ignores the fact that, during the recovery stroke, most of a flagellum is much closer to the no-slip colony surface than during the power stroke. In consequence, U and Ω will be proportional to the beating amplitude ɛ, not to ɛ² as in the Lighthill-Blake theory. A new model is proposed, based on a shear-stress (not velocity) distribution that is applied at a smaller radius in the recovery stroke than in the power stroke. Agreement with experiment is greatly improved.

  3. Improved choked flow model for MARS code

    International Nuclear Information System (INIS)

    Chung, Moon Sun; Lee, Won Jae; Ha, Kwi Seok; Hwang, Moon Kyu


    Choked flow calculation is improved by using a new sound speed criterion for bubbly flow, derived by characteristic analysis of a hyperbolic two-fluid model. This model is based on the notion of surface tension for the interfacial pressure jump terms in the momentum equations. The real eigenvalues, obtained as the closed-form solution of the characteristic polynomial, represent the sound speed in the bubbly flow regime and agree well with the existing experimental data. The present sound speed gives a more reasonable result in the extreme case than Nguyen's did. The choked flow criterion derived from the present sound speed is employed in the MARS code and assessed using the Marviken choked flow tests. The assessment results, obtained without any adjustment by discharge coefficients, demonstrate more accurate predictions of choked flow rate in the bubbly flow regime than earlier choked flow calculations. By calculating a typical PWR small-break LOCA (SBLOCA) problem, we confirm that the present model can reproduce reasonable transients of the integral reactor system.

  4. Similar Improvements in Patient-Reported Outcomes Among Rheumatoid Arthritis Patients Treated with Two Different Doses of Methotrexate in Combination with Adalimumab: Results From the MUSICA Trial. (United States)

    Kaeley, Gurjit S; MacCarter, Daryl K; Goyal, Janak R; Liu, Shufang; Chen, Kun; Griffith, Jennifer; Kupper, Hartmut; Garg, Vishvas; Kalabic, Jasmina


    In patients with rheumatoid arthritis (RA), combination treatment with methotrexate (MTX) and adalimumab is more effective than MTX monotherapy. From the patients' perspective, the impact of reduced MTX doses upon initiating adalimumab is not known. The objective was to evaluate the effects of low and high MTX doses in combination with adalimumab initiation on patient-reported outcomes (PROs), in MTX-inadequate responders (MTX-IR) with moderate-to-severe RA. MUSICA was a randomized, double-blind, controlled trial evaluating the efficacy of 7.5 or 20 mg/week MTX, in combination with adalimumab for 24 weeks in MTX-IR RA patients receiving prior MTX ≥ 15 mg/week for ≥ 12 weeks. PROs were recorded at each visit, including physical function, health-related quality-of-life, work productivity, quality-of-sleep, satisfaction with treatment medication, sexual impairment due to RA, patient global assessment of disease activity (PGA), and patient pain. Last observation carried forward was used to account for missing values. At baseline, patients in both MTX dosage groups had similar demographics, disease characteristics, and PRO scores. Overall, initiation of adalimumab led to significant improvements from baseline in the PROs assessed for both MTX dosage groups. Improvements in presenteeism from baseline were strongly correlated with corresponding improvements in SF-36 (vitality), pain, and physical function. Physical and mental well-being had a good correlation with improvement in sleep. Overall, improvements in disease activity from baseline were correlated with improvements in several PROs. The addition of adalimumab to MTX in MTX-IR patients with moderate-to-severe RA led to improvements in physical function, quality-of-life, work productivity, quality of sleep, satisfaction with treatment medication, and sexual impairment due to RA, regardless of the concomitant MTX dosage. Funding: AbbVie. ClinicalTrials.gov identifier: NCT01185288.

  5. Improved regional climate modelling through dynamical downscaling

    International Nuclear Information System (INIS)

    Corney, Stuart; Grose, Michael; Holz, Greg; White, Chris; Bennett, James; Gaynor, Suzie; Bindoff, Nathan; Katzfey, Jack; McGregor, John


    Coupled Ocean-Atmosphere General Circulation Models (GCMs) provide the best estimates for assessing potential changes to our climate on a global scale out to the end of this century. Because coupled GCMs have a fairly coarse resolution, they do not provide a detailed picture of climate (and climate change) at the local scale. Tasmania, due to its diverse geography and range of climate over a small area, is a particularly difficult region for drawing conclusions regarding climate change when relying solely on GCMs. The foundation of the Climate Futures for Tasmania project is to take the output produced by multiple GCMs, under multiple climate change scenarios, and use it as input to the Conformal Cubic Atmospheric Model (CCAM) to downscale the GCM output. CCAM is a full atmospheric global general circulation model, formulated on a conformal-cubic grid that covers the globe but can be stretched to provide higher resolution in the area of interest (Tasmania). By modelling the atmosphere at a much finer scale than is possible with a coupled GCM, we can more accurately capture the processes that drive Tasmania's weather and climate, and thus more clearly answer the question of how Tasmania's climate will change in the future. We present results that show the improvements in capturing the local-scale climate and climate drivers that can be achieved through downscaling, when compared to a gridded observational data set. The underlying assumption of this work is that a better simulated current climatology will also produce a more credible climate change signal.

  6. Demographic modelling with whole-genome data reveals parallel origin of similar Pundamilia cichlid species after hybridization. (United States)

    Meier, Joana I; Sousa, Vitor C; Marques, David A; Selz, Oliver M; Wagner, Catherine E; Excoffier, Laurent; Seehausen, Ole


    Modes and mechanisms of speciation are best studied in young species pairs. In older taxa, it is increasingly difficult to distinguish what happened during speciation from what happened after speciation. Lake Victoria cichlids in the genus Pundamilia encompass a complex of young species and polymorphic populations. One Pundamilia species pair, P. pundamilia and P. nyererei, is particularly well suited to study speciation because sympatric population pairs occur with different levels of phenotypic differentiation and reproductive isolation at different rocky islands within the lake. Genetic distances between allopatric island populations of the same nominal species often exceed those between the sympatric species. It thus remained unresolved whether speciation into P. nyererei and P. pundamilia occurred once, followed by geographical range expansion and interspecific gene flow in local sympatry, or if the species pair arose repeatedly by parallel speciation. Here, we use genomic data and demographic modelling to test these alternative evolutionary scenarios. We demonstrate that gene flow plays a strong role in shaping the observed patterns of genetic similarity, including both gene flow between sympatric species and gene flow between allopatric populations, as well as recent and early gene flow. The best supported model for the origin of P. pundamilia and P. nyererei population pairs at two different islands is one where speciation happened twice, whereby the second speciation event follows shortly after introgression from an allopatric P. nyererei population that arose earlier. Our findings support the hypothesis that very similar species may arise repeatedly, potentially facilitated by introgressed genetic variation. © 2016 John Wiley & Sons Ltd.

  7. CORCON-MOD1 modelling improvements

    International Nuclear Information System (INIS)

    Corradini, M.L.; Gonzales, F.G.; Vandervort, C.L.


    In the unlikely event of a severe accident in a light water reactor (LWR), the core may melt and slump into the reactor cavity below the reactor vessel. The interaction of the molten core with exposed concrete (a molten-core-concrete interaction, MCCI) causes copious gas production, which influences further heat transfer and concrete attack and may threaten containment integrity. In this paper the authors focus on the low-temperature phase of the MCCI, in which the molten pool is partially solidified but is still capable of attacking concrete. The authors have developed improved phenomenological models for pool freezing and molten core-coolant heat transfer and have incorporated them into the CORCON-MOD1 computer program. In the paper the authors compare the UW-CORCON/MOD1 calculations to CORCON/MOD2 and WECHSL results, as well as to the BETA experiments being conducted in Germany.

  8. Improvements to type Ia supernova models (United States)

    Saunders, Clare M.

    Type Ia Supernovae provided the first strong evidence of dark energy and are still an important tool for measuring the accelerated expansion of the universe. However, future improvements will be limited by systematic uncertainties in our use of Type Ia supernovae as standard candles. Using Type Ia supernovae for cosmology relies on our ability to standardize their absolute magnitudes, but this relies on imperfect models of supernova spectra time series. This thesis is focused on using data from the Nearby Supernova Factory both to understand current sources of uncertainty in standardizing Type Ia supernovae and to develop techniques that can be used to limit uncertainty in future analyses. (Abstract shortened by ProQuest.).

  9. A comprehensive track model for the improvement of corrugation models (United States)

    Gómez, J.; Vadillo, E. G.; Santamaría, J.


    This paper presents a detailed model of the railway track based on wave propagation, suitable for corrugation studies. The model analyses both the vertical and the transverse dynamics of the track. Using the finite strip method (FSM), only the cross-section of the rail must be meshed, so it is not necessary to discretise a whole span in 3D. The model takes into account the discrete nature of the support, introducing concepts from the theory of periodic structures into the formulation. Wave superposition is enriched by taking into account the contribution of residual vectors; in this way, the model obtains accurate results when a finite section of railway track is considered. Results for the infinite track have been compared against those presented by Gry and Müller. Aside from the improvements provided by the model presented in this paper, which Gry's and Müller's models do not contemplate, the results of the comparison prove satisfactory. Finally, the calculated receptances are compared against the experimental values obtained by the authors, demonstrating a fair degree of adequacy. These receptances are then used within a linear model of corrugation developed by the authors.

  10. Predictive Modeling by the Cerebellum Improves Proprioception (United States)

    Bhanpuri, Nasir H.; Okamura, Allison M.


    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance. PMID:24005283

  11. How can model comparison help improving species distribution models?

    Directory of Open Access Journals (Sweden)

    Emmanuel Stephan Gritti

    Full Text Available Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie on the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could however be improved by integrating a more realistic representation of the species' resistance to water stress, for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes.

  12. Similarities and Improvements of GPM Dual-Frequency Precipitation Radar (DPR) upon TRMM Precipitation Radar (PR) in Global Precipitation Rate Estimation, Type Classification and Vertical Profiling

    Directory of Open Access Journals (Sweden)

    Jinyu Gao


    Full Text Available Spaceborne precipitation radars are powerful tools used to acquire adequate and high-quality precipitation estimates with high spatial resolution for a variety of applications in hydrological research. The Global Precipitation Measurement (GPM) mission, which deployed the first spaceborne Ka- and Ku-band dual-frequency radar (DPR), was launched in February 2014 as the upgraded successor of the Tropical Rainfall Measuring Mission (TRMM). This study matches the swath data of TRMM PR and GPM DPR Level 2 products during their overlapping periods at the global scale to investigate their similarities and the improvements of GPM DPR over TRMM PR concerning precipitation amount estimation and type classification. Results show that PR and DPR agree very well with each other in the global distribution of precipitation, while DPR improves the detectability of precipitation events significantly, particularly for light precipitation. The occurrences of total precipitation and of light precipitation (rain rates < 1 mm/h) detected by GPM DPR are ~1.7 and ~2.53 times more than those of PR. With regard to type classification, the dual-frequency (Ka/Ku) and single-frequency (Ku) methods performed similarly. In both the inner (the central 25 beams) and outer (1–12 beams and 38–49 beams) swaths of DPR, the results are consistent. GPM DPR improves precipitation type classification remarkably, reducing the misclassification of clouds and noise signals as precipitation type "other" from 10.14% of TRMM PR to 0.5%. Generally, GPM DPR exhibits the same type division for around 82.89% (71.02%) of stratiform (convective) precipitation events recognized by TRMM PR. With regard to the freezing level height and bright band (BB) height, both radars correspond with each other very well, contributing to the consistency in stratiform precipitation classification. Both heights show clear latitudinal dependence. Results in this study shall contribute to future development of spaceborne

  13. Improved Pig Model to Evaluate Heart Valve Thrombosis. (United States)

    Payanam Ramachandra, Umashankar; Shenoy, Sachin J; Arumugham, Sabareeswaran


    Although the sheep is the most acceptable animal model for heart valve evaluation, it has severe limitations for detecting heart valve thrombosis during preclinical studies. While the pig offers an alternative model and is better for detecting prosthetic valve thrombogenicity, it is not often used because of inadvertent valve thrombosis or bleeding complications. The study aim was to develop an improved pig model which can be used reliably to evaluate mechanical heart valve thrombogenicity. Mechanical heart valves were implanted in the mitral position of indigenous pigs administered aspirin-clopidogrel, and compared with similar valves implanted in control pigs to which no antiplatelet therapy had been administered. The pigs were observed for six months to study their overall survivability, inadvertent bleeding/valve thrombosis and pannus formation. The efficacy of aspirin-clopidogrel on platelet aggregation and blood coagulation was also recorded and compared between test and control animals. In comparison to controls, pigs receiving anti-platelet therapy showed an overall better survivability, an absence of inadvertent valve thrombosis/bleeding, and less obstructive pannus formation. Previously unreported inhibitory effects of aspirin-clopidogrel on the intrinsic pathway of blood coagulation were also observed in the pig model. Notably, with aspirin-clopidogrel therapy inadvertent thrombus formation or bleeding can be prevented. The newly developed pig model can be successfully used to evaluate heart valve thrombosis following chronic orthotopic valve implantation. The model may also be utilized to evaluate other blood-contacting implantable devices.

  14. A Training Model for Improving Journalists' Voice. (United States)

    Rodero, Emma; Diaz-Rodriguez, Celia; Larrea, Olatz


    Voice education is a crucial aspect for professionals (journalists, teachers, politicians, actors, etc.) who use their voices as a working tool. The main concerns about such education are, first, that there is little awareness of its importance, and second, that little research is devoted to it. The consequences of this lack of training are visible in professionals who suffer voice pathologies or work with little effectiveness. This study seeks to overcome this deficiency by proposing a training model tested with a control group and a pilot study. Speech samples from a group of experimental participants, journalism students, were collected before and after a training course designed to improve their main vocal and prosodic features. These samples were contrasted with a control group without training. Results indicated significant differences in all tested voice elements (breathing, articulation, loudness, pitch, jitter, speech rate, pauses, and stress) except for shimmer and harmonics. The participants were able to enhance their main vocal and prosodic elements, and therefore their expressiveness, while maintaining optimal vocal hygiene. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  15. Using sparse LU factorisation to precondition GMRES for a family of similarly structured matrices arising from process modelling

    Energy Technology Data Exchange (ETDEWEB)

    Brooking, C. [Univ. of Bath (United Kingdom)]


    Process engineering software is used to simulate the operation of large chemical plants. Such simulations are used for a variety of tasks, including operator training. For the software to be of practical use for this, dynamic simulations need to run in real time. The models that the simulation is based upon are written in terms of Differential Algebraic Equations (DAEs). In the numerical time-integration of systems of DAEs using an implicit method such as backward Euler, the solution of nonlinear systems is required at each integration point. When solved using Newton's method, this leads to the repeated solution of nonsymmetric sparse linear systems. These systems range in size from 500 to 20,000 variables. A typical integration may require around 3000 timesteps, and if 4 Newton iterates were needed on each time step, then this means approximately 12,000 linear systems must be solved. The matrices produced by the simulations have a similar sparsity pattern throughout the integration. They are also severely ill-conditioned, and have widely-scattered spectra.
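A minimal sketch of the strategy the title describes: factorise one matrix of the family with sparse LU and reuse that factorisation as a preconditioner for GMRES on later, similarly structured systems. The tridiagonal matrices below are stand-ins for the real process-engineering Jacobians:

```python
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import splu, LinearOperator, gmres

n = 2000  # real systems ranged from 500 to 20,000 variables

# "Reference" matrix of the family: nonsymmetric, fixed sparsity pattern.
A0 = csc_matrix(diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n)))
lu = splu(A0)  # sparse LU factorisation, computed once
M = LinearOperator((n, n), matvec=lu.solve)  # preconditioner M ~ A0^-1

# A later system in the integration: same sparsity pattern, perturbed entries.
A1 = csc_matrix(diags([-1.0, 2.6, -1.2], [-1, 0, 1], shape=(n, n)))
b = np.ones(n)

x, info = gmres(A1, b, M=M)  # info == 0 means GMRES converged
print(info, np.linalg.norm(A1 @ x - b))
```

Because the sparsity pattern is fixed throughout the integration, the symbolic analysis cost is paid once; the factorisation only needs refreshing when the entries drift far enough that GMRES iteration counts grow.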

  16. The perfectionism model of binge eating: testing unique contributions, mediating mechanisms, and cross-cultural similarities using a daily diary methodology. (United States)

    Sherry, Simon B; Sabourin, Brigitte C; Hall, Peter A; Hewitt, Paul L; Flett, Gordon L; Gralnick, Tara M


    The perfectionism model of binge eating (PMOBE) is an integrative model explaining the link between perfectionism and binge eating. This model proposes socially prescribed perfectionism confers risk for binge eating by generating exposure to 4 putative binge triggers: interpersonal discrepancies, low interpersonal esteem, depressive affect, and dietary restraint. The present study addresses important gaps in knowledge by testing if these 4 binge triggers uniquely predict changes in binge eating on a daily basis and if daily variations in each binge trigger mediate the link between socially prescribed perfectionism and daily binge eating. Analyses also tested if proposed mediational models generalized across Asian and European Canadians. The PMOBE was tested in 566 undergraduate women using a 7-day daily diary methodology. Depressive affect predicted binge eating, whereas anxious affect did not. Each binge trigger uniquely contributed to binge eating on a daily basis. All binge triggers except for dietary restraint mediated the relationship between socially prescribed perfectionism and change in daily binge eating. Results suggested cross-cultural similarities, with the PMOBE applying to both Asian and European Canadian women. The present study advances understanding of the personality traits and the contextual conditions accompanying binge eating and provides an important step toward improving treatments for people suffering from eating binges and associated negative consequences.

  17. Improvements To Solar Radiation Pressure Modeling For Jason-2 (United States)

    Zelensky, N. P.; Lemoine, F. G.; Melachroinos, S.; Pavlis, D.; Bordyugov, O.


    Jason-2 is the follow-on to the Jason-1 and TOPEX/Poseidon radar altimetry missions observing the sea surface. The computed orbit is used to reference the altimeter measurement to the center of the Earth, and thus the accuracy and stability of the orbit are critical to the sea surface observation accuracy. A 1-cm Jason-2 radial orbit accuracy goal is required for meeting the 2.5 cm altimeter measurement goal. Also mean sea level change estimated from altimetry requires orbit stability to well below 1 mm/yr. Although 1-cm orbits have been achieved, unresolved large draconitic period error signatures remain and are believed to be due to mis-modeling of the solar radiation pressure (SRP) forces acting on the satellite. Such error may easily affect the altimeter data, and can alias into any number of estimated geodetic quantities using Jason-2. Precision orbit determination (POD) at GSFC and other analysis centers employs an 8-panel "macromodel" representation of the satellite geometry and optical properties to model SRP. Telemetered attitude and modeled solar array pitch angles (SAPA) are used to orient the macromodel. Several possible improvements to SRP modeling are evaluated and include: 1) using telemetered SAPA values, 2) using the SRP model developed at UCL for the very similar Jason-1, 3) re-tuning the macromodel, 4) modifying POD strategy to estimate a coefficient of reflectivity (CR) for every arc, or else using the reduced-dynamic approach. Improvements to POD modeling are evaluated through analysis of tracking data residuals, estimated empirical accelerations, and orbit differences.
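The 8-panel macromodel idea can be illustrated with the standard flat-plate SRP formula found in astrodynamics texts: each sunlit panel contributes a force along the Sun direction and along its own normal, weighted by its specular and diffuse reflectivities. The panel areas, optical coefficients, and mass below are invented, not Jason-2 values:

```python
import numpy as np

PHI = 1361.0      # solar flux at 1 AU, W/m^2
C = 299792458.0   # speed of light, m/s

def srp_accel(panels, s_hat, mass):
    """Flat-plate SRP acceleration for a panel macromodel.

    panels: list of (area m^2, unit normal, specular rho_s, diffuse rho_d)
    s_hat:  unit vector from satellite toward the Sun
    """
    f = np.zeros(3)
    for area, n_hat, rho_s, rho_d in panels:
        cos_t = np.dot(n_hat, s_hat)
        if cos_t <= 0.0:   # panel faces away from the Sun: no contribution
            continue
        f += -(PHI / C) * area * cos_t * (
            (1.0 - rho_s) * s_hat
            + 2.0 * (rho_s * cos_t + rho_d / 3.0) * n_hat)
    return f / mass

# Toy 3-panel body (two bus faces plus a "solar array"); values invented.
panels = [(3.0, np.array([1.0, 0.0, 0.0]), 0.2, 0.3),
          (3.0, np.array([-1.0, 0.0, 0.0]), 0.2, 0.3),
          (9.8, np.array([0.0, 0.0, 1.0]), 0.1, 0.1)]
a = srp_accel(panels, np.array([0.0, 0.0, 1.0]), 500.0)
print(a)
```

Re-tuning the macromodel, option 3 in the record's list, amounts to adjusting the per-panel areas and reflectivities in such a model until tracking residuals are minimised.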

  18. Is having similar eye movement patterns during face learning and recognition beneficial for recognition performance? Evidence from hidden Markov modeling. (United States)

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H


    The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. An Improved SPH Technique for Fracture Modeling

    National Research Council Canada - National Science Library

    Libersky, Larry


    .... With these improvements, the MAGI code could solve the enormously complex problem of simulating Behind-Armor-Debris and subsequent interaction of the spall cloud with threat target components as well...

  20. Twelve Weeks of Sprint Interval Training Improves Indices of Cardiometabolic Health Similar to Traditional Endurance Training despite a Five-Fold Lower Exercise Volume and Time Commitment.

    Directory of Open Access Journals (Sweden)

    Jenna B Gillen

    Full Text Available We investigated whether sprint interval training (SIT) was a time-efficient exercise strategy to improve insulin sensitivity and other indices of cardiometabolic health to the same extent as traditional moderate-intensity continuous training (MICT). SIT involved 1 minute of intense exercise within a 10-minute time commitment, whereas MICT involved 50 minutes of continuous exercise per session. Sedentary men (27±8y; BMI = 26±6kg/m2) performed three weekly sessions of SIT (n = 9) or MICT (n = 10) for 12 weeks or served as non-training controls (n = 6). SIT involved 3x20-second 'all-out' cycle sprints (~500W) interspersed with 2 minutes of cycling at 50W, whereas MICT involved 45 minutes of continuous cycling at ~70% maximal heart rate (~110W). Both protocols involved a 2-minute warm-up and 3-minute cool-down at 50W. Peak oxygen uptake increased after training by 19% in both groups (SIT: 32±7 to 38±8; MICT: 34±6 to 40±8ml/kg/min; p<0.001 for both). Insulin sensitivity index (CSI), determined by intravenous glucose tolerance tests performed before and 72 hours after training, increased similarly after SIT (4.9±2.5 to 7.5±4.7, p = 0.002) and MICT (5.0±3.3 to 6.7±5.0 x 10-4 min-1 [μU/mL]-1, p = 0.013) (p<0.05). Skeletal muscle mitochondrial content also increased similarly after SIT and MICT, as primarily reflected by the maximal activity of citrate synthase (CS; P<0.001). The corresponding changes in the control group were small for VO2peak (p = 0.99), CSI (p = 0.63) and CS (p = 0.97). Twelve weeks of brief intense interval exercise improved indices of cardiometabolic health to the same extent as traditional endurance training in sedentary men, despite a five-fold lower exercise volume and time commitment.

  1. Twelve Weeks of Sprint Interval Training Improves Indices of Cardiometabolic Health Similar to Traditional Endurance Training despite a Five-Fold Lower Exercise Volume and Time Commitment (United States)

    Martin, Brian J.; MacInnis, Martin J.; Skelly, Lauren E.; Tarnopolsky, Mark A.; Gibala, Martin J.


    Aims We investigated whether sprint interval training (SIT) was a time-efficient exercise strategy to improve insulin sensitivity and other indices of cardiometabolic health to the same extent as traditional moderate-intensity continuous training (MICT). SIT involved 1 minute of intense exercise within a 10-minute time commitment, whereas MICT involved 50 minutes of continuous exercise per session. Methods Sedentary men (27±8y; BMI = 26±6kg/m2) performed three weekly sessions of SIT (n = 9) or MICT (n = 10) for 12 weeks or served as non-training controls (n = 6). SIT involved 3x20-second ‘all-out’ cycle sprints (~500W) interspersed with 2 minutes of cycling at 50W, whereas MICT involved 45 minutes of continuous cycling at ~70% maximal heart rate (~110W). Both protocols involved a 2-minute warm-up and 3-minute cool-down at 50W. Results Peak oxygen uptake increased after training by 19% in both groups (SIT: 32±7 to 38±8; MICT: 34±6 to 40±8ml/kg/min; p<0.001 for both). Insulin sensitivity index (CSI), determined by intravenous glucose tolerance tests performed before and 72 hours after training, increased similarly after SIT (4.9±2.5 to 7.5±4.7, p = 0.002) and MICT (5.0±3.3 to 6.7±5.0 x 10−4 min-1 [μU/mL]-1, p = 0.013) (p<0.05). Skeletal muscle mitochondrial content also increased similarly after SIT and MICT, as primarily reflected by the maximal activity of citrate synthase (CS; P<0.001). The corresponding changes in the control group were small for VO2peak (p = 0.99), CSI (p = 0.63) and CS (p = 0.97). Conclusions Twelve weeks of brief intense interval exercise improved indices of cardiometabolic health to the same extent as traditional endurance training in sedentary men, despite a five-fold lower exercise volume and time commitment. PMID:27115137

  2. Running performance in the heat is improved by similar magnitude with pre-exercise cold-water immersion and mid-exercise facial water spray. (United States)

    Stevens, Christopher J; Kittel, Aden; Sculley, Dean V; Callister, Robin; Taylor, Lee; Dascombe, Ben J


    This investigation compared the effects of external pre-cooling and mid-exercise cooling methods on running time trial performance and associated physiological responses. Nine trained male runners completed familiarisation and three randomised 5 km running time trials on a non-motorised treadmill in the heat (33°C). The trials included pre-cooling by cold-water immersion (CWI), mid-exercise cooling by intermittent facial water spray (SPRAY), and a control of no cooling (CON). Temperature, cardiorespiratory, muscular activation, and perceptual responses were measured as well as blood concentrations of lactate and prolactin. Performance time was significantly faster with CWI (24.5 ± 2.8 min; P = 0.01) and SPRAY (24.6 ± 3.3 min; P = 0.01) compared to CON (25.2 ± 3.2 min). Both cooling strategies significantly (P < 0.05) reduced forehead temperatures and thermal sensation, and increased muscle activation. Only pre-cooling significantly lowered rectal temperature both pre-exercise (by 0.5 ± 0.3°C; P < 0.01) and throughout exercise, and reduced sweat rate (P < 0.05). Both cooling strategies improved performance by a similar magnitude, and are ergogenic for athletes. The observed physiological changes suggest some involvement of central and psychophysiological mechanisms of performance improvement.

  3. Improved hidden Markov model for nosocomial infections. (United States)

    Khader, Karim; Leecaster, Molly; Greene, Tom; Samore, Matthew; Thomas, Alun


    We propose a novel hidden Markov model (HMM) for parameter estimation in hospital transmission models, and show that commonly made simplifying assumptions can lead to severe model misspecification and poor parameter estimates. A standard HMM that embodies two commonly made simplifying assumptions, namely a fixed patient count and binomially distributed detections, is compared with a new alternative HMM that does not require these simplifying assumptions. Using simulated data, we demonstrate how each of the simplifying assumptions used by the standard model leads to model misspecification, whereas the alternative model results in accurate parameter estimates. © The Authors 2013. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
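The observation layer the abstract describes, binomial detections on top of an unobserved colonization process with a varying patient census, can be illustrated with a toy simulation. This is a hedged sketch with invented rates (`beta`, `mu`, `sens`), not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ward(days=365, beta=0.5, mu=0.15, sens=0.6):
    """Toy ward surveillance process: the colonized count C grows by
    transmission (rate beta) and shrinks by clearance/discharge (rate mu);
    the daily census varies, and each day the number of *detected*
    carriers is Binomial(C, sens) -- the hidden-state/observation split
    an HMM must invert."""
    census = rng.integers(18, 25, size=days)      # varying patient count
    C, true_counts, detected = 3, [], []
    for N in census:
        C = min(C, N)                             # discharges cap C at the census
        new = rng.binomial(N - C, 1.0 - np.exp(-beta * C / N))
        cleared = rng.binomial(C, mu)
        C = C + new - cleared
        true_counts.append(C)
        detected.append(rng.binomial(C, sens))
    return np.array(true_counts), np.array(detected)

true_c, det = simulate_ward()
# A naive estimate that treats detections as the true colonized count is
# biased low by roughly the detection sensitivity:
print(det.mean() / true_c.mean())   # close to sens = 0.6
```

The point of the sketch is that neither the census nor the detection process is fixed, which is exactly what the standard HMM's simplifying assumptions ignore.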

  4. Improvements to a nonequilibrium algebraic turbulence model (United States)

    Johnson, D. A.; Coakley, T. J.


    It has been noted that while the nonequilibrium turbulence model of Johnson and King (1985, 1987) performed significantly better than alternative methods, differences between predicted and observed shock locations arise for certain weak interactions due to a deficiency in the model's inner eddy viscosity formulation. A novel formulation for the model is presented which removes this deficiency, while satisfying the law of the wall for adverse pressure-gradient conditions better than either the original formulation or mixing-length theory.

  5. Modeling the kinetics of hydrates formation using phase field method under similar conditions of petroleum pipelines; Modelagem da cinetica de formacao de hidratos utilizando o Modelo do Campo de Fase em condicoes similares a dutos de petroleo

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Mabelle Biancardi; Castro, Jose Adilson de; Silva, Alexandre Jose da [Universidade Federal Fluminense (UFF), Volta Redonda, RJ (Brazil). Programa de Pos-Graduacao em Engenharia Metalurgica]


    Natural hydrates are ice-like crystalline compounds formed during oil extraction, transportation, and processing. This paper deals with the kinetics of hydrate formation using the phase field approach coupled with the energy transport equation. The kinetic parameters of hydrate formation were obtained by adjusting the proposed model to experimental results under conditions similar to those of oil extraction. The effects of thermal and nucleation conditions were investigated, while the rate of formation and the morphology were obtained by numerical computation. Model results for kinetic growth and morphology presented good agreement with the experimental ones. Simulation results indicated that super-cooling and pressure were decisive parameters for hydrate growth, morphology, and interface thickness. (author)

  6. Improving the physiological realism of experimental models

    NARCIS (Netherlands)

    Vinnakota, Kalyan C.; Cha, Chae Y.; Rorsman, Patrik; Balaban, Robert S.; La Gerche, Andre; Wade-Martins, Richard; Beard, Daniel A.; Jeneson, Jeroen A. L.

    The Virtual Physiological Human (VPH) project aims to develop integrative, explanatory and predictive computational models (C-Models) as numerical investigational tools to study disease, identify and design effective therapies and provide an in silico platform for drug screening. Ultimately, these

  7. Improved diagnostic model for estimating wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Endlich, R.M.; Lee, J.D.


    Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.

  8. A Model to Improve the Quality Products

    Directory of Open Access Journals (Sweden)

    Hasan GOKKAYA


    Full Text Available The topic of this paper is to present a solution that can improve product quality, following the idea: "Unlike people who have verbal skills, machines use 'sign language' to communicate what hurts or what has invaded their system." Recognizing the "signs" or symptoms that the machine conveys is a required skill for those who work with machines and are responsible for their care and feeding. The acoustic behavior of technical products is predominantly defined in the design stage, although the acoustic characteristics of machine structures can be analyzed to provide a solution for existing products and to create a new generation of products. The paper describes the steps in the technological process for a product and a solution that will reduce the costs of product non-quality and improve quality management.

  9. Improved Model of a Mercury Ring Damper (United States)

    Fahrenthold, Eric P.; Shivarma, Ravishankar


    A short document discusses the general problem of mathematical modeling of the three-dimensional rotational dynamics of rigid bodies and of the use of Euler parameters to eliminate the singularities occasioned by the use of Euler angles in such modeling. The document goes on to characterize a Hamiltonian model, developed by the authors, that utilizes the Euler parameters and, hence, is suitable for use in computational simulations that involve arbitrary rotational motion. In this formulation, unlike in prior Euler-parameter-based formulations, there are no algebraic constraints. This formulation includes a general potential energy function, incorporates a minimum set of momentum variables, and takes an explicit state-space form convenient for numerical implementation. Practical application of this formulation has been demonstrated by the development of a new and simplified model of the rotational motion of a rigid rotor to which is attached a partially filled mercury ring damper. Models like this one are used in guidance and control of spin-stabilized spacecraft and gyroscope-stabilized seekers in guided missiles.
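The singularity-free property of Euler parameters that motivates the formulation above can be illustrated with generic unit-quaternion kinematics. This is a minimal sketch of standard Euler-parameter kinematics under a constant body rate, not the authors' constraint-free Hamiltonian model:

```python
import numpy as np

def quat_rates(q, w):
    """Euler-parameter (unit-quaternion) kinematics qdot = 0.5 * E(q) @ w
    for body-frame angular rate w. Unlike Euler-angle rate equations,
    this mapping has no singular orientation."""
    q0, q1, q2, q3 = q
    E = 0.5 * np.array([[-q1, -q2, -q3],
                        [ q0, -q3,  q2],
                        [ q3,  q0, -q1],
                        [-q2,  q1,  q0]])
    return E @ w

# Integrate a constant body rate with RK4, renormalizing each step to
# hold the unit-norm (|q| = 1) constraint.
q = np.array([1.0, 0.0, 0.0, 0.0])
w = np.array([0.3, -0.2, 0.1])     # rad/s, body frame
dt = 0.01
for _ in range(1000):               # 10 s of motion
    k1 = quat_rates(q, w)
    k2 = quat_rates(q + 0.5 * dt * k1, w)
    k3 = quat_rates(q + 0.5 * dt * k2, w)
    k4 = quat_rates(q + dt * k3, w)
    q = q + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    q /= np.linalg.norm(q)

# For a constant rate the motion is a fixed-axis rotation of angle |w|*t,
# so the scalar part should satisfy q0 = cos(angle / 2):
angle = np.linalg.norm(w) * 10.0
print(abs(q[0] - np.cos(angle / 2)))   # close to zero
```

The explicit renormalization stands in for the algebraic unit-norm constraint that the authors' Hamiltonian formulation is designed to avoid.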

  10. Improving Expression Power in Modeling OLAP Hierarchies (United States)

    Malinowski, Elzbieta

    Data warehouses and OLAP systems form an integral part of modern decision support systems. In order to exploit both systems to their full capabilities, hierarchies must be clearly defined. Hierarchies are important in analytical applications, since they provide users with the possibility to represent data at different abstraction levels. However, even though there are different kinds of hierarchies in real-world applications and some are already implemented in commercial tools, there is still a lack of a well-accepted conceptual model that allows decision-making users to express their analysis needs. In this paper, we show how the conceptual multidimensional model can be used to facilitate the representation of complex hierarchies, in comparison to their representation in the relational model and a commercial OLAP tool, using Microsoft Analysis Services as an example.

  11. Improved Maximum Parsimony Models for Phylogenetic Networks. (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine


    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that permits to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  12. Matter of Similarity and Dissimilarity in Multi-Ethnic Society: A Model of Dyadic Cultural Norms Congruence


    Abu Bakar Hassan; Mohamad Bahtiar


    Taking into consideration the diverse cultural norms in the Malaysian workplace, the proposed model explores Malaysian culture and identity against a backdrop of the pushes and pulls of ethnic diversity in Malaysia. The model seeks to understand relational norm congruence among multiethnic groups in Malaysia, which will enable us to identify Malaysian culture and identity. This is in line with recent calls by various interest groups in Malaysia to focus more on model designs that capture contextual an...

  13. An improved Burgers cellular automaton model for bicycle flow (United States)

    Xue, Shuqi; Jia, Bin; Jiang, Rui; Li, Xingang; Shan, Jingjing


    As an energy-efficient and healthy transport mode, bicycling has recently attracted the attention of governments, transport planners, and researchers. The dynamic characteristics of bicycle flow must be investigated to improve the facility design and traffic operation of bicycling. We model bicycle flow by using an improved Burgers cellular automaton model. Through a following-move mechanism, the modified model enables bicycles to move smoothly and increases the critical density to a more rational level than the original model. The model is calibrated and validated by using experimental data and field data. The results show that the improved model can effectively simulate bicycle flow. The performance of the model under different parameters is investigated and discussed. Strengths and limitations of the improved model are discussed, with suggestions for future work.
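For readers unfamiliar with the baseline, the original (unimproved) Burgers cellular automaton can be sketched as follows. The cell capacity L and the ring topology are illustrative choices, and the paper's following-move mechanism is not reproduced here:

```python
import numpy as np

def bca_step(u, L=3):
    """One parallel update of the deterministic Burgers cellular
    automaton on a ring: cell j holds u[j] bicycles (0..L), and the
    number moving from cell j to cell j+1 is limited by both the
    occupants of j and the free space in j+1: min(u[j], L - u[j+1])."""
    out = np.minimum(u, L - np.roll(u, -1))   # riders leaving each cell
    inn = np.roll(out, 1)                     # riders arriving from behind
    return u - out + inn

rng = np.random.default_rng(1)
u = rng.integers(0, 4, size=100)              # random initial occupancies 0..3
total = u.sum()
for _ in range(200):
    u = bca_step(u)

print(u.sum() == total)                       # bicycles are conserved
```

Because outflow is bounded by downstream free space, occupancies stay in [0, L] and the total number of bicycles is exactly conserved, which is the sanity check any modified rule must also pass.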

  14. General Equilibrium Models: Improving the Microeconomics Classroom (United States)

    Nicholson, Walter; Westhoff, Frank


    General equilibrium models now play important roles in many fields of economics including tax policy, environmental regulation, international trade, and economic development. The intermediate microeconomics classroom has not kept pace with these trends, however. Microeconomics textbooks primarily focus on the insights that can be drawn from the…

  15. Hybrid Modeling Improves Health and Performance Monitoring (United States)


    Scientific Monitoring Inc. was awarded a Phase I Small Business Innovation Research (SBIR) project by NASA's Dryden Flight Research Center to create a new, simplified health-monitoring approach for flight vehicles and flight equipment. The project developed a hybrid physical model concept that provided a structured approach to simplifying complex design models for use in health monitoring, allowing the output or performance of the equipment to be compared to what the design models predicted, so that deterioration or impending failure could be detected before there would be an impact on the equipment's operational capability. Based on the original modeling technology, Scientific Monitoring released I-Trend, a commercial health- and performance-monitoring software product named for its intelligent trending, diagnostics, and prognostics capabilities, as part of the company's complete ICEMS (Intelligent Condition-based Equipment Management System) suite of monitoring and advanced alerting software. I-Trend uses the hybrid physical model to better characterize the nature of health or performance alarms that result in "no fault found" false alarms. Additionally, the use of physical principles helps I-Trend identify problems sooner. I-Trend technology is currently in use in several commercial aviation programs, and the U.S. Air Force recently tapped Scientific Monitoring to develop next-generation engine health-management software for monitoring its fleet of jet engines. Scientific Monitoring has continued the original NASA work, this time under a Phase III SBIR contract with a joint NASA-Pratt & Whitney aviation security program on propulsion-controlled aircraft under missile-damaged aircraft conditions.

  16. Bayesian Proteoform Modeling Improves Protein Quantification of Global Proteomic Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Datta, Susmita; Payne, Samuel H.; Kang, Jiyun; Bramer, Lisa M.; Nicora, Carrie D.; Shukla, Anil K.; Metz, Thomas O.; Rodland, Karin D.; Smith, Richard D.; Tardiff, Mark F.; McDermott, Jason E.; Pounds, Joel G.; Waters, Katrina M.


    As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that with an increase in throughput, protein quantification estimation from the native measured peptides has become a computational task. A limitation to existing computationally-driven protein quantification methods is that most ignore protein variation, such as alternate splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which will affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses, such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptide signatures to identify peptides that are outside the dominant pattern, or the existence of multiple over-expressed patterns, to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study from mouse plasma samples and demonstrate that BP-Quant achieves similar accuracy as the current state-of-the-art methods at proteoform identification with significantly better specificity. BP-Quant is available as MatLab® and R packages at

  17. Soil hydraulic properties near saturation, an improved conductivity model

    DEFF Research Database (Denmark)

    Børgesen, Christen Duus; Jacobsen, Ole Hørbye; Hansen, Søren


    of commonly used hydraulic conductivity models and give suggestions for improved models. Water retention and near saturated and saturated hydraulic conductivity were measured for a variety of 81 top and subsoils. The hydraulic conductivity models by van Genuchten [van Genuchten, 1980. A closed-form equation....... Reports and Dissertations 9.] were optimised to describe the unsaturated hydraulic conductivity in the range measured. Different optimisation procedures were tested. Using the measured saturated hydraulic conductivity in the vGM model tends to overestimate the unsaturated hydraulic conductivity....... Optimising a matching factor (k0) improved the fit considerably whereas optimising the l-parameter in the vGM model improved the fit only slightly. The vGM was improved with an empirical scaling function to account for the rapid increase in conductivity near saturation. Using the improved models...
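The van Genuchten-Mualem (vGM) conductivity model discussed above, including a matching factor k0 like the one optimised in the study, can be sketched as follows. Parameter values are illustrative, not the fitted values for the Danish soils:

```python
import numpy as np

def vgm_conductivity(h, Ks, alpha, n, l=0.5, k0=1.0):
    """van Genuchten-Mualem hydraulic conductivity K(h) (same units as
    Ks) for suction head h >= 0.  k0 is a matching factor; k0 = 1 and
    l = 0.5 recover the standard vGM form."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * h) ** n) ** (-m)     # effective saturation
    return k0 * Ks * Se ** l * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# At saturation (h = 0) the model returns k0 * Ks:
print(vgm_conductivity(0.0, Ks=10.0, alpha=0.02, n=1.4))  # 10.0
```

Optimising `k0` rescales the whole curve to the measured conductivities, which is why, as the abstract notes, it improves the fit more than adjusting the pore-connectivity parameter `l` alone.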

  18. Bootstrap model selection had similar performance for selecting authentic and noise variables compared to backward variable elimination: a simulation study. (United States)

    Austin, Peter C


    Researchers have proposed using bootstrap resampling in conjunction with automated variable selection methods to identify predictors of an outcome and to develop parsimonious regression models. Using this method, multiple bootstrap samples are drawn from the original data set. Traditional backward variable elimination is used in each bootstrap sample, and the proportion of bootstrap samples in which each candidate variable is identified as an independent predictor of the outcome is determined. The performance of this method for identifying predictor variables has not been examined. Monte Carlo simulation methods were used to determine the ability of bootstrap model selection methods to correctly identify predictors of an outcome when those variables that are selected for inclusion in at least 50% of the bootstrap samples are included in the final regression model. We compared the performance of the bootstrap model selection method to that of conventional backward variable elimination. Bootstrap model selection tended to select the true regression model in approximately the same proportion of simulations as conventional backward variable elimination. Bootstrap model selection performed comparably to backward variable elimination for identifying the true predictors of a binary outcome.
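The procedure described above, backward elimination inside each bootstrap sample followed by a 50% inclusion threshold, can be sketched as follows. This toy version uses a |t|-statistic cutoff in place of a formal p-value test, and the data-generating model is invented for illustration:

```python
import numpy as np

def backward_elimination(X, y, names, t_drop=2.0):
    """OLS backward elimination: repeatedly drop the predictor with the
    smallest |t| statistic until all remaining |t| >= t_drop
    (|t| ~ 2 corresponds roughly to p ~ 0.05)."""
    keep = list(range(X.shape[1]))
    while keep:
        Xk = np.column_stack([np.ones(len(y)), X[:, keep]])
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        sigma2 = resid @ resid / (len(y) - Xk.shape[1])
        cov = sigma2 * np.linalg.inv(Xk.T @ Xk)
        t = np.abs(beta[1:]) / np.sqrt(np.diag(cov)[1:])
        worst = int(np.argmin(t))
        if t[worst] >= t_drop:
            break
        keep.pop(worst)
    return {names[j] for j in keep}

rng = np.random.default_rng(42)
n, names = 200, ["x1", "x2", "x3", "x4", "x5", "x6"]
X = rng.normal(size=(n, 6))
y = 1.0 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=n)  # x1, x2 authentic

# Bootstrap model selection: run elimination in B resamples and keep the
# variables selected in at least 50% of them.
B, counts = 100, {v: 0 for v in names}
for _ in range(B):
    idx = rng.integers(0, n, size=n)
    for v in backward_elimination(X[idx], y[idx], names):
        counts[v] += 1
selected = {v for v, c in counts.items() if c >= B / 2}
print(selected)   # the authentic predictors x1 and x2 survive
```

The inclusion frequencies in `counts` are the quantity the study evaluates: authentic variables should appear in nearly every resample, noise variables only occasionally.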

  19. Improvement of core degradation model in ISAAC

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dong Ha; Kim, See Darl; Park, Soo Yong


    If the water inventory in the fuel channels depletes and the fuel rods are exposed to steam after uncovery in the pressure tube, the decay heat generated from the fuel rods is transferred to the pressure tube and to the calandria tube by radiation, and finally to the moderator in the calandria tank by conduction. During this process, the cladding will be heated first and ballooned when the fuel gap internal pressure exceeds the primary system pressure. The pressure tube will also be ballooned and will touch the calandria tube, increasing the heat transfer rate to the moderator. Although this situation is not desirable, the fuel channel is expected to maintain its integrity as long as the calandria tube is submerged in the moderator, because the decay heat can be removed to the moderator through radiation and conduction. Therefore, loss of coolant and moderator inside and outside the channel may cause severe core damage, including horizontal fuel channel sagging and finally loss of channel integrity. The sagged channels contact the channels located below and lose their heat transfer area to the moderator. As the accident progresses, the disintegrated fuel channels will be heated up and relocated onto the bottom of the calandria tank. If the temperature of these relocated materials is high enough to attack the calandria tank, the calandria tank would fail and molten material would contact the calandria vault water. Steam explosion and/or rapid steam generation from this interaction may threaten containment integrity. Though a detailed model is required to simulate a severe accident at CANDU plants, the complexity of the phenomena and inner structures, as well as the lack of experimental data, forces the choice of a simple but reasonable model as a first step. ISAAC 1.0 was developed to model the basic physicochemical phenomena during severe accident progression. At present, ISAAC 2.0 is being developed for accident management guide development and strategy evaluation.

  20. Improvement of core degradation model in ISAAC

    International Nuclear Information System (INIS)

    Kim, Dong Ha; Kim, See Darl; Park, Soo Yong


    If the water inventory in the fuel channels depletes and the fuel rods are exposed to steam after uncovery in the pressure tube, the decay heat generated from the fuel rods is transferred to the pressure tube and to the calandria tube by radiation, and finally to the moderator in the calandria tank by conduction. During this process, the cladding will be heated first and ballooned when the fuel gap internal pressure exceeds the primary system pressure. The pressure tube will also be ballooned and will touch the calandria tube, increasing the heat transfer rate to the moderator. Although this situation is not desirable, the fuel channel is expected to maintain its integrity as long as the calandria tube is submerged in the moderator, because the decay heat can be removed to the moderator through radiation and conduction. Therefore, loss of coolant and moderator inside and outside the channel may cause severe core damage, including horizontal fuel channel sagging and finally loss of channel integrity. The sagged channels contact the channels located below and lose their heat transfer area to the moderator. As the accident progresses, the disintegrated fuel channels will be heated up and relocated onto the bottom of the calandria tank. If the temperature of these relocated materials is high enough to attack the calandria tank, the calandria tank would fail and molten material would contact the calandria vault water. Steam explosion and/or rapid steam generation from this interaction may threaten containment integrity. Though a detailed model is required to simulate a severe accident at CANDU plants, the complexity of the phenomena and inner structures, as well as the lack of experimental data, forces the choice of a simple but reasonable model as a first step. ISAAC 1.0 was developed to model the basic physicochemical phenomena during severe accident progression. At present, ISAAC 2.0 is being developed for accident management guide development and strategy evaluation.

  1. Improving Flood Damage Assessment Models in Italy (United States)

    Amadio, M.; Mysiak, J.; Carrera, L.; Koks, E.


    The use of Stage-Damage Curve (SDC) models is prevalent in ex-ante assessments of flood risk. To assess the potential damage of a flood event, SDCs describe a relation between water depth and the associated potential economic damage over land use. This relation is normally developed and calibrated through site-specific analysis based on ex-post damage observations. In some cases (e.g. Italy) SDCs are transferred from other countries, undermining the accuracy and reliability of simulation results. Against this background, we developed a refined SDC model for Northern Italy, underpinned by damage compensation records from a recent flood event. Our analysis considers both damage to physical assets and production losses from business interruptions. While the former is calculated from land-use information, production losses are measured through the spatial distribution of Gross Value Added (GVA). An additional component of the model assesses crop-specific agricultural losses as a function of flood seasonality. Our results show that non-calibrated SDC values overestimate asset damage by up to a factor of 4.5 for the tested land-use categories. Furthermore, we estimate that production losses amount to around 6 per cent of the annual GVA. Also, maximum yield losses are less than half of the amount predicted by the standard SDC methods.
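The mechanics of an SDC lookup, a depth-dependent damage fraction per land-use class scaled by the exposed asset value, can be sketched as follows. The curve values below are invented placeholders, not the calibrated Italian curves from the study:

```python
import numpy as np

# Illustrative stage-damage curves: damage *fraction* of asset value as a
# function of water depth (m), one curve per land-use class.
SDC = {
    "residential": ([0.0, 0.5, 1.0, 2.0, 4.0], [0.00, 0.15, 0.30, 0.55, 0.85]),
    "industrial":  ([0.0, 0.5, 1.0, 2.0, 4.0], [0.00, 0.10, 0.25, 0.45, 0.75]),
}

def flood_damage(depth_m, land_use, asset_value):
    """Interpolate the stage-damage curve for a land-use class and scale
    the resulting damage fraction by the exposed asset value."""
    depths, fractions = SDC[land_use]
    frac = np.interp(depth_m, depths, fractions)
    return frac * asset_value

print(flood_damage(1.5, "residential", 200_000.0))   # ~85,000
```

Calibrating a curve to ex-post compensation records amounts to replacing the placeholder fractions above with observed damage ratios, which is exactly where a transferred, non-calibrated curve can be off by the factor reported in the abstract.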

  2. Matter of Similarity and Dissimilarity in Multi-Ethnic Society: A Model of Dyadic Cultural Norms Congruence

    Directory of Open Access Journals (Sweden)

    Abu Bakar Hassan


    Full Text Available Taking into consideration the diverse cultural norms in the Malaysian workplace, the proposed model explores Malaysian culture and identity against a backdrop of the pushes and pulls of ethnic diversity in Malaysia. The model seeks to understand relational norm congruence among multiethnic groups in Malaysia, which will enable us to identify Malaysian culture and identity. This is in line with recent calls by various interest groups in Malaysia to focus more on model designs that capture contextual and cultural factors that influence Malaysian culture and identity.

  3. Capability Maturity Model (CMM) for Software Process Improvements (United States)

    Ling, Robert Y.


    This slide presentation reviews the Avionic Systems Division's implementation of the Capability Maturity Model (CMM) for improvements in the software development process. The presentation reviews the process involved in implementing the model and the benefits of using CMM to improve the software development process.

  4. Modeling and improving Ethiopian pasture systems. (United States)

    Parisi, S G; Cola, G; Gilioli, G; Mariani, L


    The production of pasture in Ethiopia was simulated by means of a dynamic model. Most of the country is characterized by a tropical monsoon climate with mild temperatures and precipitation mainly concentrated in the June-September period (main rainy season). The production model is driven by solar radiation and takes into account limitations due to relocation, maintenance respiration, conversion to final dry matter, temperature, water stress, and nutrient availability. The model also considers the senescence of grassland, which strongly limits the nutritional value of grasses for livestock. The simulation for the 1982-2009 period, performed on gridded daily time series of rainfall and maximum and minimum temperature with a resolution of 0.5°, provided results comparable with values reported in the literature. Yearly mean yield in Ethiopia ranged between 1.8 metric tons per hectare (t ha-1) of dry matter (2002) and 2.6 t ha-1 (1989), with values above 2.5 t ha-1 attained in 1983, 1985, 1989, and 2008. The Ethiopian territory was subdivided into 1494 cells, and a frequency distribution of the per-cell yearly mean pasture production was obtained. This distribution ranges from 0 to 7 t ha-1, is right-skewed, and has a modal class between 1.5-2 t ha-1. Simulations carried out on long time series for this peculiar tropical environment yield many results relevant from an agroecological point of view on the spatial variability of pasture production, the main limiting factors (solar radiation, precipitation, temperature), and the relevant meteo-climatic cycles affecting pasture production (seasonal and inter-annual variability, ENSO). These results are useful for establishing an agro-ecological zoning of the Ethiopian territory.


    Directory of Open Access Journals (Sweden)

    Dusko Pavletic


    Full Text Available The paper presents the basis for an operational quality improvement model at the manufacturing process preparation level. Numerous appropriate quality assurance and improvement methods and tools are identified. Main manufacturing process principles are investigated in order to scrutinize one general model of a manufacturing process and to define the manufacturing process preparation level. Development and introduction of the operational quality improvement model are based on research conducted on the application possibilities of these methods and tools in real manufacturing processes in the shipbuilding and automotive industries. The basic model structure is described and presented by an appropriate general algorithm. The operational quality improvement model lays down the main guidelines for practical and systematic application of quality improvement methods and tools.

  6. Improved CHAID algorithm for document structure modelling (United States)

    Belaïd, A.; Moinel, T.; Rangoni, Y.


    This paper proposes a technique for the logical labelling of document images. It makes use of a decision-tree based approach to learn and then recognise the logical elements of a page. A state-of-the-art OCR gives the physical features needed by the system. Each block of text is extracted during the layout analysis and raw physical features are collected and stored in the ALTO format. The data-mining method employed here is the "Improved CHi-squared Automatic Interaction Detection" (I-CHAID). The contribution of this work is the insertion of logical rules extracted from the logical layout knowledge to support the decision tree. Two setups have been tested; the first uses one tree per logical element, the second one uses a single tree for all the logical elements we want to recognise. The main system, implemented in Java, coordinates the third-party tools (Omnipage for the OCR part, and SIPINA for the I-CHAID algorithm) using XML and XSL transforms. It was tested on around 1000 documents belonging to the ICPR'04 and ICPR'08 conference proceedings, representing about 16,000 blocks. The final error rate for determining the logical labels (among 9 different ones) is less than 6%.

  7. Estuarine modeling: Does a higher grid resolution improve model performance? (United States)

    Ecological models are useful tools to explore cause effect relationships, test hypothesis and perform management scenarios. A mathematical model, the Gulf of Mexico Dissolved Oxygen Model (GoMDOM), has been developed and applied to the Louisiana continental shelf of the northern ...

  8. Pion and an improved static bag model

    Energy Technology Data Exchange (ETDEWEB)

    Donoghue, J.F.; Johnson, K.


    Quark-model calculations involve an extended static object localized in space. We introduce new methods, involving momentum-space wave packets, which account for this localization. These methods have little effect on heavy states, whose sizes are large compared to their Compton size 1/m, but are very important for light particles such as the pion. In this treatment the pion's mass is naturally very small, and, in order to connect with a spontaneously broken chiral symmetry, we require that m_π vanish when the light quarks are massless. Expanding about this limit (and also readjusting the fit to other hadrons), we obtain m_q = (m_u + m_d)/2 = 33 MeV. We calculate F_π ≈ 145 MeV (using a normalization such that F_π|_exp = 93 MeV), F_K/F_π ≈ 1, and various corrections to static properties of baryons. In addition we explore the relationship of our methods with chiral perturbation theory, deriving the formula m_π² = (m_u + m_d)⟨π(p)|q̄(0)q(0)|π(p)⟩ in the appropriate approximation and commenting on the quark mass obtained from the nucleon's sigma term. Finally we discuss the bag model's use of the scalar density q̄q as an order parameter describing the separation of the spontaneously broken vacuum phase from the perturbative vacuum of the bag's interior.

  9. Advanced Model for Extreme Lift and Improved Aeroacoustics (AMELIA) (United States)

    Lichtwardt, Jonathan; Paciano, Eric; Jameson, Tina; Fong, Robert; Marshall, David


    With the very recent advent of NASA's Environmentally Responsible Aviation (ERA) Project, which is dedicated to designing aircraft that will reduce the impact of aviation on the environment, there is a need for research and development of methodologies to minimize fuel burn and emissions and to reduce community noise produced by regional airliners. ERA tackles airframe technology, propulsion technology, and vehicle systems integration to meet performance objectives in a time frame for the aircraft to be at a Technology Readiness Level (TRL) of 4-6 by the year 2020 (deemed N+2). The preceding project that investigated similar goals to ERA was NASA's Subsonic Fixed Wing (SFW) project. SFW focused on conducting research to improve prediction methods and technologies that will produce lower-noise, lower-emission, and higher-performing subsonic aircraft for the Next Generation Air Transportation System. The work in this investigation was performed under NASA Research Announcement (NRA) contract #NNL07AA55C, funded by Subsonic Fixed Wing. The project started in 2007 with the specific goal of conducting a large-scale wind tunnel test along with the development of new and improved predictive codes for advanced powered-lift concepts. Many of the predictive codes were used to refine the wind tunnel model outer mold line design. The goal of the large-scale wind tunnel test was to investigate powered-lift technologies and provide an experimental database to validate current and future modeling techniques. The powered-lift concept investigated was a Circulation Control (CC) wing in conjunction with over-the-wing mounted engines to entrain the exhaust and further increase the lift generated by CC technologies alone. The NRA was a five-year effort; during the first year the objective was to select and refine CESTOL concepts and then to complete a preliminary design of a large-scale wind tunnel model for the large-scale test. During the second, third, and fourth years the large-scale wind

  10. Self-similar decay to the marginally stable ground state in a model for film flow over inclined wavy bottoms

    Directory of Open Access Journals (Sweden)

    Tobias Hacker


    Full Text Available The integral boundary layer system (IBL with spatially periodic coefficients arises as a long wave approximation for the flow of a viscous incompressible fluid down a wavy inclined plane. The Nusselt-like stationary solution of the IBL is linearly at best marginally stable; i.e., it has essential spectrum at least up to the imaginary axis. Nevertheless, in this stable case we show that localized perturbations of the ground state decay in a self-similar way. The proof uses the renormalization group method in Bloch variables and the fact that in the stable case the Burgers equation is the amplitude equation for long waves of small amplitude in the IBL. It is the first time that such a proof is given for a quasilinear PDE with spatially periodic coefficients.

  11. Accelerating quality improvement within your organization: Applying the Model for Improvement. (United States)

    Crowl, Ashley; Sharma, Anita; Sorge, Lindsay; Sorensen, Todd


    To discuss the fundamentals of the Model for Improvement and how the model can be applied to quality improvement activities associated with medication use, including understanding the three essential questions that guide quality improvement, applying a process for actively testing change within an organization, and measuring the success of these changes on care delivery. PubMed was searched from 1990 through April 2014 using the terms quality improvement, process improvement, hospitals, and primary care. At the authors' discretion, studies were selected based on their relevance in demonstrating the quality improvement process and tests of change within an organization. Organizations continuously seek to enhance quality in patient care services, and much of this work focuses on improving care delivery processes. Yet change in these systems is often slow, which can lead to frustration or apathy among frontline practitioners. Adopting and applying the Model for Improvement as a core strategy for quality improvement efforts can accelerate the process. While the model is well known in hospitals and primary care settings, it is not always familiar to pharmacists. In addition, while some organizations may be familiar with "plan, do, study, act" (PDSA) cycles, one element of the Model for Improvement, many do not apply them effectively. The goal of the model is to combine a continuous process of small tests of change (PDSA cycles), within an overarching aim, with a longitudinal measurement process. This process differs from other forms of improvement work that plan and implement large-scale change over an extended period, followed by months of data collection. In this scenario it may take months or years to determine whether an intervention will have a positive impact. By following the Model for Improvement, frontline practitioners and their organizational leaders quickly identify strategies that make a positive difference and result in a greater degree of

  12. The Effect of a Model's HIV Status on Self-Perceptions: A Self-Protective Similarity Bias. (United States)

    Gump, Brooks B.; Kulik, James A.


    Examined how information about another person's HIV status influences self-perceptions and behavioral intentions. Individuals perceived their own personalities and behaviors as more dissimilar to another's if that person's HIV status was believed to be positive rather than negative or unknown. Exposure to an HIV-positive model produced greater intentions…

  13. A Model to Explain At-Risk/Problem Gambling among Male and Female Adolescents: Gender Similarities and Differences (United States)

    Donati, Maria Anna; Chiesi, Francesca; Primi, Caterina


    This study aimed at testing a model in which cognitive, dispositional, and social factors were integrated into a single perspective as predictors of gambling behavior. We also aimed at providing further evidence of gender differences related to adolescent gambling. Participants were 994 Italian adolescents (64% Males; Mean age = 16.57).…

  14. Contrasting weight changes with LY2605541, a novel long-acting insulin, and insulin glargine despite similar improved glycaemic control in T1DM and T2DM. (United States)

    Jacober, S J; Rosenstock, J; Bergenstal, R M; Prince, M J; Qu, Y; Beals, J M


    The basal insulin analogue LY2605541, a PEGylated insulin lispro with prolonged duration of action, was previously shown to be associated with modest weight loss in Phase 2, randomized, open-label trials in type 2 (N=288) and type 1 (N=137) diabetes mellitus (T2DM and T1DM), compared with modest weight gain with insulin glargine. Exploratory analyses were conducted to further characterize these findings. Pearson correlations between change in body weight and other variables were calculated. Continuous variables were analysed using a mixed linear model with repeated measurements. Proportions of subjects with weight loss were analysed using Fisher's exact test for T2DM and Nagelkerke's method for T1DM. Weight loss was more common in LY2605541-treated patients than in patients treated with insulin glargine (T2DM: 56.9 vs. 40.2%, p=0.011; T1DM: 66.1 vs. 40.3%), as was marked weight loss (T2DM: 4.8 vs. 0%, p=0.033; T1DM: 11.9 vs. 0.8%). In the T2DM study, weight change did not correlate with baseline body mass index (BMI) or change in HDL-cholesterol in either treatment group. No consistent correlations were found across both studies between weight change and any of the variables assessed; however, weight change was significantly correlated with hypoglycaemia rate in glargine-treated T2DM patients. In two Phase 2 trials, improved glycaemic control with the long-acting basal insulin analogue LY2605541 is associated with weight loss in previously insulin-treated patients. This weight change is independent of baseline BMI or hypoglycaemia.

  15. HCV kinetic and modeling analyses indicate similar time to cure among sofosbuvir combination regimens with daclatasvir, simeprevir or ledipasvir. (United States)

    Dahari, Harel; Canini, Laetitia; Graw, Frederik; Uprichard, Susan L; Araújo, Evaldo S A; Penaranda, Guillaume; Coquet, Emilie; Chiche, Laurent; Riso, Aurelie; Renou, Christophe; Bourliere, Marc; Cotler, Scott J; Halfon, Philippe


    Recent clinical trials of direct-acting antiviral agents (DAAs) against hepatitis C virus (HCV) achieved >90% sustained virological response (SVR) rates, suggesting that cure often took place before the end of treatment (EOT). We sought to evaluate retrospectively whether early response kinetics can provide the basis to individualize therapy to achieve optimal results while reducing duration and cost. 58 chronic HCV patients were treated with 12-week sofosbuvir+simeprevir (n=19), sofosbuvir+daclatasvir (n=19), or sofosbuvir+ledipasvir in three French referral centers. HCV was measured at baseline, day 2, every other week, EOT and 12 weeks post EOT. Mathematical modeling was used to predict the time to cure, i.e., <1 virus copy in the entire extracellular body fluid. All but one patient, who relapsed, achieved SVR. Mean age was 60 ± 11 years, 53% were male, 86% had HCV genotype 1, 9% were HIV coinfected, 43% had advanced fibrosis (F3), and 57% had cirrhosis. At weeks 2, 4 and 6, 48%, 88% and 100% of patients had HCV < 15 IU/ml, with 27%, 74% and 91% of observations having target not detected, respectively. Modeling predicted that 23 (43%), 16 (30%), 7 (13%), 5 (9%) and 3 (5%) subjects would reach cure within 6, 8, 10, 12 and 13 weeks of therapy, respectively. The modeling suggested that the patient who relapsed would have benefitted from an additional week of sofosbuvir+ledipasvir. Adjusting duration of treatment according to the modeling predicts reduced medication costs of 43-45% and 17-30% in subjects who had HCV < 15 IU/ml at weeks 2 and 4, respectively. The use of early viral kinetic analysis has the potential to individualize the duration of DAA therapy with a projected average cost saving of 16-20% per 100 treated persons. Copyright © 2016 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
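The time-to-cure idea can be illustrated with a toy calculation: project a biphasic exponential viral decline and find when the whole-body virus count falls below one copy. All parameter values below (decline rates, fluid volume, IU-to-copy conversion) are assumptions for illustration, not the authors' fitted values.

```python
import math

# Hedged sketch of the time-to-cure idea (NOT the authors' fitted model):
# biphasic exponential decline of viral load, cure defined as <1 virus
# copy in the entire extracellular body fluid. All parameters illustrative.

V0 = 1e6               # baseline viral load, IU/ml (illustrative)
A, lam1 = 0.99, 6.0    # fast first-phase fraction and rate (per day)
lam2 = 0.35            # slower second-phase rate (per day)
FLUID_ML = 15000.0     # assumed extracellular fluid volume, ml
IU_PER_COPY = 1.0      # assumed IU-to-copy conversion

def total_copies(t_days):
    """Projected whole-body virus copies at time t under biphasic decline."""
    v = V0 * (A * math.exp(-lam1 * t_days) + (1 - A) * math.exp(-lam2 * t_days))
    return v * FLUID_ML / IU_PER_COPY

# scan for the first time the whole-body count falls below one copy
t = 0.0
while total_copies(t) >= 1.0:
    t += 0.5
```

With these made-up parameters the projected time to cure is roughly eight weeks; the clinical point is that patients with faster second-phase decline reach this threshold well before a fixed 12-week EOT.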

  16. Statistical Similarities Between WSA-ENLIL+Cone Model and MAVEN in Situ Observations From November 2014 to March 2016 (United States)

    Lentz, C. L.; Baker, D. N.; Jaynes, A. N.; Dewey, R. M.; Lee, C. O.; Halekas, J. S.; Brain, D. A.


    Normal solar wind flows and intense solar transient events interact directly with the upper Martian atmosphere due to the absence of an intrinsic global planetary magnetic field. Since the launch of the Mars Atmosphere and Volatile EvolutioN (MAVEN) mission, there are now new means to directly observe solar wind parameters at the planet's orbital location for limited time spans. Due to MAVEN's highly elliptical orbit, in situ measurements cannot be taken while MAVEN is inside Mars' magnetosheath. To model solar wind conditions during these atmospheric and magnetospheric passages, this research project utilized the solar wind forecasting capabilities of the WSA-ENLIL+Cone model. The model was used to simulate solar wind parameters, including magnetic field magnitude, plasma particle density, dynamic pressure, proton temperature, and velocity, over a segment lasting four Carrington rotations. An additional simulation lasting 18 Carrington rotations was then conducted. The precision of each simulation was examined for intervals when MAVEN was in the upstream solar wind, that is, with no exospheric or magnetospheric phenomena altering in situ measurements. It was determined that generalized, extensive simulations have prediction capabilities comparable to those of shorter, more comprehensive simulations. Generally, this study aimed to quantify the loss of detail in long-term simulations and to determine whether extended simulations can provide accurate, continuous upstream solar wind conditions when in situ measurements are lacking.

  17. Bayesian Data Assimilation for Improved Modeling of Road Traffic

    NARCIS (Netherlands)

    Van Hinsbergen, C.P.Y.


    This thesis deals with the optimal use of existing models that predict certain phenomena of the road traffic system. Such models are extensively used in Advanced Traffic Information Systems (ATIS), Dynamic Traffic Management (DTM) or Model Predictive Control (MPC) approaches in order to improve the

  18. Similar response in male and female B10.RIII mice in a murine model of allergic airway inflammation

    DEFF Research Database (Denmark)

    Matheu, Victor; Barrios, Ysamar; Arnau, Maria-Rosa


    BACKGROUND: Several reports have been published on the gender differences associated with allergies in mice. GOAL: In the present study we investigate the influence of gender on the allergy response using a strain of mice, B10.RIII, which is commonly used in the collagen-induced arthritis murine model. ... METHODS: Both male and female B10.RIII young mice were immunized with OVA and challenged four times with OVA intranasally. Samples were taken 24 h after the last challenge, and eosinophils in bronchoalveolar lavage (BAL) and parenchyma, Th-2 cytokines in BAL, total and antigen-specific IgE in sera...

  19. Motivation to Improve Work through Learning: A Conceptual Model

    Directory of Open Access Journals (Sweden)

    Kueh Hua Ng


    Full Text Available This study aims to enhance our current understanding of the transfer of training by proposing a conceptual model in which motivation to improve work through learning mediates the relationship between social support and the transfer of training. The examination of the motivation to improve work through learning construct offers a holistic view of a learner's profile in a workplace setting, emphasizing learning for the improvement of work performance. The proposed conceptual model is expected to benefit human resource development theory building, as well as field practitioners, by emphasizing the motivational aspects crucial for a successful transfer of training.

  20. Fuzzy Similarity and Fuzzy Inclusion Measures in Polyline Matching: A Case Study of Potential Streams Identification for Archaeological Modelling in GIS (United States)

    Ďuračiová, Renata; Rášová, Alexandra; Lieskovský, Tibor


    When combining spatial data from various sources, it is often important to determine the similarity or identity of spatial objects. Besides differences in geometry, representations of spatial objects are inevitably more or less uncertain. Fuzzy set theory can be used both to model the uncertainty of spatial objects and to determine the identity, similarity, and inclusion of two sets as fuzzy identity, fuzzy similarity, and fuzzy inclusion. In this paper, we propose using fuzzy measures to determine the similarity or identity of two uncertain spatial object representations in geographic information systems. Labelling spatial objects by the degree of their similarity or inclusion measure makes the process of their identification more efficient and reduces the need for manual control. This leads to a simpler process of updating spatial datasets from external data sources. We use this approach to obtain an accurate and correct representation of historical streams derived from a contemporary digital elevation model, i.e., we identify the segments that are similar to the streams depicted on historical maps.
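A minimal sketch of such fuzzy measures, assuming the common sum-of-minima formulations (the abstract does not specify the exact operators used): similarity as a fuzzy Jaccard index and inclusion as the degree to which fuzzy set A is contained in fuzzy set B.

```python
# Sketch of fuzzy similarity and fuzzy inclusion over membership-degree
# vectors. The specific operators (min/max, sums) are a common choice,
# assumed here; the paper may use different t-norms.

def fuzzy_similarity(a, b):
    """Fuzzy Jaccard index: sum(min) / sum(max) over membership degrees."""
    return sum(min(x, y) for x, y in zip(a, b)) / sum(max(x, y) for x, y in zip(a, b))

def fuzzy_inclusion(a, b):
    """Degree to which fuzzy set A is included in B: sum(min) / sum(A)."""
    return sum(min(x, y) for x, y in zip(a, b)) / sum(a)

# membership degrees of two uncertain stream representations over common cells
stream_a = [0.9, 0.7, 0.4, 0.0]   # e.g. DEM-derived potential stream
stream_b = [1.0, 0.6, 0.5, 0.2]   # e.g. stream digitized from a historical map
sim = fuzzy_similarity(stream_a, stream_b)
inc = fuzzy_inclusion(stream_a, stream_b)
```

Segments whose similarity or inclusion score exceeds a chosen threshold could then be accepted automatically, which is what reduces the manual-control effort the abstract mentions.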

  1. Improving MWA/HERA Calibration Using Extended Radio Source Models (United States)

    Cunningham, Devin; Tasker, Nicholas; University of Washington EoR Imaging Team


    The formation of the first stars and galaxies in the universe is among the greatest mysteries in astrophysics. Using special-purpose radio interferometers, it is possible to detect the faint 21 cm radio line emitted by neutral hydrogen in order to characterize the Epoch of Reionization (EoR) and the formation of the first stars and galaxies. We create better models of extended radio sources by reducing the component count of deconvolved Murchison Widefield Array (MWA) data by up to 90%, while preserving real structure and flux information. This real structure is confirmed by comparisons to observations of the same extended radio sources from the TIFR GMRT Sky Survey (TGSS) and the NRAO VLA Sky Survey (NVSS), which observe in a frequency range similar to the MWA's. These data reduction techniques not only offer improvements to the calibration of the MWA, but also hold applications for the future sky-based calibration of the Hydrogen Epoch of Reionization Array (HERA). This has the potential to reduce noise in the power spectra from these instruments and consequently provide a deeper view into the EoR window.

  2. Inspiration or deflation? Feeling similar or dissimilar to slim and plus-size models affects self-evaluation of restrained eaters. (United States)

    Papies, Esther K; Nicolaije, Kim A H


    The present studies examined the effect of perceiving images of slim and plus-size models on restrained eaters' self-evaluation. While previous research has found that such images can lead to either inspiration or deflation, we argue that these inconsistencies can be explained by differences in perceived similarity with the presented model. The results of two studies (ns=52 and 99) confirmed this and revealed that restrained eaters with high (low) perceived similarity to the model showed more positive (negative) self-evaluations when they viewed a slim model, compared to a plus-size model. In addition, Study 2 showed that inducing in participants a similarities mindset led to more positive self-evaluations after viewing a slim compared to a plus-size model, but only among restrained eaters with a relatively high BMI. These results are discussed in the context of research on social comparison processes and with regard to interventions for protection against the possible detrimental effects of media images. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. A choice modelling analysis on the similarity between distribution utilities' and industrial customers' price and quality preferences

    International Nuclear Information System (INIS)

    Soederberg, Magnus


    The Swedish Electricity Act states that electricity distribution must comply with both price and quality requirements. In order to maintain efficient regulation it is necessary to firstly, define quality attributes and secondly, determine a customer's priorities concerning price and quality attributes. If distribution utilities gain an understanding of customer preferences and incentives for reporting them, the regulator can save a lot of time by surveying them rather than their customers. This study applies a choice modelling methodology where utilities and industrial customers are asked to evaluate the same twelve choice situations in which price and four specific quality attributes are varied. The preferences expressed by the utilities, and estimated by a random parameter logit, correspond quite well with the preferences expressed by the largest industrial customers. The preferences expressed by the utilities are reasonably homogenous in relation to forms of association (private limited, public and trading partnership). If the regulator acts according to the preferences expressed by the utilities, smaller industrial customers will have to pay for quality they have not asked for. (author)

  4. Similarities and differences of serotonin and its precursors in their interactions with model membranes studied by molecular dynamics simulation (United States)

    Wood, Irene; Martini, M. Florencia; Pickholz, Mónica


    In this work, we report a molecular dynamics (MD) simulation study of biologically relevant molecules, serotonin (neutral and protonated) and its precursors, tryptophan and 5-hydroxy-tryptophan, in a fully hydrated bilayer of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidyl-choline (POPC). The simulations were carried out in the fluid lamellar phase of POPC at constant pressure and temperature. Two guest molecules of each type were initially placed in the water phase. We have analyzed the main localization, preferential orientation and specific interactions of the guest molecules within the bilayer. During the simulation run, the four molecules were preferentially found at the water-lipid interface. We found that the interactions that stabilize the systems are essentially hydrogen bonds, salt bridges and cation-π interactions. None of the guest molecules has access to the hydrophobic region of the bilayer. Besides, the zwitterionic molecules have access to the water phase, while protonated serotonin is anchored at the interface. Even taking into account that these simulations were done using a model membrane, our results suggest that the studied molecules could not cross the blood-brain barrier (BBB) by diffusion. These results are in good agreement with works showing that serotonin and Trp do not cross the BBB by simple diffusion.

  5. Improved Kinetic Models for High-Speed Combustion Simulation

    National Research Council Canada - National Science Library

    Montgomery, C. J; Tang, Q; Sarofim, A. F; Bockelie, M. J; Gritton, J. K; Bozzelli, J. W; Gouldin, F. C; Fisher, E. M; Chakravarthy, S


    Report developed under an STTR contract. The overall goal of this STTR project has been to improve the realism of chemical kinetics in computational fluid dynamics modeling of hydrocarbon-fueled scramjet combustors...

  6. Improved gap conductance model for the TRAC code

    International Nuclear Information System (INIS)

    Hatch, S.W.; Mandell, D.A.


    The purpose of the present work, as indicated earlier, is to improve on the constant fuel-clad spacing in TRAC-P1A without significantly increasing computer costs. It is realized that the simple model proposed may not be accurate enough for some cases, but for the initial calculations made, the DELTAR model improves the predictions over the constant Δr results of TRAC-P1A, and the additional computing costs are negligible.

  7. Multi-Layer Identification of Highly-Potent ABCA1 Up-Regulators Targeting LXRβ Using Multiple QSAR Modeling, Structural Similarity Analysis, and Molecular Docking

    Directory of Open Access Journals (Sweden)

    Meimei Chen


    Full Text Available In this study, in silico approaches, including multiple QSAR modeling, structural similarity analysis, and molecular docking, were applied to develop QSAR classification models as a fast screening tool for identifying highly potent ABCA1 up-regulators targeting LXRβ based on a series of new flavonoids. Initially, four modeling approaches, including linear discriminant analysis, support vector machine, radial basis function neural network, and classification and regression trees, were applied to construct different QSAR classification models. The statistical results indicated that these four kinds of QSAR models were powerful tools for screening highly potent ABCA1 up-regulators. Then, a consensus QSAR model was developed by combining the predictions from these four models. To discover new ABCA1 up-regulators with maximum accuracy, the compounds in the ZINC database that fulfilled the requirement of a structural similarity of 0.7 to a known potent ABCA1 up-regulator were subjected to the consensus QSAR model, which led to the discovery of 50 compounds. Finally, they were docked into the LXRβ binding site to understand their role in up-regulating ABCA1 expression. The excellent binding modes and docking scores of 10 hit compounds suggested they were highly potent ABCA1 up-regulators targeting LXRβ. Overall, this study provides an effective strategy for discovering highly potent ABCA1 up-regulators.
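The screening pipeline can be sketched as a similarity filter followed by a consensus vote. The abstract does not specify its similarity metric or consensus rule, so the Tanimoto coefficient on binary fingerprints and a simple majority vote below are illustrative assumptions.

```python
# Toy sketch of a similarity-filter + consensus-QSAR screen. The Tanimoto
# measure, the majority-vote rule and all fingerprints are assumptions;
# the study's actual metric and combination rule are not given in the abstract.

def tanimoto(fp1, fp2):
    """Tanimoto coefficient between two binary fingerprints (sets of on-bits)."""
    inter = len(fp1 & fp2)
    return inter / (len(fp1) + len(fp2) - inter)

def consensus(votes):
    """Majority vote over individual QSAR model predictions (1 = up-regulator)."""
    return 1 if sum(votes) * 2 > len(votes) else 0

reference = {1, 2, 3, 5, 8}       # on-bits of a known potent up-regulator (made up)
candidate = {1, 2, 3, 5, 8, 9}    # candidate compound fingerprint (made up)

prediction = None
if tanimoto(reference, candidate) >= 0.7:   # the 0.7 similarity threshold
    # four hypothetical model votes (e.g. LDA, SVM, RBF-NN, CART)
    prediction = consensus([1, 1, 0, 1])
```

Only candidates passing both the similarity gate and the consensus vote would go forward to docking.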

  8. Exploring morphological indicators for improved model parameterization in transport modeling (United States)

    Kumahor, Samuel K.; Vogel, Hans-Jörg


    Two phenomena that control the transport of colloidal materials, including nanoparticles, are interactions at the air-water and solid-water interfaces during unsaturated flow. Current approaches to multiphase inverse modeling for quantifying the associated processes rely on empirical parameters and/or assumptions to characterise these interactions. This introduces uncertainty into model outcomes. Two classical examples are: (i) application of the Young-Laplace equation, assuming spherical air-water interfaces, to quantify interactions at the air-water interface and (ii) the choice of parameters that define the nature and shape of retention profiles for modeling straining at the solid-water interface. In this contribution, an alternate approach is presented that uses morphological indicators derived from X-ray micro-computed tomography (µ-CT) to quantify interactions at both the air-water and solid-water interfaces. These indicators, related to air-water and solid-water interface densities, are thought to alleviate the deficiencies associated with modeling interactions at both interfaces.
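One way such an interface-density indicator could be computed from segmented µ-CT data is by counting voxel faces shared by two phases. This is a generic sketch of that idea, not the authors' method; the grid, phase labels and face-counting scheme are assumptions.

```python
# Generic sketch (NOT the authors' workflow): estimate an interface
# density from a segmented 3D label grid by counting voxel faces shared
# by two phases. 'a' = air, 'w' = water, 's' = solid (illustrative labels).

def interface_faces(grid, phase_a, phase_b):
    """Count faces between phase_a and phase_b voxels in a 3D label grid."""
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])
    faces = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                # check the three forward neighbours so each face counts once
                for dz, dy, dx in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                    z2, y2, x2 = z + dz, y + dy, x + dx
                    if z2 < nz and y2 < ny and x2 < nx:
                        pair = {grid[z][y][x], grid[z2][y2][x2]}
                        if pair == {phase_a, phase_b}:
                            faces += 1
    return faces

# 2x2x2 toy grid
grid = [[["a", "w"], ["w", "w"]],
        [["s", "w"], ["s", "w"]]]
aw_faces = interface_faces(grid, "a", "w")
```

Dividing the face count (times the face area) by the sample volume gives an interface density per unit volume, the kind of indicator the abstract refers to.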

  9. Improvement and Validation of Weld Residual Stress Modelling Procedure

    International Nuclear Information System (INIS)

    Zang, Weilin; Gunnars, Jens; Dong, Pingsha; Hong, Jeong K.


    The objective of this work is to identify and evaluate improvements to the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study focused on the development and validation of an improved weld residual stress modelling procedure, taking advantage of recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on the use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data are not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions by validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure and those from the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data are available, a mixed hardening model should be used.

  10. An improved structural model for seismic analysis of tall frames

    African Journals Online (AJOL)

    Dr Obe

    This paper proposes and examines an improved structural model that overcomes the deficiencies of the shear-frame model by considering the effects of flexible horizontal members and column axial loads in the seismic analysis of multi-storey frames. The matrix displacement method of analysis is used on the basis of ...

  11. A model of continuous quality improvement for health service organisations. (United States)

    Thornber, M


    Continuous Quality Improvement (or Total Quality Management) is an approach to management originally used in manufacturing and now being applied in the health services. This article describes a model of Continuous Quality Improvement which has been used in NSW public and private hospitals. The model consists of Ten Key Elements. The first driving force of this model is 'defining quality in terms of customer expectations'. The second driving force emphasises that 'quality improvement is a leadership issue'. Leaders are required to: coordinate staff participation in work process analysis; train staff in the customer service orientation; lead effective meetings; and negotiate with both internal and external service partners. Increased staff motivation, quality improvement and a reduction in running costs are seen to be the benefits of CQI for health service organisations.

  12. Improvement of a near wake model for trailing vorticity

    DEFF Research Database (Denmark)

    Pirrung, Georg; Hansen, Morten Hartvig; Aagaard Madsen, Helge


    A near wake model, originally proposed by Beddoes, is further developed. The purpose of the model is to account for the radially dependent time constants of the fast aerodynamic response and to provide a tip loss correction. It is based on lifting line theory and models the downwash due to roughly the first 90 degrees of rotation. This restriction of the model to the near wake allows for using a computationally efficient indicial function algorithm. The aim of this study is to improve the accuracy of the downwash close to the root and tip of the blade and to decrease the sensitivity of the model to temporal discretization, both regarding numerical stability and quality of the results. The modified near wake model is coupled to an aerodynamics model, which consists of a blade element momentum model with dynamic inflow for the far wake and a 2D shed vorticity model that simulates the unsteady buildup...

  13. License plate location based on improved visual attention model (United States)

    Yao, Zhenjie; Yi, Weidong


    License plate recognition (LPR) systems play an important role in intelligent transportation systems (ITSs). It is difficult to locate a license plate in a complex scene. Our location strategy integrates the blue-region, vertical-texture and contrast features of the LP within the framework of an improved visual attention model. We improve the visual attention model by replacing normalization and linear combination with feature-image binarization and logical operations. The multi-scale centre-surround difference mechanism in the visual attention model makes the feature extraction robust. Tests on pictures captured by different equipment under different environments give encouraging results; the success rate for location is as high as 95.28%.
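The modified combination step described above can be sketched as per-feature binarization followed by a logical AND, in place of saliency-map normalization and summation. The thresholds and tiny feature maps below are toy values, not the paper's.

```python
# Toy sketch of binarization + logical combination of feature maps
# (values and the 0.5 threshold are illustrative, not from the paper).

def binarize(feature_map, threshold):
    """Turn a real-valued feature map into a 0/1 mask."""
    return [[1 if v >= threshold else 0 for v in row] for row in feature_map]

def logical_and(*maps):
    """Keep only cells where every binarized feature map agrees."""
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[int(all(m[r][c] for m in maps)) for c in range(cols)] for r in range(rows)]

blue     = [[0.9, 0.2], [0.8, 0.1]]   # blue-region response
texture  = [[0.7, 0.6], [0.9, 0.2]]   # vertical-texture response
contrast = [[0.8, 0.3], [0.7, 0.9]]   # local-contrast response

mask = logical_and(binarize(blue, 0.5), binarize(texture, 0.5), binarize(contrast, 0.5))
```

Cells surviving the AND are plate candidates; requiring all three cues to agree is what suppresses blue-but-untextured or textured-but-low-contrast background regions.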

  14. Process Correlation Analysis Model for Process Improvement Identification

    Directory of Open Access Journals (Sweden)

    Su-jin Choi


    software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort and expertise required. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data from improvement practices. We evaluate the model using industrial data.

  15. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater


    Full Text Available This paper presents a new mathematical model for selecting an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed-integer program, which can be solved to optimality using traditional methods. Preliminary results of the implementation indicate that productivity can be improved through a change of equipment, and the model can easily be applied in both manufacturing and service industries.
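The selection problem can be sketched as a 0/1 program: pick a subset of techniques maximizing total productivity gain under a budget. Here a tiny instance is solved by enumeration (a mixed-integer solver would be used at the paper's scale of fifty-four techniques); the technique names, gains and costs are made up.

```python
from itertools import combinations

# Toy 0/1 selection problem (illustrative stand-in for the paper's MIP):
# choose improvement techniques maximizing total gain within a budget.
# Names, gains and costs below are invented for demonstration.

techniques = {  # name: (productivity_gain, cost)
    "5S":         (4.0, 2.0),
    "TPM":        (7.0, 5.0),
    "Kaizen":     (5.0, 3.0),
    "Automation": (9.0, 8.0),
}
BUDGET = 10.0

best_gain, best_set = 0.0, ()
names = list(techniques)
for r in range(len(names) + 1):
    for subset in combinations(names, r):
        cost = sum(techniques[t][1] for t in subset)
        gain = sum(techniques[t][0] for t in subset)
        if cost <= BUDGET and gain > best_gain:
            best_gain, best_set = gain, subset
```

Enumeration is exponential in the number of techniques, which is exactly why the paper formulates the full fifty-four-technique version as a mixed-integer program for a standard solver.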

  16. An improved market penetration model for wind energy technology forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Lund, P.D. [Helsinki Univ. of Technology, Espoo (Finland). Advanced Energy Systems


    An improved market penetration model with application to wind energy forecasting is presented. In the model, a technology diffusion model and a manufacturing learning curve are combined. Based on an 85% progress ratio found for European wind manufacturers and on wind market statistics, an additional wind power capacity of ca 4 GW is needed in Europe to reach a 30% price reduction. A full breakthrough to low-cost utility bulk power markets could be achieved at a 24 GW level. (author)
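    The experience-curve arithmetic behind these figures can be sketched as follows: with an 85% progress ratio, price falls by a factor of 0.85 per doubling of cumulative capacity, so the required growth in cumulative capacity for a 30% price cut follows directly (the absolute GW figure additionally depends on the starting capacity and market statistics, which are not reproduced here):

```python
import math

# Worked sketch of the learning-curve calculation: with progress
# ratio PR, price(C) = price(C0) * PR**log2(C / C0). Solving
# PR**d = target for the number of doublings d gives the cumulative
# capacity multiplier 2**d needed for a given price reduction.
PR = 0.85        # 85% progress ratio from the abstract
target = 0.70    # price falls to 70% of its start, i.e. a 30% reduction

doublings = math.log(target) / math.log(PR)   # doublings of cumulative capacity
capacity_ratio = 2.0 ** doublings             # multiplier on cumulative capacity
```

    Roughly 2.2 doublings, i.e. about a 4.6-fold growth of cumulative capacity, are needed; applied to the European base capacity of the study period this is consistent in magnitude with the quoted additional 4 GW.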

  17. An improved market penetration model for wind energy technology forecasting

    International Nuclear Information System (INIS)

    Lund, P.D.


    An improved market penetration model with application to wind energy forecasting is presented. In the model, a technology diffusion model and a manufacturing learning curve are combined. Based on an 85% progress ratio found for European wind manufacturers and on wind market statistics, an additional wind power capacity of ca 4 GW is needed in Europe to reach a 30% price reduction. A full breakthrough to low-cost utility bulk power markets could be achieved at a 24 GW level. (author)

  18. Dietary information improves cardiovascular disease risk prediction models. (United States)

    Baik, I; Cho, N H; Kim, S H; Shin, C


    Data are limited on cardiovascular disease (CVD) risk prediction models that include dietary predictors. Using known risk factors and dietary information, we constructed and evaluated CVD risk prediction models. Data for modeling were from population-based prospective cohort studies comprising 9026 men and women aged 40-69 years. At baseline, all were free of known CVD and cancer, and they were followed up for CVD incidence during an 8-year period. We used Cox proportional hazards regression analysis to construct a traditional risk factor model, an office-based model, and two diet-containing models, and we evaluated these models by calculating the Akaike information criterion (AIC), C-statistics, integrated discrimination improvement (IDI), net reclassification improvement (NRI) and a calibration statistic. We constructed diet-containing models with significant dietary predictors such as poultry, legume, carbonated soft drink or green tea consumption. Adding dietary predictors to the traditional model yielded a decrease in AIC (delta AIC=15), a 53% increase in relative IDI and an improvement in NRI (category-free NRI=0.14); the diet-containing models likewise showed improved IDI and NRI (category-free NRI=0.08, P<0.01) compared with the office-based model. The calibration plots for risk prediction demonstrated that the inclusion of dietary predictors contributes to better agreement in persons at high risk for CVD. C-statistics for the four models were acceptable and comparable. We suggest that dietary information may be useful in constructing CVD risk prediction models.
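    As a small illustration of how the reported delta AIC of 15 is commonly read (a standard interpretation convention, not a computation from the study's data), the relative likelihood that the higher-AIC model is the better one is exp(-delta/2):

```python
import math

# Interpreting a delta-AIC of 15 (traditional vs diet-containing
# model, from the abstract): the relative likelihood exp(-delta/2)
# of the higher-AIC model minimizing information loss.
delta_aic = 15.0
relative_likelihood = math.exp(-delta_aic / 2.0)
```

    A relative likelihood well below 0.001 is why a 15-point AIC drop counts as strong support for the diet-containing model.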

  19. Improving the realism of hydrologic model functioning through GRACE (United States)

    Rakovec, O.; Kumar, R.; Attinger, S.; Samaniego, L. E.


    Increased availability and quality of near real-time observations should improve understanding of the predictive skill of hydrological models. Recent studies have shown the limited capability of river discharge data alone to adequately constrain different components of distributed model parameterizations. In this study, the GRACE satellite-based total water storage (TWS) anomaly is used to complement the discharge data with the aim of improving the fidelity of the mesoscale hydrologic model (mHM) through multivariate parameter estimation. The study is conducted in 83 European basins covering a wide range of hydro-climatic regimes. The model parameterization complemented with the TWS anomalies leads to statistically significant improvements in (1) discharge simulations during low-flow periods, and (2) evapotranspiration estimates, which are evaluated against independent (FLUXNET) data. Overall, there is no significant deterioration in model performance for the discharge simulations when complemented by information from the TWS anomalies. However, considerable changes in the partitioning of precipitation into runoff components are noticed with the in-/exclusion of TWS during the parameter estimation. A cross-validation test carried out to assess the transferability and robustness of the calibrated parameters to other locations further confirms the benefit of the complementary TWS data. In particular, the evapotranspiration estimates show more robust performance when TWS data are incorporated during the parameter estimation, in comparison with the benchmark model constrained against discharge only. This study highlights the value of incorporating multiple data sources during parameter estimation to improve the overall realism of hydrologic models and their applications over large domains.
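    A multivariate calibration objective of the kind described here can be sketched as one term scoring simulated discharge and one scoring the simulated TWS anomaly, combined into a single value to minimize; the equal weighting and the NSE metric below are assumptions for illustration, not mHM's actual objective function:

```python
# Hedged sketch of a two-variable calibration cost: both discharge
# and total water storage anomalies contribute, so parameter sets
# that fit discharge but misrepresent storage are penalized.

def nse(sim, obs):
    """Nash-Sutcliffe efficiency (1 is a perfect fit)."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def multivariate_cost(sim_q, obs_q, sim_tws, obs_tws, w=0.5):
    """Lower is better; w balances discharge against TWS anomalies."""
    return (w * (1.0 - nse(sim_q, obs_q))
            + (1.0 - w) * (1.0 - nse(sim_tws, obs_tws)))

# Invented toy series: discharge (q) and TWS anomalies (t).
obs_q, sim_q = [1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.9]
obs_t, sim_t = [-1.0, 0.0, 1.0, 0.5], [-0.8, 0.1, 0.9, 0.6]
cost = multivariate_cost(sim_q, obs_q, sim_t, obs_t)
```

    With w between 0 and 1 the same machinery covers the discharge-only benchmark (w=1) and the complemented setup the study advocates.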


    Directory of Open Access Journals (Sweden)

    Zuzana Hajduova


    Full Text Available Purpose: All processes in the company play an important role in ensuring a functional integrated management system. We point out the importance of a systematic approach to the use of quantitative, and especially statistical, methods for modelling the costs of the improvement activities that are part of an integrated management system. The development of integrated management systems worldwide leads towards building systematic procedures for the implementation, maintenance and improvement of all systems according to the requirements of all the parties involved. Methodology: Statistical evaluation of the economic indicators of improvement costs, and the need for a systematic approach to their management in terms of integrated management systems, have also come to play a key role in the management of processes in the company Cu Drôt, a.s. The aim of this publication is to highlight the importance of proper implementation of statistical methods in the process of improvement cost management in an integrated management system under current market conditions, and to document the legitimacy of a systematic approach to monitoring and analysing indicators of improvement with the aim of efficient process management of the company. We provide a specific example of the implementation of appropriate statistical methods in the production of copper wire in the company Cu Drôt, a.s. This publication also aims to create a model for the estimation of integrated improvement costs, which, through the use of statistical methods in the company Cu Drôt, a.s., is used to support decision-making on improving efficiency. Findings: In the present publication, a method for modelling the improvement process in an integrated manner is proposed. It is a method in which the basic attributes of improvement in quality, safety and environment are considered and synergistically combined in the same improvement project. The work examines the use of sophisticated quantitative, especially

  1. Process correlation analysis model for process improvement identification. (United States)

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong


    Software process improvement aims at improving the development process of software systems. It is initiated by a process assessment identifying strengths and weaknesses, and improvement plans are developed based on the findings. In general, a process reference model (e.g., CMMI) is used as the base throughout the software process improvement effort. CMMI defines a set of process areas involved in software development and what is to be carried out in those areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort involved and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  2. Improvement of the design model for SMART fuel assembly

    International Nuclear Information System (INIS)

    Zee, Sung Kyun; Yim, Jeong Sik


    A study on the design improvement of the TEP, BEP and holddown spring of a fuel assembly for SMART was performed. The Cut Boundary Interpolation Method was applied to obtain more accurate stress and strain distributions from the results of the coarse-model calculation. The improved results were compared with those of the coarse model. The finer model predicted slightly higher stress and strain than the coarse model, which meant the results of the coarse model had not converged. Considering that the test results always showed much less stress than the FEM, and considering the location of the peak stress in the refined model, the pressure stress at the loading point seemed to contribute significantly to the stresses. Judging from the fact that the peak stress appeared only in a local area, the results of the refined model were considered a sufficiently conservative prediction of the stress levels. The slot of the guide thimble screw was ignored in order to determine how much the thickness of the flow plate can be reduced in case of thickness optimization, and the cut-off screw dent hole was included for the actual geometry. For the BEP, the leg and web were also included in the model, and the results with and without the leg alignment support were compared. Finally, the holddown spring, which is important for the in-reactor behavior of the FA, was modelled more realistically and improved to include the effects of friction between the leaves and the loading surface. Using this improved model, the spring characteristics were predicted closer to the test results. From the analysis of the spring characteristics, the local plastic area dominantly controlled the characteristics of the spring, which implied that the leaf design needed to be optimized to improve the plastic behavior of the leaf spring.

  3. A Mathematical Model to Improve the Performance of Logistics Network

    Directory of Open Access Journals (Sweden)

    Muhammad Izman Herdiansyah


    Full Text Available The role of logistics nowadays is expanding from just providing transportation and warehousing to offering totally integrated logistics. To remain competitive in the global market environment, business enterprises need to improve the performance of their logistics operations. The improvement will be achieved when we can provide a comprehensive analysis and optimize network performance. In this paper, a mixed integer linear model for optimizing logistics network performance is developed. It provides a single-product, multi-period, multi-facility model, as well as the multi-product concept. The problem is modeled as a network flow problem with the main objective of minimizing total logistics cost. The problem can be solved using a commercial linear programming package like CPLEX or LINDO. Even in small cases, the solver in Excel may also be used to solve such a model. Keywords: logistics network, integrated model, mathematical programming, network optimization

  4. Demonstrating the improvement of predictive maturity of a computational model

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M [Los Alamos National Laboratory; Unal, Cetin [Los Alamos National Laboratory; Atamturktur, Huriye S [CLEMSON UNIV.


    We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model in predicting multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as the basis for defining a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. The robustness of the PMI with respect to the selection of the coefficients needed in its definition is also studied.
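    A maturity index of the general kind motivated here can be sketched as a score that rises as discrepancy shrinks and coverage grows; the functional form below is invented purely for illustration, as the paper defines its own PMI:

```python
# Speculative sketch of a Predictive Maturity Index: maturity should
# rise as discrepancy (bias between measurements and predictions)
# shrinks and as coverage of the regime of applicability grows.
# This particular functional form is an assumption, not the paper's.

def pmi(discrepancy, coverage):
    """Maturity score: high coverage and low discrepancy push it toward 1."""
    assert 0.0 <= coverage <= 1.0 and discrepancy >= 0.0
    return coverage / (1.0 + discrepancy)

low_maturity = pmi(discrepancy=2.0, coverage=0.3)    # few tests, large bias
high_maturity = pmi(discrepancy=0.2, coverage=0.9)   # better physics, more tests
```

    The two evaluations mimic the paper's two levers: adding physical tests raises coverage, while implementing better physics lowers discrepancy, and either change moves the index upward.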

  5. Concurrent validity and clinical utility of the HCR-20V3 compared with the HCR-20 in forensic mental health nursing: similar tools but improved method. (United States)

    Bjørkly, Stål; Eidhammer, Gunnar; Selmer, Lars Erik


    The main scope of this small-scale investigation was to compare clinical application of the HCR-20V3 with that of its predecessor, the HCR-20. To explore concurrent validity, two experienced nurses assessed 20 forensic mental health service patients with both tools. Estimates of internal consistency for the HCR-20 and the HCR-20V3 were calculated by Cronbach's alpha at two levels of measurement: the H-, C-, and R-scales and the total sum scores. We found moderate (C-scale) to good (H- and R-scales and aggregate scores) estimates of internal consistency and significant differences between the two versions of the HCR. This finding indicates that the two versions reflect common underlying dimensions but that there still appear to be differences between V2 and V3 ratings for the same patients. A case from forensic mental health was used to illustrate similarities and differences in assessment results between the two HCR-20 versions. The case illustration depicts clinical use of the HCR-20V3 and the application of two structured nursing interventions pertaining to the risk management part of the tool. In our experience, Version 3 is superior to Version 2 concerning: (a) item clarity; (b) the distinction between the presence and relevance of risk factors; (c) the integration of risk formulation and risk scenario; and (d) the explicit demand to construct a risk management plan as part of the standard assessment procedure.
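    The internal-consistency estimate used above is Cronbach's alpha, which can be computed in a few lines; the item scores below are invented ratings, not the study's HCR-20 data, and the population-variance convention is one common choice:

```python
# Cronbach's alpha for a set of items rated over the same subjects:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
# The toy scores are illustrative, not data from the study.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same raters."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(i) for i in items)
    return (k / (k - 1)) * (1.0 - item_var / variance(totals))

# Three toy items rated for four patients.
items = [[2, 1, 3, 2], [2, 2, 3, 1], [1, 1, 2, 1]]
alpha = cronbach_alpha(items)
```

    Values near 1 indicate that the items move together, which is what "moderate to good internal consistency" refers to for the H-, C-, and R-scales.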

  6. Improving the realism of hydrologic model through multivariate parameter estimation (United States)

    Rakovec, Oldrich; Kumar, Rohini; Attinger, Sabine; Samaniego, Luis


    Increased availability and quality of near real-time observations should improve understanding of the predictive skill of hydrological models. Recent studies have shown the limited capability of river discharge data alone to adequately constrain different components of distributed model parameterizations. In this study, the GRACE satellite-based total water storage (TWS) anomaly is used to complement the discharge data with the aim of improving the fidelity of the mesoscale hydrologic model (mHM) through multivariate parameter estimation. The study is conducted in 83 European basins covering a wide range of hydro-climatic regimes. The model parameterization complemented with the TWS anomalies leads to statistically significant improvements in (1) discharge simulations during low-flow periods, and (2) evapotranspiration estimates, which are evaluated against independent (FLUXNET) data. Overall, there is no significant deterioration in model performance for the discharge simulations when complemented by information from the TWS anomalies. However, considerable changes in the partitioning of precipitation into runoff components are noticed with the in-/exclusion of TWS during the parameter estimation. A cross-validation test carried out to assess the transferability and robustness of the calibrated parameters to other locations further confirms the benefit of the complementary TWS data. In particular, the evapotranspiration estimates show more robust performance when TWS data are incorporated during the parameter estimation, in comparison with the benchmark model constrained against discharge only. This study highlights the value of incorporating multiple data sources during parameter estimation to improve the overall realism of hydrologic models and their applications over large domains. Rakovec, O., Kumar, R., Attinger, S. and Samaniego, L. (2016): Improving the realism of hydrologic model functioning through multivariate parameter estimation. Water Resour. Res., 52,

  7. Reranking candidate gene models with cross-species comparison for improved gene prediction

    Directory of Open Access Journals (Sweden)

    Pereira Fernando CN


    Full Text Available Abstract Background Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc.). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features. Results We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+. Conclusion Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models.
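    The reranking step can be sketched in a few lines; the additive score combination, the weight, and the toy scores below are illustrative assumptions, as the paper's own global scoring rules are richer:

```python
# Minimal sketch of post-processing K-best candidates: add a
# cross-species similarity bonus to each gene finder score, then
# re-sort. Scores and the weighting are invented for illustration.

def rerank(candidates, ortholog_similarity, weight=1.0):
    """candidates: {model_id: finder_score};
    ortholog_similarity: {model_id: similarity to a putative ortholog}."""
    rescored = {m: s + weight * ortholog_similarity.get(m, 0.0)
                for m, s in candidates.items()}
    return sorted(rescored, key=rescored.get, reverse=True)

# A lower-scoring candidate can win if its ortholog support is strong.
k_best = {"model_a": 10.0, "model_b": 9.5, "model_c": 8.0}
similarity = {"model_a": 0.1, "model_b": 1.2, "model_c": 0.4}
ranking = rerank(k_best, similarity)
```

    Here model_b overtakes model_a after the bonus, mirroring the abstract's point that originally lower-scoring models may be selected when they resemble probable orthologs.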

  8. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    Directory of Open Access Journals (Sweden)

    Shao Jie


    Full Text Available A model based on the improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with memory effects. The hidden-layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared errors (SSE), varying with the number of hidden neurons and the iteration step, are studied to determine the number of hidden-layer neurons. Simulation results for the half-bridge class-D power amplifier (CDPA) with a two-tone signal and broadband signals as input have shown that the proposed behavioral model can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance.

  9. Numerical analysis of modeling based on improved Elman neural network. (United States)

    Jie, Shao; Li, Wang; WeiSong, Zhao; YaQin, Zhong; Malekian, Reza


    A model based on the improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with memory effects. The hidden-layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared errors (SSE), varying with the number of hidden neurons and the iteration step, are studied to determine the number of hidden-layer neurons. Simulation results for the half-bridge class-D power amplifier (CDPA) with a two-tone signal and broadband signals as input have shown that the proposed behavioral model can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance.
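    The Chebyshev basis functions that replace the sigmoid activations can be generated by the standard three-term recurrence; the network weights and training procedure are omitted, so this is only the activation-side building block:

```python
# Chebyshev polynomials of the first kind, T0..Tn, via the recurrence
# T_{k+1}(x) = 2*x*T_k(x) - T_{k-1}(x). In the IENN sketch, a hidden
# neuron's output for input x would be a weighted sum of these values.

def chebyshev_basis(x, n):
    """Return [T0(x), ..., Tn(x)] for x in [-1, 1]."""
    t = [1.0, x]
    for _ in range(2, n + 1):
        t.append(2.0 * x * t[-1] - t[-2])
    return t[:n + 1]

basis = chebyshev_basis(0.5, 3)
```

    Because the polynomials are orthogonal on [-1, 1], inputs are typically scaled into that interval before the basis is evaluated.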

  10. Improved simulation of groundwater - surface water interaction in catchment models (United States)

    teklesadik, aklilu; van Griensven, Ann; Anibas, Christian; Huysmans, Marijke


    Groundwater storage can contribute significantly to stream flow, so a thorough understanding of the groundwater-surface water interaction is of prime importance in catchment modeling. The aim of this study is to improve the simulation of groundwater-surface water interaction in a catchment model of the upper Zenne River basin located in Belgium. To achieve this objective we used the "Groundwater-Surface water Flow" (GSFLOW) modeling software, which is an integration of the surface water modeling tool "Precipitation-Runoff Modeling System" (PRMS) and the groundwater modeling tool MODFLOW. For this case study, the PRMS model and MODFLOW model were built and calibrated independently. The PRMS upper Zenne River basin model is divided into 84 hydrological response units (HRUs) and is calibrated with flow data at the Tubize gauging station. The spatial discretization of the MODFLOW upper Zenne groundwater flow model consists of 100 m grid cells. Natural groundwater divides and the Brussels-Charleroi canal are used as boundary conditions for the MODFLOW model. The model is calibrated using piezometric data. The GSFLOW results were evaluated against a SWAT model application and field observations of groundwater-surface water interactions along a cross section of the Zenne River and riparian zone. The field observations confirm that there is no exchange of groundwater beyond the Brussels-Charleroi canal and that the interaction at the river bed is relatively low. The results show that there is a significant difference in the groundwater simulations when using GSFLOW versus SWAT. This indicates that the groundwater component representation in the SWAT model could be improved and that a more realistic implementation of the interactions between groundwater and surface water is advisable. This could be achieved by integrating SWAT and MODFLOW.

  11. Kefir drink causes a significant yet similar improvement in serum lipid profile, compared with low-fat milk, in a dairy-rich diet in overweight or obese premenopausal women: A randomized controlled trial. (United States)

    Fathi, Yasamin; Ghodrati, Naeimeh; Zibaeenezhad, Mohammad-Javad; Faghih, Shiva

    Controversy exists as to whether the lipid-lowering properties of kefir drink (a fermented probiotic dairy product) in animal models can be replicated in humans. To assess and compare the potential lipid-lowering effects of kefir drink with low-fat milk in a dairy-rich diet in overweight or obese premenopausal women. In this 8-week, single-center, multiarm, parallel-group, outpatient, randomized controlled trial, 75 eligible Iranian women aged 25 to 45 years were randomly allocated to kefir, milk, or control groups. Women in the control group received a weight-maintenance diet containing 2 servings/d of low-fat dairy products, whereas subjects in the milk and kefir groups received a similar diet containing 2 additional servings/d (a total of 4 servings/d) of dairy products from low-fat milk or kefir drink, respectively. At baseline and study end point, serum levels/ratios of total cholesterol (TC), low- and high-density lipoprotein cholesterol (LDLC and HDLC), triglyceride, non-HDLC, TC/HDLC, LDLC/HDLC, and triglyceride/LDLC were measured as outcome measures. After 8 weeks, subjects in the kefir group had significantly lower serum levels/ratios of lipoproteins than those in the control group (mean between-group differences were -10.4 mg/dL, -9.7 mg/dL, -11.5 mg/dL, -0.4, and -0.3 for TC, LDLC, non-HDLC, TC/HDLC, and LDLC/HDLC, respectively), whereas no significant differences were observed between the kefir and milk groups. Kefir drink causes a significant yet similar improvement in serum lipid profile, compared with low-fat milk, in a dairy-rich diet in overweight or obese premenopausal women. Copyright © 2016 National Lipid Association. Published by Elsevier Inc. All rights reserved.

  12. Ab initio structural modeling of and experimental validation for Chlamydia trachomatis protein CT296 reveal structural similarity to Fe(II) 2-oxoglutarate-dependent enzymes

    Energy Technology Data Exchange (ETDEWEB)

    Kemege, Kyle E.; Hickey, John M.; Lovell, Scott; Battaile, Kevin P.; Zhang, Yang; Hefty, P. Scott (Michigan); (Kansas); (HWMRI)


    Chlamydia trachomatis is a medically important pathogen that encodes a relatively high percentage of proteins with unknown function. The three-dimensional structure of a protein can be very informative regarding the protein's functional characteristics; however, determining protein structures experimentally can be very challenging. Computational methods that model protein structures with sufficient accuracy to facilitate functional studies have had notable successes. To evaluate the accuracy and potential impact of computational protein structure modeling of hypothetical proteins encoded by Chlamydia, a successful computational method termed I-TASSER was utilized to model the three-dimensional structure of a hypothetical protein encoded by open reading frame (ORF) CT296. CT296 has been reported to exhibit functional properties of a divalent cation transcription repressor (DcrA), with similarity to the Escherichia coli iron-responsive transcriptional repressor, Fur. Unexpectedly, the I-TASSER model of CT296 exhibited no structural similarity to any DNA-interacting proteins or motifs. To validate the I-TASSER-generated model, the structure of CT296 was solved experimentally using X-ray crystallography. Impressively, the ab initio I-TASSER-generated model closely matched (2.72-Å Cα root mean square deviation [RMSD]) the high-resolution (1.8-Å) crystal structure of CT296. Modeled and experimentally determined structures of CT296 share structural characteristics of non-heme Fe(II) 2-oxoglutarate-dependent enzymes, although key enzymatic residues are not conserved, suggesting a unique biochemical process is likely associated with CT296 function. Additionally, functional analyses did not support prior reports that CT296 has properties shared with divalent cation repressors such as Fur.

  13. Improved Generalized Force Model considering the Comfortable Driving Behavior

    Directory of Open Access Journals (Sweden)

    De-Jie Xu


    Full Text Available This paper presents an improved generalized force model (IGFM) that considers the driver's comfortable driving behavior. Through theoretical analysis, we propose calculation methods for the comfortable driving distance and velocity. The stability condition of the model is then obtained by linear stability analysis. The problem of the unrealistic acceleration of the leading car that exists in previous models is solved. Furthermore, the simulation results show that the IGFM can predict the correct delay time of car motion and the kinematic wave speed at jam density, and it can exactly describe the driver's behavior in an urgent case in which no collision occurs. The dynamic properties of the IGFM also indicate that stability is improved compared to the generalized force model.

  14. Does segmentation always improve model performance in credit scoring?


    Bijak, Katarzyna; Thomas, Lyn C.


    Credit scoring allows for the credit risk assessment of bank customers. A single scoring model (scorecard) can be developed for the entire customer population, e.g. using logistic regression. However, it is often expected that segmentation, i.e. dividing the population into several groups and building separate scorecards for them, will improve the model performance. The most common statistical methods for segmentation are the two-step approaches, where logistic regression follows Classificati...

  15. Modelling Niche Differentiation of Co-Existing, Elusive and Morphologically Similar Species: A Case Study of Four Macaque Species in Nakai-Nam Theun National Protected Area, Laos

    Directory of Open Access Journals (Sweden)

    Camille N. Z. Coudrat


    Full Text Available Species misidentification often occurs when dealing with co-existing and morphologically similar species such as macaques, making the study of their ecology challenging. To overcome this issue, we use reliable occurrence data from camera-trap images and transect survey data to model their respective ecological niches and potential local distributions in Nakai-Nam Theun National Protected Area (NNT NPA), central-eastern Laos. We investigate niche differentiation of morphologically similar species using four sympatric macaque species in NNT NPA as our model species: rhesus Macaca mulatta (Taxonomic Serial Number, TSN 180099), Northern pig-tailed M. leonina (TSN not listed), Assamese M. assamensis (TSN 573018) and stump-tailed M. arctoides (TSN 573017). We examine the implications for their conservation. We obtained occurrence data for the macaque species from systematic 2006–2011 camera-trapping surveys and 2011–2012 transect surveys and modelled their niches and potential distributions with the MaxEnt software using 25 environmental and topographic variables. The respective suitable habitat predicted for each species reveals niche segregation between the four species, with a gradual geographical distribution following an environmental gradient within the study area. Camera trapping positioned at many locations can increase records of elusive species with a relatively reduced and more systematic sampling effort and can provide reliable species occurrence data. These data can be used for environmental niche modelling to study niche segregation of morphologically similar species in areas where their distribution remains uncertain. Examining unresolved species' niches and potential distributions can have crucial implications for future research and species management and conservation, even in the most remote regions and for the least-known species.

  16. An Improved QTM Subdivision Model with Approximate Equal-area

    Directory of Open Access Journals (Sweden)

    ZHAO Xuesheng


    Full Text Available To overcome the defect of large area deformation in the traditional QTM subdivision model, an improved subdivision model is proposed, based on the “parallel method” and the idea of equal-area subdivision with changed longitude-latitude. By adjusting the position of the parallel, this model ensures that the combined grid area between two adjacent parallels remains unchanged, so as to control the area variation and the accumulation of area variation of the QTM grid. The experimental results show that this improved model not only retains some advantages of the traditional QTM model (such as simple calculation and a clear correspondence with the longitude/latitude grid), but also has the following advantages: ① the improved model has better convergence than the traditional one: the ratio of maximum to minimum area finally converges to 1.38, far less than the 1.73 of the “parallel method”; ② the grid units in middle- and low-latitude regions have small area variations and successive distributions, while, as the subdivision level increases, the grid units with large variations gradually concentrate toward the poles; ③ the area variation of a grid unit does not accumulate with increasing subdivision level.

  17. Plant water potential improves prediction of empirical stomatal models.

    Directory of Open Access Journals (Sweden)

    William R L Anderegg

    Full Text Available Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions, and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
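
The modelling idea in this record, an empirical stomatal conductance model augmented with sensitivity to declining leaf water potential, can be sketched as follows. The Medlyn-type conductance form and the sigmoidal limitation function below are assumptions for illustration, not the specific models tested by the authors:

```python
import math

def gs_medlyn(A, Ca, D, g0=0.01, g1=4.0):
    """Empirical stomatal conductance (mol m-2 s-1), Medlyn-type form:
    gs = g0 + 1.6 * (1 + g1/sqrt(D)) * A/Ca, with A in umol m-2 s-1,
    Ca in umol mol-1 and vapour pressure deficit D in kPa."""
    return g0 + 1.6 * (1.0 + g1 / math.sqrt(D)) * A / Ca

def water_limitation(psi_leaf, psi_50=-2.0, slope=3.0):
    """Sigmoid from ~1 (well watered) to ~0 as leaf water potential (MPa)
    declines past psi_50; parameter values are illustrative only."""
    return 1.0 / (1.0 + math.exp(slope * (psi_50 - psi_leaf)))

# Same atmospheric conditions, wet vs. severely stressed leaf:
gs_wet = gs_medlyn(A=12.0, Ca=400.0, D=1.0) * water_limitation(0.0)
gs_dry = gs_medlyn(A=12.0, Ca=400.0, D=1.0) * water_limitation(-4.0)
```

Without the limitation term the model would predict the same conductance in both cases, which is the over-prediction bias during drought that the record describes.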

  18. Improvement of a near wake model for trailing vorticity

    International Nuclear Information System (INIS)

    Pirrung, G R; Hansen, M H; Madsen, H A


    A near wake model, originally proposed by Beddoes, is further developed. The purpose of the model is to account for the radially dependent time constants of the fast aerodynamic response and to provide a tip loss correction. It is based on lifting line theory and models the downwash due to roughly the first 90 degrees of rotation. This restriction of the model to the near wake allows for using a computationally efficient indicial function algorithm. The aim of this study is to improve the accuracy of the downwash close to the root and tip of the blade and to decrease the sensitivity of the model to temporal discretization, both regarding numerical stability and quality of the results. The modified near wake model is coupled to an aerodynamics model, which consists of a blade element momentum model with dynamic inflow for the far wake and a 2D shed vorticity model that simulates the unsteady buildup of both lift and circulation in the attached flow region. The near wake model is validated against the test case of a finite wing with constant elliptical bound circulation. An unsteady simulation of the NREL 5 MW rotor shows the functionality of the coupled model.

  19. Guiding and Modelling Quality Improvement in Higher Education Institutions (United States)

    Little, Daniel


    The article considers the process of creating quality improvement in higher education institutions from the point of view of current organisational theory and social-science modelling techniques. The author considers the higher education institution as a functioning complex of rules, norms and other organisational features and reviews the social…

  20. Promoting Continuous Quality Improvement in Online Teaching: The META Model (United States)

    Dittmar, Eileen; McCracken, Holly


    Experienced e-learning faculty members share strategies for implementing a comprehensive postsecondary faculty development program essential to continuous improvement of instructional skills. The high-impact META Model (centered around Mentoring, Engagement, Technology, and Assessment) promotes information sharing and content creation, and fosters…

  1. The Continuous Improvement Model: A K-12 Literacy Focus (United States)

    Brown, Jennifer V.


    The purpose of the study was to determine if the eight steps of the Continuous Improvement Model (CIM) provided a framework to raise achievement and to focus educators in identifying high-yield literacy strategies. This study sought to determine if an examination of the assessment data in reading revealed differences among schools that fully,…

  2. Improving Project Management Using Formal Models and Architectures (United States)

    Kahn, Theodore; Sturken, Ian


    This talk discusses the advantages formal modeling and architecture brings to project management. These emerging technologies have both great potential and challenges for improving information available for decision-making. The presentation covers standards, tools and cultural issues needing consideration, and includes lessons learned from projects the presenters have worked on.

  3. Improving Perovskite Solar Cells: Insights From a Validated Device Model

    NARCIS (Netherlands)

    Sherkar, Tejas S.; Momblona, Cristina; Gil-Escrig, Lidon; Bolink, Henk J.; Koster, L. Jan Anton


    To improve the efficiency of existing perovskite solar cells (PSCs), a detailed understanding of the underlying device physics during their operation is essential. Here, a device model has been developed and validated that describes the operation of PSCs and quantitatively explains the role of

  4. Improved mathematical models for particle-size distribution data ...

    African Journals Online (AJOL)

    Prior studies have suggested that particle-size distribution data of soils is central and helpful in this regard. This study proposes two improved mathematical models to describe and represent the varied particle-size distribution (PSD) data for tropically weathered residual (TWR) soils. The theoretical analysis and the ...

  5. Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models (United States)

    Marquette, Michele L.; Sognier, Marguerite A.


    An improved method for culturing immature muscle cells (myoblasts) into a mature skeletal muscle overcomes some of the notable limitations of prior culture methods. The development of the method is a major advance in tissue engineering in that, for the first time, a cell-based model spontaneously fuses and differentiates into masses of highly aligned, contracting myotubes. This method enables (1) the construction of improved two-dimensional (monolayer) skeletal muscle test beds; (2) development of contracting three-dimensional tissue models; and (3) improved transplantable tissues for biomedical and regenerative medicine applications. With adaptation, this method also offers potential application for production of other tissue types (i.e., bone and cardiac) from corresponding precursor cells.

  6. A Novel Real-Time Self-similar Traffic Detector/Filter to Improve the Reliability of a TCP Based End-to-End Client/Server Interaction Path for Shorter Roundtrip Time (United States)

    Lin, Wilfred W. K.; Wong, Allan K. Y.; Wu, Richard S. L.; Dillon, Tharam S.

    The self-similarity (S²) filter is proposed for real-time applications. It can be used independently or as an extra component of the enhanced RTPD (real-time traffic pattern detector), or E-RTPD. The basis of the S² filter is the "asymptotically second-order self-similarity" concept (alternatively called statistical 2nd-OSS or S2nd-OSS) for stationary time series. The focus is IAT (inter-arrival time) traffic. The filter is original because similar approaches for detecting self-similar traffic patterns on the fly are not found in the literature. Different experiments confirm that, with help from the S² filter, the FLC (Fuzzy Logic Controller) dynamic buffer size tuner controls more accurately. As a result, the FLC improves the reliability of the client/server interaction path, leading to shorter roundtrip times (RTT).
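
Detecting second-order self-similarity in inter-arrival-time traffic usually comes down to estimating the Hurst parameter H (H > 0.5 indicates self-similarity, H ≈ 0.5 indicates short-range-dependent traffic). The sketch below uses the aggregated-variance method, one standard estimator, not necessarily the filter's internal algorithm:

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate H from the scaling of block-mean variance: for a
    self-similar series, Var of the m-aggregated series ~ m**(2H - 2)."""
    x = np.asarray(x, dtype=float)
    logs_m, logs_v = [], []
    for m in block_sizes:
        n = len(x) // m
        means = x[: n * m].reshape(n, m).mean(axis=1)  # m-aggregated series
        logs_m.append(np.log(m))
        logs_v.append(np.log(means.var()))
    slope = np.polyfit(logs_m, logs_v, 1)[0]  # fit log Var vs. log m
    return 1.0 + slope / 2.0

# Independent exponential inter-arrival times (Poisson traffic) are not
# self-similar, so the estimate should come out near H = 0.5.
rng = np.random.default_rng(1)
h_iid = hurst_aggvar(rng.exponential(size=20000))
```

A real-time detector would apply such an estimate over a sliding window and flag traffic whose H drifts well above 0.5.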

  7. Development of Improved Mechanistic Deterioration Models for Flexible Pavements

    DEFF Research Database (Denmark)

    Ullidtz, Per; Ertman, Hans Larsen


    The paper describes a pilot study in Denmark with the main objective of developing improved mechanistic deterioration models for flexible pavements, based on an accelerated full-scale test on an instrumented pavement in the Danish Road Testing Machine. The study was the first in the "International… Pavement Subgrade Performance Study" sponsored by the Federal Highway Administration (FHWA), USA. The paper describes in detail the data analysis and the resulting models for rutting and roughness, and a model for the plastic strain in the subgrade. The reader will get an understanding of the work needed…

  8. On improving the communication between models and data. (United States)

    Dietze, Michael C; Lebauer, David S; Kooper, Rob


    The potential for model-data synthesis is growing in importance as we enter an era of 'big data', greater connectivity and faster computation. Realizing this potential requires that the research community broaden its perspective about how and why they interact with models. Models can be viewed as scaffolds that allow data at different scales to inform each other through our understanding of underlying processes. Perceptions of relevance, accessibility and informatics are presented as the primary barriers to broader adoption of models by the community, while an inability to fully utilize the breadth of expertise and data from the community is a primary barrier to model improvement. Overall, we promote a community-based paradigm to model-data synthesis and highlight some of the tools and techniques that facilitate this approach. Scientific workflows address critical informatics issues in transparency, repeatability and automation, while intuitive, flexible web-based interfaces make running and visualizing models more accessible. Bayesian statistics provides powerful tools for assimilating a diversity of data types and for the analysis of uncertainty. Uncertainty analyses enable new measurements to target those processes most limiting our predictive ability. Moving forward, tools for information management and data assimilation need to be improved and made more accessible. © 2013 John Wiley & Sons Ltd.

  9. Improving hydrological simulations by incorporating GRACE data for model calibration (United States)

    Bai, Peng; Liu, Xiaomang; Liu, Changming


    Hydrological model parameters are typically calibrated by observed streamflow data. This calibration strategy is questioned when the simulated hydrological variables of interest are not limited to streamflow. Well-performed streamflow simulations do not guarantee the reliable reproduction of other hydrological variables. One of the reasons is that hydrological model parameters are not reasonably identified. The Gravity Recovery and Climate Experiment (GRACE)-derived total water storage change (TWSC) data provide an opportunity to constrain hydrological model parameterizations in combination with streamflow observations. In this study, a multi-objective calibration scheme based on GRACE-derived TWSC and streamflow observations was compared with the traditional single-objective calibration scheme based on only streamflow simulations. Two hydrological models were employed on 22 catchments in China with different climatic conditions. The model evaluations were performed using observed streamflows, GRACE-derived TWSC, and actual evapotranspiration (ET) estimates from flux towers and from the water balance approach. Results showed that the multi-objective calibration scheme provided more reliable TWSC and ET simulations without significant deterioration in the accuracy of streamflow simulations than the single-objective calibration. The improvement in TWSC and ET simulations was more significant in relatively dry catchments than in relatively wet catchments. In addition, hydrological models calibrated using GRACE-derived TWSC data alone cannot obtain accurate runoff simulations in ungauged catchments. This study highlights the importance of including additional constraints in addition to streamflow observations to improve performances of hydrological models.
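
The multi-objective idea in this record can be illustrated with a combined calibration score that weighs streamflow skill against GRACE-derived TWSC skill. The equal weighting and the use of Nash-Sutcliffe efficiency for both terms are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, values below 0 mean
    the simulation is worse than simply predicting the observed mean."""
    sim = np.asarray(sim, float)
    obs = np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def combined_objective(q_sim, q_obs, twsc_sim, twsc_obs, w=0.5):
    """Multi-objective calibration score: weighted sum of streamflow NSE
    and total-water-storage-change NSE (w = 0.5 is an assumed weight)."""
    return w * nse(q_sim, q_obs) + (1.0 - w) * nse(twsc_sim, twsc_obs)

perfect = combined_objective([1, 2, 3], [1, 2, 3], [4, 5, 6], [4, 5, 6])
degraded = combined_objective([1, 2, 3], [1, 2, 3], [4, 4, 6], [4, 5, 6])
```

A calibration algorithm maximising this score cannot trade away all TWSC skill for marginal streamflow gains, which is the constraint on parameter identification the record describes.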

  10. Function Modelling Of The Market And Assessing The Degree Of Similarity Between Real Properties - Dependent Or Independent Procedures In The Process Of Office Property Valuation

    Directory of Open Access Journals (Sweden)

    Barańska Anna


    Full Text Available Referring to the two-stage algorithm for real estate valuation developed and presented in previous publications (e.g. Barańska 2011), this article addresses the problem of the relationship between the two stages of the algorithm. An essential part of the first stage is the multi-dimensional function modelling of the real estate market. As a result of selecting the model best fitted to the market data, in which the dependent variable is always the price of a real property, a set of market attributes is obtained which this model considers to be price-determining. In the second stage, from the collection of real estate which served as a database in the process of estimating the model parameters, the objects most similar to the one subject to valuation are selected and form the basis for predicting the final value of the property being valued. Assessing the degree of similarity between real properties can be carried out based on the full spectrum of real estate attributes that potentially affect their value and about which it is possible to gather information, or only on the basis of those attributes which were considered price-determining in the function modelling. It can also be performed by various methods. This article examines the effect of the various approaches on the final value of the property obtained using the two-stage prediction. In order to fulfill the study aim as precisely as possible, the results of each calculation step of the algorithm have been investigated in detail. Each of them points to the independence of the two procedures.

  11. Improvements on Semi-Classical Distorted-Wave model

    Energy Technology Data Exchange (ETDEWEB)

    Sun Weili; Watanabe, Y.; Kuwata, R. [Kyushu Univ., Fukuoka (Japan); Kohno, M.; Ogata, K.; Kawai, M.


    A method of improving the Semi-Classical Distorted Wave (SCDW) model in terms of the Wigner transform of the one-body density matrix is presented. The finite size effect of atomic nuclei can be taken into account by using single-particle wave functions for a harmonic oscillator or Woods-Saxon potential, instead of those based on the local Fermi-gas model which were incorporated into the previous SCDW model. We carried out a preliminary SCDW calculation of the 160 MeV (p,p`x) reaction on {sup 90}Zr with the Wigner transform of harmonic oscillator wave functions. It is shown that the calculated angular distributions increase markedly at backward angles compared with the previous ones, and the agreement with the experimental data is improved. (author)

  12. Improved dual sided doped memristor: modelling and applications

    Directory of Open Access Journals (Sweden)

    Anup Shrivastava


    Full Text Available The memristor, a novel and emerging electronic device with a vast range of applications, suffers from poor frequency response and saturation length. In this paper, the authors present a novel and innovative device structure for the memristor with two active layers, and its non-linear ionic drift model, for an improved frequency response and saturation length. The authors investigated and compared the I–V characteristics of the proposed model with those of conventional memristors and found better results in each case (different window functions) for the proposed dual-sided doped memristor. For circuit-level simulation, they developed a SPICE model of the proposed memristor and designed some logic gates based on hybrid complementary metal oxide semiconductor memristive logic (memristor-ratioed logic). The proposed memristor yields improved results in terms of noise margin, delay time and dynamic hazards compared with those of conventional memristors (single active layer memristors).
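
For context, the baseline that dual-layer proposals extend is the standard single-active-layer nonlinear ionic drift memristor with a window function. A minimal sketch with the Joglekar window is below; the parameter values are illustrative defaults, not the authors' device:

```python
def simulate_memristor(voltages, dt=1e-6, D=10e-9, Ron=100.0, Roff=16e3,
                       mu_v=1e-14, p=2, x0=0.5):
    """Integrate the HP-style memristor state equation for a sampled voltage
    waveform; returns the current waveform and the final state x in [0, 1]."""
    x = x0
    currents = []
    k = mu_v * Ron / D ** 2  # ionic drift coefficient
    for v in voltages:
        R = Ron * x + Roff * (1.0 - x)              # state-weighted resistance
        i = v / R
        window = 1.0 - (2.0 * x - 1.0) ** (2 * p)   # Joglekar window function
        x = min(max(x + dt * k * i * window, 0.0), 1.0)
        currents.append(i)
    return currents, x

# A sustained positive bias drives the doped region wider: the state grows,
# resistance falls, and the current rises over the pulse.
currents, x_final = simulate_memristor([1.0] * 1000)
```

The window function is what produces the nonlinear drift and saturation behaviour near the device boundaries that the record discusses.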

  13. An Improved Nonlinear Five-Point Model for Photovoltaic Modules

    Directory of Open Access Journals (Sweden)

    Sakaros Bogning Dongue


    Full Text Available This paper presents an improved nonlinear five-point model capable of analytically describing the electrical behaviors of a photovoltaic module for each generic operating condition of temperature and solar irradiance. The models used to replicate the electrical behaviors of operating PV modules are usually based on simplifying assumptions which provide a convenient mathematical model for use in conventional simulation tools. Unfortunately, these assumptions cause some inaccuracies, and hence unrealistic economic returns are predicted. As an alternative, we used the advantages of a nonlinear analytical five-point model to take into account the non-ideal diode effects and the nonlinear effects, generally ignored, on which PV module operation depends. To verify the capability of our method to fit PV panel characteristics, the procedure was tested on three different panels. Results were compared with the data issued by manufacturers and with the results obtained using the five-parameter model proposed by other authors.
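
The conventional five-parameter single-diode model that such improved formulations compete with is implicit in the current, so it is usually solved iteratively. A hedged sketch with illustrative parameter values (not taken from any of the three tested panels):

```python
import math

def pv_current(V, Iph=8.0, I0=1e-6, Rs=0.3, Rsh=300.0, a=2.0):
    """Solve the single-diode equation
        I = Iph - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh
    for the module current I at voltage V, by damped fixed-point iteration.
    Iph: photocurrent, I0: saturation current, Rs/Rsh: series/shunt
    resistance, a: modified ideality factor (all values illustrative)."""
    I = Iph
    for _ in range(500):
        rhs = Iph - I0 * math.expm1((V + I * Rs) / a) - (V + I * Rs) / Rsh
        I = 0.5 * I + 0.5 * rhs  # damping keeps the iteration stable near Voc
    return I

i_sc = pv_current(0.0)   # short-circuit current, slightly below Iph
i_mid = pv_current(25.0) # current near the knee of the I-V curve
```

The "nonlinear effects generally ignored" that the record mentions are simplifications of exactly these diode and resistance terms; the improved five-point model keeps them in the analytical solution.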

  14. A class of vertex-edge-growth small-world network models having scale-free, self-similar and hierarchical characters (United States)

    Ma, Fei; Su, Jing; Hao, Yongxing; Yao, Bing; Yan, Guanghui


    The problem of uncovering the internal operating function of network models is intriguing, in demand and attractive in the study of complex networks. In the past two decades, a great number of artificial models have been built to try to address this task. Based on their different growth modes, these previous models can be divided into two categories: one type, possessing preferential attachment, follows a power law P(k) ∼ k^(-γ), 2 < γ < 3; the other has an exponential-scaling feature, P(k) ∼ α^(-k). However, no models containing both of these growth modes have been presented, and even the study of the interconnection between these two growth manners in the same model is lacking. Hence, in this paper, we construct a class of planar and self-similar graphs motivated by a new attachment mode, the vertex-edge-growth network operation, more precisely, the coupling of the two. We report that this model is sparse, small-world and hierarchical. Not only does our model show a scale-free feature, but its degree parameter γ (≈ 3.242) also lies outside the typical range. We suggest that the coexistence of multiple vertex growth modes has a prominent effect on the power-law parameter γ, and that preferential attachment plays a dominant role in the development of networks over time. At the end of this paper, we obtain an exact analytical expression for the total number of spanning trees of the models and also compute the spanning-tree entropy, which we compare with those of their corresponding component elements.
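
Given a degree sequence, a power-law exponent γ like the one above can be estimated by maximum likelihood. The sketch below uses the continuous-approximation estimator γ̂ = 1 + n / Σ ln(k_i / k_min) on synthetic samples; it is a generic illustration, not the method used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
gamma_true, k_min, n = 3.0, 1.0, 50000

# Inverse-transform sampling from a continuous power law P(k) ~ k**(-gamma):
# k = k_min * u**(-1/(gamma - 1)) for uniform u in (0, 1).
u = rng.random(n)
k = k_min * u ** (-1.0 / (gamma_true - 1.0))

# Maximum-likelihood (Hill-type) estimate of the exponent.
gamma_hat = 1.0 + n / np.log(k / k_min).sum()
```

On an empirical degree sequence one would first choose k_min (for example by the Kolmogorov-Smirnov minimisation of Clauset et al.) before applying the same formula.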

  15. Estimating Evapotranspiration from an Improved Two-Source Energy Balance Model Using ASTER Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Qifeng Zhuang


    Full Text Available Reliably estimating the turbulent fluxes of latent and sensible heat at the Earth’s surface by remote sensing is important for research on the terrestrial hydrological cycle. This paper presents a practical approach for mapping surface energy fluxes using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images with an improved two-source energy balance (TSEB) model. The original TSEB approach may overestimate latent heat flux under vegetative stress conditions, as has also been reported in recent research. We replaced the Priestley-Taylor equation used in the original TSEB model with one that uses plant moisture and temperature constraints based on the PT-JPL model to obtain a more accurate canopy latent heat flux for the model solution. The ASTER data and field observations employed in this study are from corn fields in arid regions of the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) area, China. The results were validated against measurements from eddy covariance (EC) systems, and the surface energy flux estimates of the improved TSEB model are close to the ground truth. A comparison of the results from the original and improved TSEB models indicates that the improved method more accurately estimates the sensible and latent heat fluxes, generating more precise daily evapotranspiration (ET) estimates under vegetative stress conditions.
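
The modified canopy latent-heat term described above, a Priestley-Taylor core scaled by plant moisture and temperature constraints in the spirit of PT-JPL, can be sketched as follows. The constraint forms and parameter values are illustrative assumptions, not the paper's exact formulation:

```python
def canopy_latent_heat(Rn_canopy, delta, gamma=0.066, alpha_pt=1.26,
                       f_moisture=1.0, f_temperature=1.0):
    """Canopy latent heat flux (W m-2):
        LE_c = alpha_PT * f_m * f_t * [delta / (delta + gamma)] * Rn_canopy,
    where delta is the slope of the saturation vapour pressure curve and
    gamma the psychrometric constant (both in kPa/degC). The multipliers
    f_moisture and f_temperature (each in [0, 1]) down-scale transpiration
    under stress, PT-JPL style."""
    return (alpha_pt * f_moisture * f_temperature
            * delta / (delta + gamma) * Rn_canopy)

# Same canopy net radiation, unstressed vs. moisture-stressed vegetation:
le_unstressed = canopy_latent_heat(400.0, delta=0.145)
le_stressed = canopy_latent_heat(400.0, delta=0.145, f_moisture=0.4)
```

With f_moisture = f_temperature = 1 the expression reduces to the original Priestley-Taylor term, which is why the unmodified TSEB can overestimate latent heat under stress.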

  16. Educators and students prefer traditional clinical education to a peer-assisted learning model, despite similar student performance outcomes: a randomised trial

    Directory of Open Access Journals (Sweden)

    Samantha Sevenhuysen


    Full Text Available Question: What is the efficacy and acceptability of a peer-assisted learning model compared with a traditional model for paired students in physiotherapy clinical education? Design: Prospective, assessor-blinded, randomised crossover trial. Participants: Twenty-four physiotherapy students in the third year of a 4-year undergraduate degree. Intervention: Participants each completed 5 weeks of clinical placement, utilising a peer-assisted learning model (a standardised series of learning activities undertaken by student pairs and educators to facilitate peer interaction using guided strategies) and a traditional model (usual clinical supervision and learning activities led by clinical educators supervising pairs of students). Outcome measures: The primary outcome measure was student performance, rated on the Assessment of Physiotherapy Practice by a blinded assessor, the supervising clinical educator and by the student in self-assessment. Secondary outcome measures were satisfaction with the teaching and learning experience measured via survey, and statistics on services delivered. Results: There were no significant between-group differences in Assessment of Physiotherapy Practice scores as rated by the blinded assessor (p = 0.43), the supervising clinical educator (p = 0.94) or the students (p = 0.99). In peer-assisted learning, clinical educators had an extra 6 minutes/day available for non-student-related quality activities (95% CI 1 to 10) and students received an additional 0.33 entries/day of written feedback from their educator (95% CI 0.06 to 0.61). Clinical educator satisfaction and student satisfaction were higher with the traditional model. Conclusion: The peer-assisted learning model trialled in the present study produced similar student performance outcomes when compared with a traditional approach. Peer-assisted learning provided some benefits to educator workload and student feedback, but both educators and students were more satisfied with the traditional model.

  17. Educators and students prefer traditional clinical education to a peer-assisted learning model, despite similar student performance outcomes: a randomised trial. (United States)

    Sevenhuysen, Samantha; Skinner, Elizabeth H; Farlie, Melanie K; Raitman, Lyn; Nickson, Wendy; Keating, Jennifer L; Maloney, Stephen; Molloy, Elizabeth; Haines, Terry P


    What is the efficacy and acceptability of a peer-assisted learning model compared with a traditional model for paired students in physiotherapy clinical education? Prospective, assessor-blinded, randomised crossover trial. Twenty-four physiotherapy students in the third year of a 4-year undergraduate degree. Participants each completed 5 weeks of clinical placement, utilising a peer-assisted learning model (a standardised series of learning activities undertaken by student pairs and educators to facilitate peer interaction using guided strategies) and a traditional model (usual clinical supervision and learning activities led by clinical educators supervising pairs of students). The primary outcome measure was student performance, rated on the Assessment of Physiotherapy Practice by a blinded assessor, the supervising clinical educator and by the student in self-assessment. Secondary outcome measures were satisfaction with the teaching and learning experience measured via survey, and statistics on services delivered. There were no significant between-group differences in Assessment of Physiotherapy Practice scores as rated by the blinded assessor (p=0.43), the supervising clinical educator (p=0.94) or the students (p=0.99). In peer-assisted learning, clinical educators had an extra 6 minutes/day available for non-student-related quality activities (95% CI 1 to 10) and students received an additional 0.33 entries/day of written feedback from their educator (95% CI 0.06 to 0.61). Clinical educator satisfaction and student satisfaction were higher with the traditional model. The peer-assisted learning model trialled in the present study produced similar student performance outcomes when compared with a traditional approach. Peer-assisted learning provided some benefits to educator workload and student feedback, but both educators and students were more satisfied with the traditional model. ACTRN12610000859088. 

  18. Studies of a general flat space/boson star transition model in a box through a language similar to holographic superconductors (United States)

    Peng, Yan


    We study a general flat space/boson star transition model in a quasi-local ensemble through approaches familiar from holographic superconductor theories. We find a parameter ψ₂ which proves useful in disclosing properties of phase transitions. In this work, we explore the effects of the scalar mass, scalar charge and Stückelberg mechanism on the critical phase transition points and the order of transitions, mainly from the behaviour of the parameter ψ₂. We note that properties of transitions in quasi-local gravity are strikingly similar to those in holographic superconductor models. We also obtain an analytical relation ψ₂ ∝ (μ − μc)^(1/2), which also holds for the condensed scalar operator in the holographic insulator/superconductor system, in accordance with mean field theories.

  19. An improved steam generator model for the SASSYS code

    International Nuclear Information System (INIS)

    Pizzica, P.A.


    A new steam generator model has been developed for the SASSYS computer code, which analyzes accident conditions in a liquid-metal-cooled fast reactor. It has been incorporated into the new SASSYS balance-of-plant model, but it can also function on a stand-alone basis. The steam generator can be used in a once-through mode, or a variant of the model can be used as a separate evaporator and superheater with a recirculation loop. The new model provides an exact steady-state solution as well as the transient calculation. There was a need for a faster and more flexible model than the old steam generator model. The new model provides more detail with its multi-node treatment, as opposed to the previous model's one-node-per-region approach. Numerical instability problems, which were the result of cell-centered spatial differencing, fully explicit time differencing, and the moving-boundary treatment of the boiling crisis point in the boiling region, have been reduced. This leads to an increase in speed, as larger time steps can now be taken. The new model is an improvement in many respects. 2 refs., 3 figs

  20. An improved active contour model for glacial lake extraction (United States)

    Zhao, H.; Chen, F.; Zhang, M.


    The active contour model is a widely used method in visual tracking and image segmentation. Driven by an objective function, the initial curve defined in the active contour model evolves to a stable condition: the desired result in the given image. As a typical region-based active contour model, the C-V model performs well on weak boundary detection and is robust to noise, which shows great potential for glacial lake extraction. Glacial lakes are a sensitive indicator of global climate change, so accurately delineating glacial lake boundaries is essential to evaluate the hydrologic and living environment. However, the current methods for glacial lake extraction, mainly water-index methods and recognition/classification methods, are difficult to apply directly to large-scale glacial lake extraction due to the diversity of glacial lakes and the many confounding factors in the imagery, such as image noise, shadows, snow and ice, etc. Given the abovementioned advantages of the C-V model and the difficulties in glacial lake extraction, we introduce the signed pressure force function to improve the C-V model and adapt it to glacial lake extraction. To inspect the extraction results, three typical glacial lake development sites were selected, in the Altai mountains, the central Himalayas and south-eastern Tibet; Landsat8 OLI imagery was used as the experimental data source, with Google Earth imagery as reference data for verifying the results. The experimental results suggest that the improved active contour model we propose can effectively discriminate glacial lakes from complex backgrounds, with a high Kappa coefficient (0.895), especially for some small glacial lakes that constitute weak information in the image. Our findings provide a new approach to improving accuracy where small glacial lakes make up a large proportion of the total, and open up the possibility of automated glacial lake mapping over large areas.
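
The region-fitting core of the C-V model can be sketched without the curvature term: pixels are iteratively reassigned to whichever of the two region mean intensities they are closest to. A real implementation adds the length/curvature regularisation and, in the improved model above, the signed pressure force function; the synthetic "lake" below is purely illustrative:

```python
import numpy as np

def cv_region_split(image, n_iter=20):
    """Two-phase region fitting in the spirit of the C-V energy: alternate
    between computing the inside/outside means (c1, c2) and reassigning each
    pixel to the nearer mean. Returns a boolean mask of the bright region."""
    img = np.asarray(image, float)
    mask = img > img.mean()  # initial "contour": global mean threshold
    for _ in range(n_iter):
        c1, c2 = img[mask].mean(), img[~mask].mean()  # region means
        mask = (img - c1) ** 2 < (img - c2) ** 2      # reassign pixels
    return mask

# Synthetic scene: a bright square "lake" on a darker, noisy background.
rng = np.random.default_rng(3)
img = rng.normal(0.2, 0.05, size=(64, 64))
img[20:40, 20:40] += 0.6
lake = cv_region_split(img)
```

The appeal for weak boundaries is visible here: the assignment depends on region statistics rather than on image gradients at the lake edge.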

  1. Domain analysis and modeling to improve comparability of health statistics. (United States)

    Okada, M; Hashimoto, H; Ohida, T


    Health statistics is an essential element in improving the ability of managers of health institutions, healthcare researchers, policy makers, and health professionals to formulate appropriate courses of action and to make decisions based on evidence. To ensure adequate health statistics, standards are of critical importance. A study on healthcare statistics domain analysis is underway in an effort to improve the usability and comparability of health statistics. The ongoing study focuses on structuring the domain knowledge and making the knowledge explicit, with a data element dictionary at its core. Supplementing the dictionary are a domain term list, a terminology dictionary, and a data model that helps organize the concepts constituting the health statistics domain.

  2. Recent Improvements to the Calibration Models for RXTE/PCA (United States)

    Jahoda, K.


    We are updating the calibration of the PCA to correct for slow variations, primarily in the energy-to-channel relationship. We have also improved the physical model in the vicinity of the Xe K-edge, which should increase the reliability of continuum fits above 20 keV. The improvements to the matrix are especially important for simultaneous observations, where the PCA is often used to constrain the continuum while other, higher-resolution spectrometers are used to study the shape of lines and edges associated with iron.

  3. Biodiversity and Climate Modeling Workshop Series: Identifying gaps and needs for improving large-scale biodiversity models (United States)

    Weiskopf, S. R.; Myers, B.; Beard, T. D.; Jackson, S. T.; Tittensor, D.; Harfoot, M.; Senay, G. B.


    At the global scale, well-accepted global circulation models and agreed-upon scenarios for future climate from the Intergovernmental Panel on Climate Change (IPCC) are available. In contrast, biodiversity modeling at the global scale lacks analogous tools. While there is great interest in developing similar bodies and efforts for international monitoring and modelling of biodiversity at the global scale, equivalent modelling tools are in their infancy. This lack of global biodiversity models, compared to the extensive array of general circulation models, provides a unique opportunity to bring together climate, ecosystem, and biodiversity modeling experts to promote development of integrated approaches in modeling global biodiversity. Improved models are needed to understand how we are progressing towards the Aichi Biodiversity Targets, many of which are not on track to meet the 2020 goal, threatening global biodiversity conservation, monitoring, and sustainable use. We brought together biodiversity, climate, and remote sensing experts to try to 1) identify lessons learned from the climate community that can be used to improve global biodiversity models; 2) explore how NASA and other remote sensing products could be better integrated into global biodiversity models; and 3) advance global biodiversity modeling, prediction, and forecasting to inform the Aichi Biodiversity Targets, the 2030 Sustainable Development Goals, and the Intergovernmental Platform on Biodiversity and Ecosystem Services Global Assessment of Biodiversity and Ecosystem Services. The first in-person meeting focused on determining a roadmap for effective assessment of biodiversity model projections and forecasts by 2030, while integrating and assimilating remote sensing data and applying lessons learned, when appropriate, from climate modeling. Here, we present the outcomes and lessons learned from our first e-discussion and in-person meeting and discuss the next steps for future meetings.

  4. Improvement for Amelioration Inventory Model with Weibull Distribution

    Directory of Open Access Journals (Sweden)

    Han-Wen Tuan


Full Text Available Most inventory models deal with deteriorating items. On the contrary, only a few papers have considered inventory systems in an amelioration environment. We study an amelioration inventory model with Weibull distribution. However, there are some questionable results in the amelioration paper. We first point out those questionable results in the previous paper, which did not derive the optimal solution, and then provide some improvements. We provide rigorous analytical work for different cases depending on the size of the shape parameter. We present a detailed numerical example for different ranges of the shape parameter to illustrate that our solution method attains the optimal solution. We developed a new amelioration model and then provided a detailed analytical procedure to find the optimal solution. Our findings will help researchers develop their new inventory models.
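As background for such models, the instantaneous amelioration rate under a Weibull law is commonly written as θ(t) = αβt^(β−1), so the shape parameter β determines whether the rate grows or decays over time. A minimal sketch (the α and β values here are illustrative, not taken from the paper):

```python
def weibull_rate(t, alpha, beta):
    # Instantaneous (hazard-style) amelioration rate for a Weibull law:
    # theta(t) = alpha * beta * t**(beta - 1)
    return alpha * beta * t ** (beta - 1)

# The shape parameter beta controls whether the rate rises or falls with time
rising = weibull_rate(2.0, alpha=0.05, beta=1.5)   # beta > 1: rate grows
falling = weibull_rate(2.0, alpha=0.05, beta=0.5)  # beta < 1: rate decays
```

For β > 1 the rate rises with time while for β < 1 it decays, which is why the analysis above must split into separate cases by the size of the shape parameter.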

  5. Multilevel models improve precision and speed of IC50 estimates. (United States)

    Vis, Daniel J; Bombardelli, Lorenzo; Lightfoot, Howard; Iorio, Francesco; Garnett, Mathew J; Wessels, Lodewyk Fa


Experimental variation in dose-response data of drugs tested on cell lines results in inaccuracies in the estimate of a key drug sensitivity characteristic: the IC50. We aim to improve the precision of the half-limiting dose (IC50) estimates by simultaneously employing all dose-responses across all cell lines and drugs, rather than using a single drug-cell line response. We propose a multilevel mixed effects model that takes advantage of all available dose-response data. The new estimates are highly concordant with the currently used Bayesian model when the data are well behaved. Otherwise, the multilevel model is clearly superior. The multilevel model yields a significant reduction of extreme IC50 estimates, an increase in precision, and it runs orders of magnitude faster.
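The basic per-curve estimation problem can be sketched with a Hill-type response and a grid search for the IC50; the multilevel pooling across drugs and cell lines described above is not reproduced here, and the function names and synthetic data are purely illustrative:

```python
def hill(dose, ic50, slope=1.0):
    # Fraction of maximal response remaining at a given dose
    return 1.0 / (1.0 + (dose / ic50) ** slope)

def fit_ic50(doses, responses, grid=None):
    # Grid search over candidate IC50 values, minimising squared error
    if grid is None:
        grid = [10 ** (k / 50.0) for k in range(-200, 201)]  # 1e-4 .. 1e4
    return min(grid, key=lambda c: sum((hill(d, c) - r) ** 2
                                       for d, r in zip(doses, responses)))

# Synthetic single-curve data with a known IC50 of 0.5 (arbitrary units)
true_ic50 = 0.5
doses = [0.01 * 2 ** k for k in range(12)]
responses = [hill(d, true_ic50) for d in doses]
est = fit_ic50(doses, responses)
```

A multilevel version would instead share information across curves (e.g. partial pooling of slopes), which is what stabilizes the extreme estimates the abstract mentions.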


    Directory of Open Access Journals (Sweden)

    L. V. Ursulyak


Full Text Available Purpose. Using scientific publications, the paper analyzes the mathematical models developed in Ukraine, CIS countries and abroad for theoretical studies of train dynamics and also shows the urgency of their further improvement. Methodology. The information base of the research was official full-text and abstract databases, scientific works of domestic and foreign scientists, professional periodicals, materials of scientific and practical conferences, and methodological materials of ministries and departments. Analysis of publications on existing mathematical models used to solve a wide range of problems associated with the study of train dynamics shows the expediency of their application. Findings. The results of these studies were used in: (1) design of new types of draft gears and air distributors; (2) development of methods for controlling the movement of conventional and connected trains; (3) creation of appropriate process flow diagrams; (4) development of energy-saving methods of train driving; (5) revision of the Construction Codes and Regulations (SNiP II-39.76); (6) selection of the parameters of the autonomous automatic control system, created in DNURT, for an auxiliary locomotive that is part of a connected train; (7) creation of computer simulators for the training of locomotive drivers; (8) assessment of the vehicle dynamic indices characterizing traffic safety. Scientists around the world conduct numerical experiments related to the estimation of train dynamics using mathematical models that need to be constantly improved. Originality. The authors presented the main theoretical postulates that allowed them to develop the existing mathematical models for solving problems related to train dynamics. The analysis of scientific articles published in Ukraine, CIS countries and abroad allows us to determine the most relevant areas of application of mathematical models. Practical value. The practical value of the results obtained lies in the scientific validity

  7. Improved animal models for testing gene therapy for atherosclerosis. (United States)

    Du, Liang; Zhang, Jingwan; De Meyer, Guido R Y; Flynn, Rowan; Dichek, David A


    Gene therapy delivered to the blood vessel wall could augment current therapies for atherosclerosis, including systemic drug therapy and stenting. However, identification of clinically useful vectors and effective therapeutic transgenes remains at the preclinical stage. Identification of effective vectors and transgenes would be accelerated by availability of animal models that allow practical and expeditious testing of vessel-wall-directed gene therapy. Such models would include humanlike lesions that develop rapidly in vessels that are amenable to efficient gene delivery. Moreover, because human atherosclerosis develops in normal vessels, gene therapy that prevents atherosclerosis is most logically tested in relatively normal arteries. Similarly, gene therapy that causes atherosclerosis regression requires gene delivery to an existing lesion. Here we report development of three new rabbit models for testing vessel-wall-directed gene therapy that either prevents or reverses atherosclerosis. Carotid artery intimal lesions in these new models develop within 2-7 months after initiation of a high-fat diet and are 20-80 times larger than lesions in a model we described previously. Individual models allow generation of lesions that are relatively rich in either macrophages or smooth muscle cells, permitting testing of gene therapy strategies targeted at either cell type. Two of the models include gene delivery to essentially normal arteries and will be useful for identifying strategies that prevent lesion development. The third model generates lesions rapidly in vector-naïve animals and can be used for testing gene therapy that promotes lesion regression. These models are optimized for testing helper-dependent adenovirus (HDAd)-mediated gene therapy; however, they could be easily adapted for testing of other vectors or of different types of molecular therapies, delivered directly to the blood vessel wall. Our data also supports the promise of HDAd to deliver long

  8. Method of similarity for cavitation

    Energy Technology Data Exchange (ETDEWEB)

    Espanet, L.; Tekatlian, A.; Barbier, D. [CEA/Cadarache, Dept. d' Etudes des Combustibles (DEC), 13 - Saint-Paul-lez-Durance (France); Gouin, H. [Aix-Marseille-3 Univ., 13 - Marseille (France). Laboratoire de Modelisation en Mecanique et Thermodynamique


    The knowledge of possible cavitation in subassembly nozzles of the fast reactor core implies the realization of a fluid dynamic model test. We propose a method of similarity based on the non-dimensionalization of the equation of motion for viscous capillarity fluid issued from the Cahn and Hilliard model. Taking into account the dissolved gas effect, a condition of compatibility is determined. This condition must be respected by the fluid in experiment, along with the scaling between the two similar flows. (author)
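The Cahn–Hilliard-based similarity criteria themselves are not given in the abstract, but the classical dimensionless group for cavitation scaling, the cavitation number σ = (p − p_v)/(½ρU²), illustrates the kind of criterion that must match between a model experiment and the full-scale flow. A minimal sketch using textbook water properties:

```python
def cavitation_number(p_ref, p_vapor, density, velocity):
    # Classical cavitation number: sigma = (p_ref - p_vapor) / (0.5 * rho * U^2)
    # Matching sigma between model and prototype is a standard similarity
    # requirement for cavitation experiments.
    return (p_ref - p_vapor) / (0.5 * density * velocity ** 2)

# Water at ~20 degrees C flowing at 10 m/s under 1 atm
sigma = cavitation_number(p_ref=101_325.0, p_vapor=2_339.0,
                          density=998.0, velocity=10.0)
```

The paper's contribution is an additional compatibility condition (accounting for capillarity and dissolved gas) that the test fluid must satisfy on top of such classical groups.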

  9. Method of similarity for cavitation

    International Nuclear Information System (INIS)

    Espanet, L.; Tekatlian, A.; Barbier, D.; Gouin, H.


    The knowledge of possible cavitation in subassembly nozzles of the fast reactor core implies the realization of a fluid dynamic model test. We propose a method of similarity based on the non-dimensionalization of the equation of motion for viscous capillarity fluid issued from the Cahn and Hilliard model. Taking into account the dissolved gas effect, a condition of compatibility is determined. This condition must be respected by the fluid in experiment, along with the scaling between the two similar flows. (author)

  10. An improved thermal model for the computer code NAIAD

    International Nuclear Information System (INIS)

    Rainbow, M.T.


An improved thermal model, based on the concept of heat slabs, has been incorporated as an option into the thermal hydraulic computer code NAIAD. The heat slabs are one-dimensional thermal conduction models with temperature-independent thermal properties which may be internal and/or external to the fluid. Thermal energy may be added to or removed from the fluid via heat slabs and passed across the external boundary of external heat slabs at a rate which is a linear function of the external surface temperatures. The code input for the new option has been restructured to simplify data preparation. A full description of current input requirements is presented.

  11. Fuel assembly bow: analytical modeling and resulting design improvements

    International Nuclear Information System (INIS)

    Stabel, J.; Huebsch, H.P.


The bowing of fuel assemblies may result in contact between neighbouring fuel assemblies and, in combination with vibration, in wear or even perforation at the corners of the spacer grids of neighbouring assemblies. Such events allowed reinsertion of a few fuel assemblies in Germany only after spacer repair. In order to identify the most sensitive parameters causing the observed bowing of fuel assemblies, a new computer model was developed which takes into account the highly nonlinear behaviour of the interaction between fuel rods and spacers. As a result of the studies performed with this model, design improvements, such as a more rigid connection between guide thimbles and spacer grids, could be defined. First experiences with this improved design show significantly better fuel behaviour. (author). 5 figs., 1 tab.

  12. Modelling the Role of Human Resource Management in Continuous Improvement

    DEFF Research Database (Denmark)

    Jørgensen, Frances; Hyland, Paul; Kofoed, Lise B.


Although it is widely acknowledged that both Human Resource Management (HRM) and Continuous Improvement (CI) have the potential to positively influence organizational performance, very little attention has been given to how certain HRM practices may support CI, and consequently, a company's performance. The contribution of the paper is theoretical in nature, as the model developed provides a greater understanding of how HRM can contribute to CI; however, the model also has practical value in that it suggests important relationships between various HRM practices and the behaviors necessary for successful CI.

  13. Palaeogenomics in cereals: modeling of ancestors for modern species improvement. (United States)

    Salse, Jérôme; Feuillet, Catherine


During the last decade, technological improvements led to the development of large sets of plant genomic resources, permitting the emergence of high-resolution comparative genomic studies. Synteny-based identification of seven shared duplications in cereals led to the modeling of a common ancestral genome structure of 33.6 Mb, structured in five protochromosomes containing 9138 protogenes, and provided new insights into the evolution of cereal genomes from their extinct ancestors. Recent palaeogenomic data indicate that whole genome duplications were a driving force in the evolutionary success of cereals over the last 50 to 70 million years. Finally, detailed synteny and duplication relationships led to an improved representation of cereal genomes in concentric circles, thus providing a new reference tool for improved gene annotation and cross-genome marker development. Copyright © 2011 Académie des sciences. Published by Elsevier SAS. All rights reserved.

  14. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models (United States)

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby


Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
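The sub-model idea can be sketched with ordinary least squares in place of PLS: train a full-range model plus range-restricted sub-models, then route each prediction through the sub-model selected by a first-pass full-model estimate. Everything here (the one-feature "spectra", ranges, and threshold) is a toy stand-in for the actual ChemCam calibration:

```python
def linfit(xs, ys):
    # Ordinary least squares for y = a*x + b (closed form)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def predict(model, x):
    a, b = model
    return a * x + b

# Hypothetical "spectra" reduced to one feature x; the true concentration is
# nonlinear, so one global line fits poorly across the full range.
train_x = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
train_y = [x ** 2 for x in train_x]          # stand-in for a matrix effect

full = linfit(train_x, train_y)
low = linfit(train_x[:5], train_y[:5])       # sub-model: low-concentration range
high = linfit(train_x[4:], train_y[4:])      # sub-model: high-concentration range

def submodel_predict(x):
    # A first-pass estimate from the full model routes the sample to a sub-model
    return predict(low if predict(full, x) < 4.0 else high, x)

x_test = 1.25
err_full = abs(predict(full, x_test) - x_test ** 2)
err_sub = abs(submodel_predict(x_test) - x_test ** 2)
```

In the actual method, predictions near a composition-range boundary are blended rather than hard-switched; a hard switch is used here only to keep the sketch short.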

  15. 2009 pandemic H1N1 influenza virus elicits similar clinical course but differential host transcriptional response in mouse, macaque, and swine infection models (United States)


Background The 2009 pandemic H1N1 influenza virus emerged in swine and quickly became a major global health threat. In mouse, non-human primate, and swine infection models, the pH1N1 virus efficiently replicates in the lung and induces pro-inflammatory host responses; however, whether similar or different cellular pathways were impacted by pH1N1 virus across independent infection models remains to be further defined. To address this we have performed a comparative transcriptomic analysis of acute phase responses to a single pH1N1 influenza virus, A/California/04/2009 (CA04), in the lung of mice, macaques and swine. Results Despite similarities in the clinical course, we observed differences in inflammatory molecules elicited, and the kinetics of their gene expression changes across all three species. We found genes associated with the retinoid X receptor (RXR) signaling pathway known to control pro-inflammatory and metabolic processes that were differentially regulated during infection in each species, though the heterodimeric RXR partner, pathway associated signaling molecules, and gene expression patterns varied among the three species. Conclusions By comparing transcriptional changes in the context of clinical and virological measures, we identified differences in the host transcriptional response to pH1N1 virus across independent models of acute infection. Antiviral resistance and the emergence of new influenza viruses have placed more focus on developing drugs that target the immune system. Underlying overt clinical disease are molecular events that suggest therapeutic targets identified in one host may not be appropriate in another. PMID:23153050

  16. Can fire atlas data improve species distribution model projections? (United States)

    Crimmins, Shawn M; Dobrowski, Solomon Z; Mynsberge, Alison R; Safford, Hugh D


    Correlative species distribution models (SDMs) are widely used in studies of climate change impacts, yet are often criticized for failing to incorporate disturbance processes that can influence species distributions. Here we use two temporally independent data sets of vascular plant distributions, climate data, and fire atlas data to examine the influence of disturbance history on SDM projection accuracy through time in the mountain ranges of California, USA. We used hierarchical partitioning to examine the influence of fire occurrence on the distribution of 144 vascular plant species and built a suite of SDMs to examine how the inclusion of fire-related predictors (fire occurrence and departure from historical fire return intervals) affects SDM projection accuracy. Fire occurrence provided the least explanatory power among predictor variables for predicting species' distributions, but provided improved explanatory power for species whose regeneration is tied closely to fire. A measure of the departure from historic fire return interval had greater explanatory power for calibrating modern SDMs than fire occurrence. This variable did not improve internal model accuracy for most species, although it did provide marginal improvement to models for species adapted to high-frequency fire regimes. Fire occurrence and fire return interval departure were strongly related to the climatic covariates used in SDM development, suggesting that improvements in model accuracy may not be expected due to limited additional explanatory power. Our results suggest that the inclusion of coarse-scale measures of disturbance in SDMs may not be necessary to predict species distributions under climate change, particularly for disturbance processes that are largely mediated by climate.

  17. A communication tool to improve the patient journey modeling process. (United States)

    Curry, Joanne; McGregor, Carolyn; Tracy, Sally


Quality improvement is high on the agenda of Health Care Organisations (HCO) worldwide. Patient journey modeling is a relatively recent innovation in healthcare quality improvement that models the patient's movement through the HCO by viewing it from a patient centric perspective. Critical to the success of the redesigning care process is the involvement of all stakeholders and their commitment to actively participate in the process. Tools which promote this type of communication are a critical enabler that can significantly affect the overall process redesign outcomes. Such a tool must also be able to incorporate additional factors such as relevant policies and procedures, staff roles, system usage and measurements such as process time and cost. This paper presents a graphically based communication tool that can be used as part of the patient journey modeling process to promote stakeholder involvement, commitment and ownership as well as highlighting the relationship of other relevant variables that contribute to the patient's journey. Examples of how the tool has been used and the framework employed are demonstrated via a midwife-led primary care case study. A key contribution of this research is the provision of a graphical communication framework that is simple to use, is easily understood by a diverse range of stakeholders and enables ready recognition of patient journey issues. Results include strong stakeholder buy-in and significant enhancement to the overall design of the future patient journey. Initial results indicate that the use of such a communication tool can improve the patient journey modeling process and the overall quality improvement outcomes.

  18. An improved experimental model for peripheral neuropathy in rats

    Energy Technology Data Exchange (ETDEWEB)

    Dias, Q.M.; Rossaneis, A.C.; Fais, R.S.; Prado, W.A. [Departamento de Farmacologia, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, SP (Brazil)


    A modification of the Bennett and Xie chronic constriction injury model of peripheral painful neuropathy was developed in rats. Under tribromoethanol anesthesia, a single ligature with 100% cotton glace thread was placed around the right sciatic nerve proximal to its trifurcation. The change in the hind paw reflex threshold after mechanical stimulation observed with this modified model was compared to the change in threshold observed in rats subjected to the Bennett and Xie or the Kim and Chung spinal ligation models. The mechanical threshold was measured with an automated electronic von Frey apparatus 0, 2, 7, and 14 days after surgery, and this threshold was compared to that measured in sham rats. All injury models produced significant hyperalgesia in the operated hind limb. The modified model produced mean ± SD thresholds in g (19.98 ± 3.08, 14.98 ± 1.86, and 13.80 ± 1.00 at 2, 7, and 14 days after surgery, respectively) similar to those obtained with the spinal ligation model (20.03 ± 1.99, 13.46 ± 2.55, and 12.46 ± 2.38 at 2, 7, and 14 days after surgery, respectively), but less variable when compared to the Bennett and Xie model (21.20 ± 8.06, 18.61 ± 7.69, and 18.76 ± 6.46 at 2, 7, and 14 days after surgery, respectively). The modified method required less surgical skill than the spinal nerve ligation model.
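Using the day-14 thresholds reported above, the claimed reduction in variability can be expressed as a coefficient of variation (SD relative to the mean), a standard way to compare spread across groups with different means:

```python
# Day-14 mechanical thresholds (mean, SD) in g, as reported in the abstract
day14 = {
    "modified": (13.80, 1.00),
    "spinal_ligation": (12.46, 2.38),
    "bennett_xie": (18.76, 6.46),
}

def cv(mean, sd):
    # Coefficient of variation as a percentage of the mean
    return 100.0 * sd / mean

cvs = {name: cv(m, s) for name, (m, s) in day14.items()}
```

The modified model's coefficient of variation (about 7%) is well below that of the spinal ligation (about 19%) and Bennett and Xie (about 34%) models, quantifying the "less variable" claim.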

  19. An improved experimental model for peripheral neuropathy in rats

    Directory of Open Access Journals (Sweden)

    Q.M. Dias

Full Text Available A modification of the Bennett and Xie chronic constriction injury model of peripheral painful neuropathy was developed in rats. Under tribromoethanol anesthesia, a single ligature with 100% cotton glace thread was placed around the right sciatic nerve proximal to its trifurcation. The change in the hind paw reflex threshold after mechanical stimulation observed with this modified model was compared to the change in threshold observed in rats subjected to the Bennett and Xie or the Kim and Chung spinal ligation models. The mechanical threshold was measured with an automated electronic von Frey apparatus 0, 2, 7, and 14 days after surgery, and this threshold was compared to that measured in sham rats. All injury models produced significant hyperalgesia in the operated hind limb. The modified model produced mean ± SD thresholds in g (19.98 ± 3.08, 14.98 ± 1.86, and 13.80 ± 1.00 at 2, 7, and 14 days after surgery, respectively) similar to those obtained with the spinal ligation model (20.03 ± 1.99, 13.46 ± 2.55, and 12.46 ± 2.38 at 2, 7, and 14 days after surgery, respectively), but less variable when compared to the Bennett and Xie model (21.20 ± 8.06, 18.61 ± 7.69, and 18.76 ± 6.46 at 2, 7, and 14 days after surgery, respectively). The modified method required less surgical skill than the spinal nerve ligation model.

  20. An Improved Dynamic Model for the Respiratory Response to Exercise

    Directory of Open Access Journals (Sweden)

    Leidy Y. Serna


Full Text Available Respiratory system modeling has been extensively studied in steady-state conditions to simulate sleep disorders, to predict its behavior under ventilatory diseases or stimuli, and to simulate its interaction with mechanical ventilation. Nevertheless, studies focused on the instantaneous response are limited, which restricts its application in clinical practice. The aim of this study is twofold: firstly, to analyze both dynamic and static responses of two known respiratory models under exercise stimuli, using an incremental exercise stimulus sequence (to analyze the model responses when step inputs are applied) and experimental data (to assess the prediction capability of each model); secondly, to propose changes in the models' structures to improve their transient and stationary responses. The versatility of the resulting model vs. the other two is shown by its ability to simulate ventilatory stimuli, like exercise, with proper regulation of the arterial blood gases, suitable time constants and a better fit to experimental data. The proposed model adjusts the breathing pattern every respiratory cycle using an optimization criterion based on minimization of the work of breathing through regulation of respiratory frequency.
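The minimization idea can be sketched with an Otis-style cost in which, for a fixed alveolar ventilation, tidal volume trades off against frequency; the elastance, resistance, and ventilation values below are illustrative stand-ins, not the paper's parameters:

```python
def work_of_breathing(f, v_alv=6.0, v_dead=0.15, e=10.0, r=2.0):
    # Hypothetical per-minute cost: an elastic term that grows with tidal
    # volume and a resistive term that grows with frequency (illustrative units).
    vt = v_alv / f + v_dead  # tidal volume (L) needed for fixed alveolar
                             # ventilation, given dead space v_dead
    return e * f * vt ** 2 / 2 + r * f ** 2 * vt ** 2 / 2

# Scan candidate respiratory frequencies (breaths/min) for the minimum-work pattern
freqs = [0.1 * k for k in range(50, 400)]  # 5.0 .. 39.9 breaths/min
f_opt = min(freqs, key=work_of_breathing)
```

The scan finds an interior optimum because the elastic cost penalizes slow, deep breathing while the resistive cost penalizes fast, shallow breathing, which is the essence of selecting a breathing pattern by minimizing work of breathing.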

  1. [Improvement of genetics teaching using literature-based learning model]. (United States)

    Liang, Liang; Liang, Shi-qian; Qin, Hong-yan; Ji, Yong; Han, Hua


Genetics is one of the most important courses for undergraduate students majoring in life science. In recent years, new knowledge and technologies have been continually updated with a deeper understanding of life science. However, the teaching of genetics is still based on theoretical instruction, which makes the abstract principles hard for students to understand and directly affects the teaching effect. Thus, exploring a new teaching model is necessary. Since 2010, we have carried out a new teaching model, literature-based learning, in the Microbial Genetics course for undergraduate students majoring in biotechnology. Here we comprehensively analyze the implementation and application value of this model, including pre-course knowledge, how to choose professional literature, how to organize the teaching process, and the significance of developing this new teaching model for students and teachers. Our literature-based learning model reflects the combination of the "cutting-edge" and the "classic" and makes book knowledge easy to understand, which improves students' learning effect, stimulates their interest, expands their perspectives and develops their abilities. This practice provides novel insight into exploring new teaching models for genetics and cultivating medical talents capable of doing both basic and clinical research in the "precision medicine" era.

  2. Improved dust representation in the Community Atmosphere Model (United States)

    Albani, S.; Mahowald, N. M.; Perry, A. T.; Scanza, R. A.; Zender, C. S.; Heavens, N. G.; Maggi, V.; Kok, J. F.; Otto-Bliesner, B. L.


Aerosol-climate interactions constitute one of the major sources of uncertainty in assessing changes in aerosol forcing in the Anthropocene as well as understanding glacial-interglacial cycles. Here we focus on improving the representation of mineral dust in the Community Atmosphere Model and assessing the impacts of the improvements in terms of direct effects on the radiative balance of the atmosphere. We simulated the dust cycle using different parameterization sets for dust emission, size distribution, and optical properties. Comparing the results of these simulations with observations of concentration, deposition, and aerosol optical depth allows us to refine the representation of the dust cycle and its climate impacts. We propose a tuning method for dust parameterizations to allow the dust module to work across the wide variety of parameter settings which can be used within the Community Atmosphere Model. Our results include a better representation of the dust cycle, most notably for the improved size distribution. The estimated net top of atmosphere direct dust radiative forcing is -0.23 ± 0.14 W/m2 for present day and -0.32 ± 0.20 W/m2 at the Last Glacial Maximum. From our study and sensitivity tests, we also derive some general relevant findings, supporting the concept that the magnitude of the modeled dust cycle is sensitive to the observational data sets and size distribution chosen to constrain the model as well as the meteorological forcing data, even within the same modeling framework, and that the direct radiative forcing of dust is strongly sensitive to the optical properties and size distribution used.

  3. Hydrogeological modeling for improving groundwater monitoring network and strategies (United States)

    Thakur, Jay Krishna


The research aimed to investigate a new approach to spatiotemporal groundwater monitoring network optimization, using hydrogeological modeling to improve monitoring strategies. Unmonitored concentrations at different potential monitoring locations were incorporated into the groundwater monitoring optimization method. The proposed method was applied in the contaminated megasite Bitterfeld/Wolfen, Germany. Based on an existing 3-D geological model, 3-D groundwater flow was obtained from flow velocity simulation using initial and boundary conditions. The 3-D groundwater transport model was used to simulate transport of α-HCH with an initial ideal concentration of 100 mg/L injected at various hydrogeological layers in the model. Particle tracking for contaminant and groundwater flow velocity realizations were made. The spatial optimization result suggested that 30 out of 462 wells in the Quaternary aquifer (6.49%) and 14 out of 357 wells in the Tertiary aquifer (3.92%) were redundant. With a gradual increase in the width of the particle track path line, from 0 to 100 m, the number of redundant wells increased remarkably in both aquifers. The results of temporal optimization showed different sampling frequencies for monitoring wells. The groundwater and contaminant flow direction resulting from particle tracks obtained from hydrogeological modeling was verified by variogram modeling using α-HCH data from 2003 to 2009. Groundwater monitoring strategies can be substantially improved by removing the existing spatio-temporal redundancy as well as by incorporating unmonitored locations into the network, along with sampling at recommended time intervals. However, the use of this model-based method is recommended only in areas where site-specific expert knowledge is available.
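The spatial-redundancy step can be caricatured by a greedy distance-based thinning of wells; the real method ranks wells using modeled particle tracks and concentration information rather than raw distance, and the coordinates below are made up:

```python
def thin_wells(wells, min_dist):
    # Greedy thinning: keep a well only if it is at least min_dist away
    # from every well already kept; the rest are flagged as redundant.
    kept = []
    for x, y in wells:
        if all((x - kx) ** 2 + (y - ky) ** 2 >= min_dist ** 2
               for kx, ky in kept):
            kept.append((x, y))
    return kept

# Hypothetical well coordinates in metres; two clusters contain near-duplicates
wells = [(0, 0), (10, 5), (12, 6), (300, 40), (305, 42), (800, 100)]
kept = thin_wells(wells, min_dist=50.0)
redundant = len(wells) - len(kept)
```

Widening the acceptance criterion (here, `min_dist`; in the paper, the particle track path-line width) flags more wells as redundant, matching the trend the abstract reports.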

  4. Improving ammonia emissions in air quality modelling for France (United States)

    Hamaoui-Laguel, Lynda; Meleux, Frédérik; Beekmann, Matthias; Bessagnet, Bertrand; Génermont, Sophie; Cellier, Pierre; Létinois, Laurent


We have implemented a new module to improve the representation of ammonia emissions from agricultural activities in France with the objective to evaluate the impact of such emissions on the formation of particulate matter modelled with the air quality model CHIMERE. A novel method has been set up for the part of ammonia emissions originating from mineral fertilizer spreading. They are calculated using the one-dimensional (1D) mechanistic model “VOLT'AIR”, which has been coupled with data on agricultural practices, meteorology and soil properties obtained at high spatial resolution (cantonal level). These emissions display high spatiotemporal variations depending on soil pH, rates and dates of fertilization and meteorological variables, especially soil temperature. The emissions from other agricultural sources (animal housing, manure storage and organic manure spreading) are calculated using the national spatialised inventory (INS) recently developed in France. The comparison of the total ammonia emissions estimated with the new approach VOLT'AIR_INS with the standard emissions provided by EMEP (European Monitoring and Evaluation Programme) currently used in the CHIMERE model shows significant differences in the spatiotemporal distributions. The implementation of new ammonia emissions in the CHIMERE model has a limited impact on ammonium nitrate aerosol concentrations, which only increase at most by 10% on average for the considered spring period, but this impact can be more significant for specific pollution episodes. The comparison of modelled PM10 (particulate matter with aerodynamic diameter smaller than 10 μm) and ammonium nitrate aerosol with observations shows that the use of the new ammonia emission method slightly improves the spatiotemporal correlation in certain regions and reduces the negative bias on average by 1 μg m-3. The formation of ammonium nitrate aerosol depends not only on ammonia concentrations but also on nitric acid availability, which

  5. Improvement of a 2D numerical model of lava flows (United States)

    Ishimine, Y.


I propose an improved procedure that reduces an improper dependence of lava flow directions on the orientation of the Digital Elevation Model (DEM) in two-dimensional simulations based on Ishihara et al. (in Lava Flows and Domes, Fink, JH eds., 1990). The numerical model for lava flow simulations proposed by Ishihara et al. (1990) is based on a two-dimensional shallow water model combined with a constitutive equation for a Bingham fluid. It is simple but useful because it properly reproduces the distributions of actual lava flows. Thus, it is regarded as a pioneering work in the numerical simulation of lava flows and is still widely used in practical hazard prediction maps for civil defense officials in Japan. However, the model includes an improper dependence of lava flow directions on the orientation of the DEM, because it separately assigns the condition for the lava flow to stop due to yield stress for each of the two orthogonal axes of the rectangular calculation grid based on the DEM. This procedure produces a diamond-shaped distribution, as shown in Fig. 1, when calculating a lava flow supplied from a point source on a virtual flat plane, although the distribution should be circle-shaped. To remedy this drawback, I propose a modified procedure that uses the absolute value of the yield stress derived from both components of the two orthogonal directions of the slope steepness to assign the condition for lava flows to stop. This produces a better result, as shown in Fig. 2. Fig. 1. (a) Contour plots calculated with the original model of Ishihara et al. (1990). (b) Contour plots calculated with a proposed model.
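The difference between the two stopping criteria can be made concrete: testing the yield condition separately on each grid axis accepts a diagonal slope that a magnitude-based test rejects, which is the source of the diamond-shaped artifact. A schematic sketch (the threshold and slope values are illustrative, not the model's actual Bingham parameters):

```python
import math

def stops_per_axis(sx, sy, tau):
    # Original scheme: the stopping condition is tested separately
    # along each grid axis, so the criterion is axis-dependent.
    return abs(sx) < tau and abs(sy) < tau

def stops_magnitude(sx, sy, tau):
    # Improved scheme: the stopping condition is tested against the
    # magnitude of the slope vector, which is rotation-invariant.
    return math.hypot(sx, sy) < tau

# A diagonal slope just under the per-axis threshold on both axes:
# the per-axis test halts the flow, the magnitude test keeps it moving.
diag_stops_old = stops_per_axis(0.9, 0.9, tau=1.0)
diag_stops_new = stops_magnitude(0.9, 0.9, tau=1.0)
```

Because the per-axis test halts diagonal flow earlier than flow aligned with a grid axis, the deposit spreads further along the axes than along the diagonals, yielding the diamond instead of a circle.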

  6. Nicaraguan and US nursing collaborative evaluation study: Identifying similarities and differences between US and Nicaraguan curricula and teaching modalities using the community engagement model. (United States)

    Lake, Donna; Engelke, Martha K; Kosko, Debra A; Roberson, Donna W; Jaime, Joba Fany; López, Feliciana Rojas; Rivas, Fidelia Mercedes Poveda; Salazar, Yolanda Matute; Salmeron, Juana Julia


    Curriculum evaluation is an essential phase of curriculum development. This study describes the implementation of a formative evaluation conducted by faculty members of the Universidad Nacional Autonóma de Nicaragua (UNAN-Leon) Escuela de Enfermeriá, Nicaragua, and the East Carolina University College of Nursing (ECU CON) in North Carolina, US. The program evaluation study assessed and compared a medical-surgical adult curriculum and teaching modalities, and explored the Community Engagement (CE) Model as a means of building a Central American-US faculty partnership. This methodological evaluation study utilized a newly developed International Nursing Education Curriculum Evaluation Tool related to adult medical and surgical nursing standards; the CE Model was also tested as a facilitation tool for building partnerships between nurse educators. Nicaraguan and US nursing faculty teams conducted the curriculum evaluation by applying the International Nursing Education Curriculum Evaluation Tool (INECET), reviewing 57 elements covering 6 domains related to adult medical and surgical nursing standards. The INECET was developed, and its utilization explored, based on a standards-of-practice framework. The Community Engagement Model, a five-phase cycle (Inform, Consult, Involve, Collaborate, and Empower), was used to facilitate the collaborative process. Similarities between the US and Nicaraguan curricula and teaching modalities were identified across the 57 elements of the 6-domain assessment tool. Case studies, lectures, and clinical hospital rotations were used as teaching modalities in both programs. Both schools lacked sufficient time for clinical practicum. Among the differences, UNAN-Leon lacked a simulation skill lab, equipment, and space, whereas ECU CON had sufficient resources; the ECU school lacked applied case studies from a rural health medical-surgical adult nursing perspective and spent less time in rural health clinics. The UNAN-Leon nursing standards generalized based on

  7. Renewing the Respect for Similarity

    Directory of Open Access Journals (Sweden)

    Shimon eEdelman


    Full Text Available In psychology, the concept of similarity has traditionally evoked a mixture of respect, stemming from its ubiquity and intuitive appeal, and concern, due to its dependence on the framing of the problem at hand and on its context. We argue for a renewed focus on similarity as an explanatory concept, by surveying established results and new developments in the theory and methods of similarity-preserving associative lookup and dimensionality reduction — critical components of many cognitive functions, as well as of intelligent data management in computer vision. We focus in particular on the growing family of algorithms that support associative memory by performing hashing that respects local similarity, and on the uses of similarity in representing structured objects and scenes. Insofar as these similarity-based ideas and methods are useful in cognitive modeling and in AI applications, they should be included in the core conceptual toolkit of computational neuroscience.
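As a concrete illustration of hashing that respects local similarity, here is a minimal random-hyperplane (SimHash-style) sketch for cosine similarity; this is a generic textbook construction, not a method from the surveyed work, and all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def simhash_signature(x, planes):
    """Locality-sensitive hash for cosine similarity: the sign pattern
    of projections onto random hyperplanes. Nearby vectors tend to
    agree on most bits and so land in the same bucket."""
    return tuple((planes @ x > 0).astype(int))

dim, n_bits = 64, 16
planes = rng.standard_normal((n_bits, dim))

a = rng.standard_normal(dim)
b = a + 0.01 * rng.standard_normal(dim)   # near-duplicate of a
c = rng.standard_normal(dim)              # unrelated vector

sig_a, sig_b, sig_c = (simhash_signature(v, planes) for v in (a, b, c))
agree_ab = sum(i == j for i, j in zip(sig_a, sig_b))  # high: similar inputs
agree_ac = sum(i == j for i, j in zip(sig_a, sig_c))  # ~half the bits
```

Associative lookup then reduces to comparing short signatures (or hashing on signature prefixes) rather than full high-dimensional vectors.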

  8. Vitis labrusca extract effects on cellular dynamics and redox modulations in a SH-SY5Y neuronal cell model: a similar role to lithium. (United States)

    Scola, Gustavo; Laliberte, Victoria Louise Marina; Kim, Helena Kyunghee; Pinguelo, Arsene; Salvador, Mirian; Young, L Trevor; Andreazza, Ana Cristina


    Oxidative stress and calcium imbalance are consistently reported in bipolar disorder (BD). Polymorphism of voltage-dependent calcium channel, L type, alpha 1C subunit (CACNA1c), which is responsible for the regulation of calcium influx, was also shown to have a strong association with BD. These alterations can lead to a number of different consequences in the cell including production of reactive species causing oxidative damage to proteins, lipids and DNA. Lithium is the most frequent medication used for the treatment of BD. Despite lithium's effects, long-term use can result in many negative side effects. Therefore, there is an urgent need for the development of drugs that may have similar biological effects as lithium without the negative consequences. Moreover, polyphenols are secondary metabolites of plants that present multi-faceted molecular abilities, such as regulation of cellular responses. Vitis labrusca extract (VLE), a complex mixture of polyphenols obtained from seeds of winery wastes of V. labrusca, was previously characterized by our group. This extract presented powerful antioxidant and neuroprotective properties. Therefore, the ability of VLE to ameliorate the consequences of hydrogen peroxide (H2O2)-induced redox alterations to cell viability, intracellular calcium levels and the relative levels of the calcium channel CACNA1c in comparison to lithium's effects were evaluated using a neuroblastoma cell model. H2O2 treatment increased cell mortality through apoptotic and necrotic pathways leading to an increase in intracellular calcium levels and alterations to relative CACNA1c levels. VLE and lithium were found to similarly ameliorate cell mortality through regulation of the apoptotic/necrotic pathways, decreasing intracellular calcium levels and preventing alterations to the relative levels of CACNA1c. 
The findings of this study suggest that VLE exhibits protective properties against oxidative stress-induced alterations similar to those of lithium.

  9. Strong similarities in the creep and damage behaviour of a synthetic bone model compared to human trabecular bone under compressive cyclic loading. (United States)

    Purcell, Philip; Tiernan, Stephen; McEvoy, Fiona; Morris, Seamus


    Understanding the failure modes which instigate vertebral collapse requires the determination of trabecular bone fatigue properties, since many of these fractures are observed clinically without any preceding overload event. Alternatives to biological bone tissue for in-vitro fatigue studies are available in the form of commercially available open cell polyurethane foams. These test surrogates offer particular advantages compared to biological tissue such as a controllable architecture and greater uniformity. The present study provides a critical evaluation of these models as a surrogate to human trabecular bone tissue for the study of vertebral augmentation treatments such as balloon kyphoplasty. The results of this study show that while statistically significant differences were observed for the damage response of the two materials, both share a similar three-phase modulus reduction over their life span with complete failure rapidly ensuing at damage levels above 30%. No significant differences were observed for creep accumulation properties, with greater than 50% of creep strains being accumulated during the first quarter of the life span for both materials. A significant power law relationship was identified between damage accumulation rate and cycles to failure for the synthetic bone model along with comparable microarchitectural features and a hierarchical composite structure consistent with biological bone. These findings illustrate that synthetic bone models offer potential as a surrogate for trabecular bone to an extent that warrants a full validation study to define boundaries of use which complement traditional tests using biological bone. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Alterations in endo-lysosomal function induce similar hepatic lipid profiles in rodent models of drug-induced phospholipidosis and Sandhoff disease. (United States)

    Lecommandeur, Emmanuelle; Baker, David; Cox, Timothy M; Nicholls, Andrew W; Griffin, Julian L


    Drug-induced phospholipidosis (DIPL) is characterized by an increase in the phospholipid content of the cell and the accumulation of drugs and lipids inside the lysosomes of affected tissues, including in the liver. Although of uncertain pathological significance for patients, the condition remains a major impediment for the clinical development of new drugs. Human Sandhoff disease (SD) is caused by inherited defects of the β subunit of lysosomal β-hexosaminidases (Hex) A and B, leading to a large array of symptoms, including neurodegeneration and ultimately death by the age of 4 in its most common form. The substrates of Hex A and B, gangliosides GM2 and GA2, accumulate inside the lysosomes of the CNS and in peripheral organs. Given that both DIPL and SD are associated with lysosomes and lipid metabolism in general, we measured the hepatic lipid profiles in rodent models of these two conditions using untargeted LC/MS to examine potential commonalities. Both model systems shared a number of perturbed lipid pathways, notably those involving metabolism of cholesteryl esters, lysophosphatidylcholines, bis(monoacylglycero)phosphates, and ceramides. We report here profound alterations in lipid metabolism in the SD liver. In addition, DIPL induced a wide range of lipid changes not previously observed in the liver, highlighting similarities with those detected in the model of SD and raising concerns that these lipid changes may be associated with the underlying pathology of lysosomal storage disorders. Copyright © 2017 by the American Society for Biochemistry and Molecular Biology, Inc.

  11. Renin-angiotensin system transgenic mouse model recapitulates pathophysiology similar to human preeclampsia with renal injury that may be mediated through VEGF. (United States)

    Denney, J Morgan; Bird, Cynthia; Gendron-Fitzpatrick, Annette; Sampene, Emmanuel; Bird, Ian M; Shah, Dinesh M


    Using a transgenic cross, we evaluated features of preeclampsia, renal injury and the sFlt1/VEGF changes. Transgenic hAGT and hREN, or wild-type (WT) C57Bl/6 mice were cross-bred: female hAGT × male hREN for the preeclampsia (PRE) model and female WT × male WT for pregnant controls (WTP). Samples were collected for plasma VEGF, sFlt1, and urine albumin. Blood pressures (BP) were monitored by telemetry. Vascular reactivity was investigated by wire myography. Kidneys and placenta were immunostained for sFlt1 and VEGF. Eleven PRE and 9 WTP mice were compared. PRE more frequently demonstrated albuminuria, glomerular endotheliosis (80% vs. 11%; P = 0.02), and placental necrosis (60% vs. 0%; P …). This model of preeclampsia recapitulates the human preeclamptic state with high fidelity, and vascular adaptation to pregnancy is suggested by declining BPs and reduced vascular response to PE and increased response to acetylcholine. Placental damage with resultant increased release of sFlt1, proteinuria, deficient spiral artery remodeling, and glomerular endotheliosis were observed in this model of PRE. Increased VEGF binding to glomerular endothelial cells in this model of PRE is similar to human PRE and leads us to hypothesize that renal injury in preeclampsia may be mediated through local VEGF. Copyright © 2017 the American Physiological Society.

  12. Improving NASA's Multiscale Modeling Framework for Tropical Cyclone Climate Study (United States)

    Shen, Bo-Wen; Nelson, Bron; Cheung, Samson; Tao, Wei-Kuo


    One of the current challenges in tropical cyclone (TC) research is how to improve our understanding of TC interannual variability and the impact of climate change on TCs. Recent advances in global modeling, visualization, and supercomputing technologies at NASA show potential for such studies. In this article, the authors discuss recent scalability improvement to the multiscale modeling framework (MMF) that makes it feasible to perform long-term TC-resolving simulations. The MMF consists of the finite-volume general circulation model (fvGCM), supplemented by a copy of the Goddard cumulus ensemble model (GCE) at each of the fvGCM grid points, giving 13,104 GCE copies. The original fvGCM implementation has a 1D data decomposition; the revised MMF implementation retains the 1D decomposition for most of the code, but uses a 2D decomposition for the massive copies of GCEs. Because the vast majority of computation time in the MMF is spent computing the GCEs, this approach can achieve excellent speedup without incurring the cost of modifying the entire code. Intelligent process mapping allows differing numbers of processes to be assigned to each domain for load balancing. The revised parallel implementation shows highly promising scalability, obtaining a nearly 80-fold speedup by increasing the number of cores from 30 to 3,335.
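The load-balancing idea, assigning each domain a number of processes proportional to its workload, can be sketched with a largest-remainder allocation (a generic scheme for illustration, not the MMF's actual mapping code):

```python
def assign_processes(workloads, total_procs):
    """Allocate processes to domains in proportion to workload
    (largest-remainder method): heavier domains, such as the GCE
    copies that dominate MMF run time, receive more processes."""
    total = float(sum(workloads))
    raw = [w * total_procs / total for w in workloads]
    alloc = [int(r) for r in raw]           # integer floor of each share
    leftover = total_procs - sum(alloc)     # processes still unassigned
    # hand leftovers to the domains with the largest fractional parts
    order = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i],
                   reverse=True)
    for i in order[:leftover]:
        alloc[i] += 1
    return alloc
```

In the MMF the decomposition also differs by code region (1D for the fvGCM, 2D for the massive set of GCE copies), but the proportional-assignment principle is the same.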

  13. Managing health care decisions and improvement through simulation modeling. (United States)

    Forsberg, Helena Hvitfeldt; Aronsson, Håkan; Keller, Christina; Lindblad, Staffan


    Simulation modeling is a way to test changes in a computerized environment to give ideas for improvements before implementation. This article reviews research literature on simulation modeling as support for health care decision making. The aim is to investigate the experience and potential value of such decision support and quality of articles retrieved. A literature search was conducted, and the selection criteria yielded 59 articles derived from diverse applications and methods. Most met the stated research-quality criteria. This review identified how simulation can facilitate decision making and that it may induce learning. Furthermore, simulation offers immediate feedback about proposed changes, allows analysis of scenarios, and promotes communication on building a shared system view and understanding of how a complex system works. However, only 14 of the 59 articles reported on implementation experiences, including how decision making was supported. On the basis of these articles, we proposed steps essential for the success of simulation projects, not just in the computer, but also in clinical reality. We also presented a novel concept combining simulation modeling with the established plan-do-study-act cycle for improvement. Future scientific inquiries concerning implementation, impact, and the value for health care management are needed to realize the full potential of simulation modeling.

  14. The improved sequential puff model for atmospheric dispersion evaluation (SPADE)

    International Nuclear Information System (INIS)

    Desiato, F.


    The present report describes the improved version of the Sequential Puff for Atmospheric Dispersion Evaluation model (SPADE), developed at ENEA-DISP as a component of ARIES (Atmospheric Release Impact Evaluation System). SPADE was originally designed for real-time assessment of the consequences of a nuclear release into the atmosphere, but it is also suited for sensitivity studies, investigations, or routine applications. It can estimate ground-level air concentrations, deposition and cloud γ dose rate in flat or gently rolling terrain in the vicinity of a point source. During the last years several aspects of the modelling of dispersion processes have been improved, and new modules have been implemented in SPADE. In the first part of the report, a general description of the model is given, and the assumptions and parameterizations used to simulate the main physical processes are described. The second part concerns the structure of the computer code and of the input and output files, and can be regarded as a user's guide to the model. (author)

  15. Do number of days with low back pain and patterns of episodes of pain have similar outcomes in a biopsychosocial prediction model?

    DEFF Research Database (Denmark)

    Lemeunier, N; Leboeuf-Yde, C; Gagey, O


    PURPOSES: We used two different methods to classify low back pain (LBP) in the general population (1) to assess the overlapping of individuals within the different subgroups in those two classifications, (2) to explore if the associations between LBP and some selected bio-psychosocial factors are similar, regardless which of the two classifications is used. METHOD: During 1 year, 49- or 50-year-old people from the Danish general population were sent fortnightly automated text messages (SMS-Track) asking them if they had any LBP in the past fortnight. Responses for the whole year were … with a questionnaire at baseline 9 years earlier, were entered into regression models to investigate their associations with the subgroups of the two classifications of LBP and the results compared. RESULTS: The percentage of agreement between categories of the two classification systems was above 68 % (Kappa 0…

  16. Improved stoves in India: A study of sustainable business models

    International Nuclear Information System (INIS)

    Shrimali, Gireesh; Slaski, Xander; Thurber, Mark C.; Zerriffi, Hisham


    Burning of biomass for cooking is associated with health problems and climate change impacts. Many previous efforts to disseminate improved stoves – primarily by governments and NGOs – have not been successful. Based on interviews with 12 organizations selling improved biomass stoves, we assess the results to date and future prospects of commercial stove operations in India. Specifically, we consider how the ability of these businesses to achieve scale and become self-sustaining has been influenced by six elements of their respective business models: design, customers targeted, financing, marketing, channel strategy, and organizational characteristics. The two companies with the most stoves in the field shared in common generous enterprise financing, a sophisticated approach to developing a sales channel, and many person-years of management experience in marketing and operations. And yet the financial sustainability of improved stove sales to households remains far from assured. The only company in our sample with demonstrated profitability is a family-owned business selling to commercial rather than household customers. The stove sales leader is itself now turning to the commercial segment to maintain flagging cash flow, casting doubt on the likelihood of large positive impacts on health from sales to households in the near term. - Highlights: ► Business models to sell improved stoves can be viable in India. ► Commercial stove efforts may not be able to deliver all the benefits hoped for. ► The government could play a useful role if policies are targeted and well thought-out. ► Develops models for that hard-to-define entity mixing business and charity.

  17. Dialect Topic Modeling for Improved Consumer Medical Search

    Energy Technology Data Exchange (ETDEWEB)

    Crain, Steven P. [Georgia Institute of Technology; Yang, Shuang-Hong [Georgia Institute of Technology; Zha, Hongyuan [Georgia Institute of Technology; Jiao, Yu [ORNL


    Access to health information by consumers is hampered by a fundamental language gap. Current attempts to close the gap leverage consumer oriented health information, which does not, however, have good coverage of slang medical terminology. In this paper, we present a Bayesian model to automatically align documents with different dialects (slang, common and technical) while extracting their semantic topics. The proposed diaTM model enables effective information retrieval, even when the query contains slang words, by explicitly modeling the mixtures of dialects in documents and the joint influence of dialects and topics on word selection. Simulations using consumer questions to retrieve medical information from a corpus of medical documents show that diaTM achieves a 25% improvement in information retrieval relevance by nDCG@5 over an LDA baseline.
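For reference, the nDCG@5 relevance metric quoted above can be computed as follows (standard definition; the graded relevance labels used in the example are illustrative, not the paper's data):

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked results."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """nDCG@k: DCG of the ranking divided by DCG of the ideal ranking."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0
```

A 25% improvement in nDCG@5 over the LDA baseline then means the averaged score is 1.25 times that of the baseline ranking.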

  18. Improving Saliency Models by Predicting Human Fixation Patches

    KAUST Repository

    Dubey, Rachit


    There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e. high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84 % and eliminates non-fixation patches with an accuracy of 84 % demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus, reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.
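One simple way detected fixation patches could act as a saliency prior, in the spirit described above, is to attenuate saliency outside the predicted patches (an illustrative re-weighting with a made-up suppression factor; the paper's actual combination rule may differ):

```python
import numpy as np

def apply_patch_prior(saliency, patch_mask, suppress=0.2):
    """Down-weight saliency outside predicted fixation patches.
    Pixels inside a predicted patch keep their saliency; pixels
    outside are attenuated, reducing false positives while
    preserving true positives inside the patches."""
    weights = np.where(patch_mask, 1.0, suppress)
    out = saliency * weights
    return out / out.max() if out.max() > 0 else out
```

The patch detector supplies `patch_mask`; any baseline saliency model supplies `saliency`, so the prior plugs into existing methods without retraining them.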

  19. Life course models: improving interpretation by consideration of total effects. (United States)

    Green, Michael J; Popham, Frank


    Life course epidemiology has used models of accumulation and critical or sensitive periods to examine the importance of exposure timing in disease aetiology. These models are usually used to describe the direct effects of exposures over the life course. In comparison with consideration of direct effects only, we show how consideration of total effects improves interpretation of these models, giving clearer notions of when it will be most effective to intervene. We show how life course variation in the total effects depends on the magnitude of the direct effects and the stability of the exposure. We discuss interpretation in terms of total, direct and indirect effects and highlight the causal assumptions required for conclusions as to the most effective timing of interventions. © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association.

  20. Improvements and new features in the IRI-2000 model

    International Nuclear Information System (INIS)

    Bilitza, D.


    This paper describes the changes that were implemented in the new version of the COSPAR/URSI International Reference Ionosphere (IRI-2000). These changes are: (1) two new options for the electron density in the D-region, (2) a better functional description of the electron density in the E-F merging region, (3) inclusion of the F1 layer occurrence probability as a new parameter, (4) a new model for the bottomside parameters B0 and B1 that greatly improves the representation at low and equatorial latitudes during high solar activities, (5) inclusion of a model for foF2 storm-time updating, (6) a new option for the electron temperature in the topside ionosphere, and (7) inclusion of a model for the equatorial F region ion drift. The main purpose of this paper is to provide the IRI users with examples of the effects of these changes. (author)

  1. Evaluation of remifentanil sevoflurane response surface models in patients emerging from anesthesia: Model improvement using effect-site sevoflurane concentrations (United States)

    Johnson, Ken B.; Syroid, Noah D.; Gupta, Dhanesh K.; Manyam, Sandeep C.; Pace, Nathan L.; LaPierre, Cris D.; Egan, Talmage D.; White, Julia L.; Tyler, Diane; Westenskow, Dwayne R.


    Introduction We previously reported models that characterized the synergistic interaction between remifentanil and sevoflurane in blunting responses to verbal and painful stimuli. This preliminary study evaluated the ability of these models to predict a return of responsiveness (ROR) during emergence from anesthesia and a response to tibial pressure when patients required analgesics in the recovery room. We hypothesized that model predictions would be consistent with observed responses. We also hypothesized that under non-steady-state conditions, accounting for the lag time between effect-site (Ce) and end-tidal (ET) sevoflurane concentrations would improve predictions. Methods Twenty patients received a sevoflurane, remifentanil, and fentanyl anesthetic. Two model predictions of responsiveness were recorded at emergence: an ET-based and a Ce-based prediction. Similarly, two predictions of a response to noxious stimuli were recorded when patients first required analgesics in the recovery room. Model predictions were compared to observations with graphical and temporal analyses. Results While patients were anesthetized, model predictions indicated a high likelihood that patients would be unresponsive (≥ 99%). However, following termination of the anesthetic, the models exhibited a wide range of predictions at emergence (1% to 97%). Although wide, the Ce-based predictions of responsiveness were better distributed over a percentage ranking of observations than the ET-based predictions. For the ET-based model, 45% of the patients awoke within 2 minutes of the 50% model-predicted probability of unresponsiveness; 65% awoke within 4 minutes. For the Ce-based model, 45% of the patients awoke within 1 minute of the 50% model-predicted probability of unresponsiveness; 85% awoke within 3.2 minutes. Predictions of a response to a painful stimulus in the recovery room were similar for the Ce-based and ET-based models. 
Discussion Results confirmed in part our study hypothesis; accounting

  2. Avian-Pathogenic Escherichia coli Strains Are Similar to Neonatal Meningitis E. coli Strains and Are Able To Cause Meningitis in the Rat Model of Human Disease (United States)

    Tivendale, Kelly A.; Logue, Catherine M.; Kariyawasam, Subhashinie; Jordan, Dianna; Hussein, Ashraf; Li, Ganwu; Wannemuehler, Yvonne; Nolan, Lisa K.


    Escherichia coli strains causing avian colibacillosis and human neonatal meningitis, urinary tract infections, and septicemia are collectively known as extraintestinal pathogenic E. coli (ExPEC). Characterization of ExPEC strains using various typing techniques has shown that they harbor many similarities, despite their isolation from different host species, leading to the hypothesis that ExPEC may have zoonotic potential. The present study examined a subset of ExPEC strains: neonatal meningitis E. coli (NMEC) strains and avian-pathogenic E. coli (APEC) strains belonging to the O18 serogroup. The study found that they were not easily differentiated on the basis of multilocus sequence typing, phylogenetic typing, or carriage of large virulence plasmids. Among the APEC strains examined, one strain was found to be an outlier, based on the results of these typing methods, and demonstrated reduced virulence in murine and avian pathogenicity models. Some of the APEC strains tested in a rat model of human neonatal meningitis were able to cause meningitis, demonstrating APEC's ability to cause disease in mammals, lending support to the hypothesis that APEC strains have zoonotic potential. In addition, some NMEC strains were able to cause avian colisepticemia, providing further support for this hypothesis. However, not all of the NMEC and APEC strains tested were able to cause disease in avian and murine hosts, despite the apparent similarities in their known virulence attributes. Thus, it appears that a subset of NMEC and APEC strains harbors zoonotic potential, while other strains do not, suggesting that unknown mechanisms underlie host specificity in some ExPEC strains. PMID:20515929

  3. Crop Model Improvement Reduces the Uncertainty of the Response to Temperature of Multi-Model Ensembles (United States)

    Maiorano, Andrea; Martre, Pierre; Asseng, Senthold; Ewert, Frank; Mueller, Christoph; Roetter, Reimund P.; Ruane, Alex C.; Semenov, Mikhail A.; Wallach, Daniel; Wang, Enli


    To improve climate change impact estimates and to quantify their uncertainty, multi-model ensembles (MMEs) have been suggested. Model improvements can improve the accuracy of simulations and reduce the uncertainty of climate change impact assessments. Furthermore, they can reduce the number of models needed in a MME. Herein, 15 wheat growth models of a larger MME were improved through re-parameterization and/or incorporating or modifying heat stress effects on phenology, leaf growth and senescence, biomass growth, and grain number and size using detailed field experimental data from the USDA Hot Serial Cereal experiment (calibration data set). Simulation results from before and after model improvement were then evaluated with independent field experiments from a CIMMYT worldwide field trial network (evaluation data set). Model improvements decreased the variation (10th to 90th model ensemble percentile range) of grain yields simulated by the MME on average by 39% in the calibration data set and by 26% in the independent evaluation data set for crops grown in mean seasonal temperatures greater than 24 C. MME mean squared error in simulating grain yield decreased by 37%. A reduction in MME uncertainty range by 27% increased MME prediction skills by 47%. Results suggest that the mean level of variation observed in field experiments and used as a benchmark can be reached with half the number of models in the MME. Improving crop models is therefore important to increase the certainty of model-based impact assessments and allow more practical, i.e. smaller MMEs to be used effectively.
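The uncertainty metric described above (the 10th-to-90th percentile range of ensemble-simulated grain yields) and its fractional reduction can be computed as follows (illustrative numbers, not the study's data):

```python
import numpy as np

def ensemble_spread(yields):
    """Uncertainty of a multi-model ensemble, measured as the
    10th-to-90th percentile range of simulated grain yields."""
    return np.percentile(yields, 90) - np.percentile(yields, 10)

def spread_reduction(before, after):
    """Fractional reduction in ensemble spread after model improvement."""
    return 1.0 - ensemble_spread(after) / ensemble_spread(before)

before = np.linspace(2.0, 8.0, 15)   # wide pre-improvement ensemble (made up)
after = np.linspace(4.0, 6.0, 15)    # narrower post-improvement ensemble
```

The study's reported 39% (calibration) and 26% (evaluation) figures are this quantity averaged over the respective data sets.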

  4. Thermal Modeling Method Improvements for SAGE III on ISS (United States)

    Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; McLeod, Shawn


    The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle. A detailed thermal model of the SAGE III payload, which consists of multiple subsystems, has been developed in Thermal Desktop (TD). Many innovative analysis methods have been used in developing this model; these will be described in the paper. This paper builds on a paper presented at TFAWS 2013, which described some of the initial developments of efficient methods for SAGE III. The current paper describes additional improvements that have been made since that time. To expedite the correlation of the model to thermal vacuum (TVAC) testing, the chambers and GSE for both TVAC chambers at Langley used to test the payload were incorporated within the thermal model. This allowed the runs of TVAC predictions and correlations to be run within the flight model, thus eliminating the need for separate models for TVAC. In one TVAC test, radiant lamps were used which necessitated shooting rays from the lamps, and running in both solar and IR wavebands. A new Dragon model was incorporated which entailed a change in orientation; that change was made using an assembly, so that any potential additional new Dragon orbits could be added in the future without modification of the model. The Earth orbit parameters such as albedo and Earth infrared flux were incorporated as time-varying values that change over the course of the orbit; despite being required in one of the ISS documents, this had not been done before by any previous payload. All parameters such as initial temperature, heater voltage, and location of the payload are defined based on the case definition. 
For one component, testing was performed in both air and vacuum; incorporating the air convection in a submodel that was

  5. Similarity transformations of MAPs

    Directory of Open Access Journals (Sweden)

    Andersen Allan T.


    Full Text Available We introduce the notion of similar Markovian Arrival Processes (MAPs) and show that the event stationary point processes related to two similar MAPs are stochastically equivalent. This holds true for the time stationary point processes too. We show that several well known stochastic equivalences, such as that between the H2 renewal process and the Interrupted Poisson Process (IPP), can be expressed by the similarity transformations of MAPs. In the appendix the valid region of similarity transformations for two-state MAPs is characterized.
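In the standard \((D_0, D_1)\) parameterization of a MAP, the similarity transformation can be sketched as follows (notation assumed from the usual MAP literature; the exact admissibility conditions on \(T\) are what the paper characterizes):

\[
D_0' = T^{-1} D_0 T, \qquad D_1' = T^{-1} D_1 T,
\]

for an invertible matrix \(T\) such that \((D_0', D_1')\) is again a valid MAP. Two MAPs related in this way generate stochastically equivalent event-stationary (and time-stationary) point processes, which is the equivalence stated above.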

  6. Can explicit convection improve modeled dust in summertime West Africa? (United States)

    Roberts, A. J.; Woodage, M. J.; Marsham, J. H.; Highwood, E. J.; Ryder, C. L.; McGinty, W.; Crook, J. A.


    Global and regional models have large errors in modeled dust fields over West Africa. Parameterized moist convection in models gives a very poor representation of haboobs (an important dust uplift mechanism). This is true for climate models, numerical weather prediction and even reanalyses. Recent work on near-surface winds from the Fennec and AMMA field campaigns has shown that analyzed winds (ERA-Interim) require improvement to represent key dust-lifting mechanisms. Specifically, there is: (1) a deficit in the occurrence of rare high-wind-speed events, (2) an under-representation of diurnal and seasonal variability, and (3) poor correlation between observed and analyzed winds during the West African Monsoon season, even in regions far from the northern edge of the monsoon flow. Here, we test the hypothesis that explicit convection improves haboob winds and reduces errors in modeled dust fields. This study compares satellite AOD retrievals and surface wind observations with a suite of five-month, large-domain simulations with prognostic dust over the Sahel and Sahara. The results show that, despite varying both grid spacing and the representation of moist convection, there are only minor changes in dust metrics. In all simulations there is an AOD deficit over the observed central Saharan dust maximum and a high bias in AOD along the west coast; both features are consistent with climate models (CMIP5). Cold pools are present in simulations with explicit convection, leading to an improved diurnal cycle in dust-generating winds. However, this does not change the AOD field significantly because: (1) the evening haboob peak is offset by a reduction in strength of the nocturnal low-level jet, (2) simulated haboobs are weaker and less frequent than observed, especially close to the observed summertime Saharan dust maximum, and (3) Sahelian cold pools (which raise dust in reality) do not raise dust in the simulations due to a seasonally constant bare soil fraction and soil

  7. The role of cultural models in local perceptions of SFM--differences and similarities of interest groups from three boreal regions. (United States)

    Berninger, Kati; Kneeshaw, Daniel; Messier, Christian


    Differences in the way local and regional interest groups perceive Sustainable Forest Management in regions with different forest use histories were studied using Southeastern Finland, the Mauricie in Quebec and Central Labrador in Canada as examples of regions with high, medium and low importance of commercial forestry. We present a conceptual model illustrating the cyclic interaction between the forest, cultural models about forests and forest management. We hypothesized that peoples' perceptions would be influenced by their cultural models about forests and would thus vary amongst regions with different forest use histories and among different interest groups. The weightings of the environmental, economic and social components of sustainability as well as themes important for each of the interest groups were elicited using individual listing of SFM indicators and group work aimed at developing a consensus opinion on a common indicator list. In Southeastern Finland the views of the different groups were polarized along the environment-economy axis, whereas in Central Labrador all groups were environmentally oriented. The social dimension was low overall except among the Metis and the Innu in Labrador. Only environmental groups were similar in all three research regions, the largest differences between regions were found among the forestry professionals in their weightings concerning economy and nature. As the importance of commercial forestry increased, a greater importance of economic issues was expressed whereas the opposite trend was observed for issues regarding nature. Also inter-group differences grew as the importance of commercial forestry increased in the region. Forest management and forest use can be seen as factors strongly influencing peoples' cultural models on forests.

  8. Estimating the surface layer refractive index structure constant over snow and sea ice using Monin-Obukhov similarity theory with a mesoscale atmospheric model. (United States)

    Qing, Chun; Wu, Xiaoqing; Huang, Honghua; Tian, Qiguo; Zhu, Wenyue; Rao, Ruizhong; Li, Xuebin


    Since systematic direct measurements of the refractive index structure constant (Cn2) are not available for many climates and seasons, an indirect approach is developed in which Cn2 is estimated from mesoscale atmospheric model outputs. In previous work, we presented an approach in which a state-of-the-art mesoscale atmospheric model, the Weather Research and Forecasting (WRF) model, coupled with Monin-Obukhov Similarity (MOS) theory, is used to estimate surface layer Cn2 over the ocean. This paper focuses on surface layer Cn2 over snow and sea ice, extending the WRF-based estimation of surface layer Cn2 to ground-based optical application requirements. The approach is validated against 9 days of Cn2 data from a field campaign of the 30th Chinese National Antarctic Research Expedition (CHINARE). We employ several statistical operators to assess how the approach performs, and additionally analyze its performance using contingency tables, which provide key information complementary to the statistical operators. Together these methods make our analysis more robust and confirm the good performance of the approach. Reasonably good agreement in trend and magnitude is found between estimated values and measurements overall, and the estimated Cn2 values are even better than those obtained by this approach over the ocean surface layer. The encouraging performance of this approach has concrete practical implications for ground-based optical applications over snow and sea ice.
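
    For illustration, the final step of such an approach can be sketched in a few lines: a MOS scaling gives the temperature structure parameter CT2 from surface-layer scales, and the Gladstone relation converts it to Cn2. The stability-function form and constants below are common empirical choices and are assumptions here, not necessarily those used in the paper:

```python
def ct2_mos(t_star, z, zeta):
    # structure parameter of temperature from MOS scaling:
    # CT2 = T*^2 * z^(-2/3) * g(z/L); g() is an empirical
    # stability function (form and constants are assumptions here)
    if zeta < 0:   # unstable stratification
        g = 4.9 * (1.0 - 7.0 * zeta) ** (-2.0 / 3.0)
    else:          # stable stratification
        g = 4.9 * (1.0 + 2.75 * zeta)
    return t_star ** 2 * z ** (-2.0 / 3.0) * g

def cn2_from_ct2(ct2, pressure_hpa, temp_k):
    # Gladstone relation at optical wavelengths:
    # Cn2 = (79e-6 * P / T^2)^2 * CT2   (P in hPa, T in K)
    return (79e-6 * pressure_hpa / temp_k ** 2) ** 2 * ct2

# illustrative surface-layer values over sea ice (not CHINARE data)
ct2 = ct2_mos(t_star=0.1, z=2.0, zeta=-0.5)
cn2 = cn2_from_ct2(ct2, pressure_hpa=990.0, temp_k=255.0)
print(f"{cn2:.2e}")   # of order 1e-14 m^(-2/3)
```

    In the WRF-coupled approach, t_star, zeta, pressure and temperature would come from the model's surface-layer diagnostics rather than being specified by hand.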

  9. Improving patient handover between teams using a business improvement model: PDSA cycle. (United States)

    Luther, Vishal; Hammersley, Daniel; Chekairi, Ahmed


    Medical admission units are continuously under pressure to move patients off the unit to outlying medical wards and allow for new admissions. In a typical district general hospital, doctors working in these medical wards reported that, on average, three patients each week arrived from the medical admission unit before any handover was received, and a further two patients arrived without any handover at all. A quality improvement project was therefore conducted using a 'Plan, Do, Study, Act' (PDSA) cycle model for improvement to address this issue. P - Plan: as there was no framework to support doctors with handover, a series of standard handover procedures were designed. D - Do: the procedures were disseminated to all staff, and championed by key stakeholders, including the clinical director and matron of the medical admission unit. S - Study: measurements were repeated 3 months later and showed no change in the primary end points. A - Act: the post-take ward round sheet was redesigned, creating a checkbox for a medical admission unit doctor to document that handover had occurred. Nursing staff were prohibited from moving the patient off the ward until this had been completed. This later evolved into a separate handover sheet. Six months later, a repeat study revealed that only one patient each week was arriving before or without a verbal handover. Using a 'Plan, Do, Study, Act' business improvement tool helped to improve patient care.

  10. Improved Dynamic Modeling of the Cascade Distillation Subsystem and Integration with Models of Other Water Recovery Subsystems (United States)

    Perry, Bruce; Anderson, Molly


    The Cascade Distillation Subsystem (CDS) is a rotary multistage distiller being developed to serve as the primary processor for wastewater recovery during long-duration space missions. The CDS could be integrated with a system similar to the International Space Station (ISS) Water Processor Assembly (WPA) to form a complete Water Recovery System (WRS) for future missions. Independent chemical process simulations with varying levels of detail have previously been developed using Aspen Custom Modeler (ACM) to aid in the analysis of the CDS and several WPA components. The existing CDS simulation could not model behavior during thermal startup and lacked detailed analysis of several key internal processes, including heat transfer between stages. The first part of this paper describes modifications to the ACM model of the CDS that improve its capabilities and the accuracy of its predictions. Notably, the modified version of the model can accurately predict behavior during thermal startup for both NaCl solution and pretreated urine feeds. The model is used to predict how changing operating parameters and design features of the CDS affects its performance, and conclusions from these predictions are discussed. The second part of this paper describes the integration of the modified CDS model and the existing WPA component models into a single WRS model. The integrated model is used to demonstrate the effects that changes to one component can have on the dynamic behavior of the system as a whole.

  11. Policy modeling for energy efficiency improvement in US industry

    International Nuclear Information System (INIS)

    Worrell, Ernst; Price, Lynn; Ruth, Michael


    We are at the beginning of a process of evaluating and modeling the contribution of policies to improving energy efficiency. Three recent policy studies that attempt to assess the impact of energy efficiency policies in the United States are reviewed. The studies represent an important step in the analysis of climate change mitigation strategies. All studies model the estimated policy impact, rather than the policy itself. Often the policy impacts are based on assumptions, as the effects of a policy are not certain. Most models incorporate only economic (or price) tools, which recent studies have shown to be insufficient to estimate the impacts, costs and benefits of mitigation strategies. The reviewed studies are a first effort to capture the effects of non-price policies. They contribute to a better understanding of the role of policies in improving energy efficiency and mitigating climate change. All policy scenarios result in substantial energy savings compared to the baseline scenario used, as well as substantial net benefits to the U.S. economy.

  12. Improved intra-species collision models for PIC simulations

    International Nuclear Information System (INIS)

    Jones, M.E.; Lemons, D.S.; Winske, D.


    In recent years, the authors have investigated methods to improve the effectiveness of modeling collisional processes in particle-in-cell codes. Through the use of generalized collision models, plasma dynamics can be followed both in the regime of nearly collisionless plasmas as well as in the hydrodynamic limit of collisional plasmas. They have developed a collision-field method to treat both the case of collisions between unlike plasma species (inter-species collisions), through the use of a deterministic, grid-based force, and between particles of the same species (intra-species collisions), through the use of a Langevin equation. While the approach used for inter-species collisions is noise-free in that the collision experienced by a particle does not require any random numbers, such random numbers are used for intra-species collisions. This gives rise to a stochastic cooling effect inherent in the Langevin approach. In this paper, the authors concentrate on intra-species collisions and describe how the accuracy of the model can be improved by appropriate corrections to velocity and spatial moments
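
    A minimal sketch of such a Langevin collision step with a moment correction follows; the discretization below is a common textbook form, not the authors' exact scheme. Each step relaxes particle velocities toward the drift velocity and adds matched diffusion, then rescales so that the finite sample of random numbers does not alter the mean or the temperature (removing the stochastic cooling/heating the abstract refers to):

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_collide(v, nu, dt, rng):
    # one intra-species collision step: exact Ornstein-Uhlenbeck
    # drag toward the drift velocity u plus matched diffusion
    u = v.mean()
    T = v.var()                # temperature in velocity units
    c = np.exp(-nu * dt)       # exact drag factor over dt
    v_new = u + c * (v - u) + np.sqrt(T * (1.0 - c**2)) * rng.standard_normal(v.size)
    # moment correction: restore the pre-step mean and variance,
    # so momentum and energy are conserved despite the random kicks
    v_new += u - v_new.mean()
    v_new = u + (v_new - u) * np.sqrt(T / v_new.var())
    return v_new

v = rng.normal(0.0, 1.0, 10_000)
v2 = langevin_collide(v, nu=0.5, dt=0.1, rng=rng)
```

    The correction leaves the first and second velocity moments unchanged to machine precision while the collisions still relax the distribution shape, which is the spirit of the corrections described above.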

  13. An improved LTE model of a high pressure sulfur discharge

    Energy Technology Data Exchange (ETDEWEB)

    Johnston, C W; Heijden, H W P van der; Hartgers, A; Garloff, K; Dijk, J van; Mullen, J J A M van der [Department of Applied Physics, Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven (Netherlands)


    An existing LTE model (Johnston C W et al 2002 J. Phys. D: Appl. Phys. 35 342) of a high pressure sulfur discharge is improved upon by a more accurate and complete treatment of each term in the energy balance. The simulation program PLASIMO (Janssen G M et al 1999 Plasma Sources Sci. Technol. 8 1; van Dijk J 2001 Modelling of Plasma Light Sources: an Object-Oriented Approach, PhD Thesis, Eindhoven University of Technology, The Netherlands, ISBN 90-386-1819-0), an integrated environment for constructing and executing plasma models, has been used to define and solve all aspects of the model. The electric field is treated as dc, and the temperature-dependent nature of species interactions is incorporated in the determination of transport coefficients. In addition to the main radiative transition, B³Σu⁻ → X³Σg⁻, several others in S₂ are included: B″³Πu → X³Σg⁻, B′³Πg → {A³Σu⁺, A′³Δu} and e¹Πg → c¹Σu⁻. The S₃ molecule is also included in the composition as an absorbing particle. Furthermore, radiation production is treated quantum mechanically. The principal improvement over the previous work is that both the position of the spectral maximum and the pressure shift are quantitatively described by the current model. Both are chiefly due to the presence of S₃.

  14. Similarity Measure of Graphs

    Directory of Open Access Journals (Sweden)

    Amine Labriji


    Full Text Available The topic of identifying the similarity of graphs is a highly active research field in the semantic Web, artificial intelligence, shape recognition and information retrieval. One of the fundamental problems of graph databases is finding the graphs most similar to a query graph. Existing approaches dealing with this problem are usually based on the nodes and arcs of the two graphs, disregarding hierarchical semantic links between concepts. For instance, a common parent concept is not counted toward the similarity of two graphs that share no concepts directly, whether the measure is based on the union of the two graphs, on the maximum common subgraph (MCS), or on the graph edit distance. This leads to inadequate results in the context of information retrieval. To overcome this problem, we propose a new measure of similarity between graphs, based on the similarity measure of Wu and Palmer. We show that this new measure satisfies the properties of a similarity measure and apply it to examples. The results show that our measure runs faster than existing approaches. In addition, comparing the relevance of the similarity values obtained shows that this new graph measure is advantageous and offers a contribution to solving the problem mentioned above.
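
    The Wu and Palmer measure that the proposal builds on scores two concepts by the depth of their lowest common subsumer in a hierarchy: sim(a, b) = 2·depth(lcs) / (depth(a) + depth(b)). A self-contained sketch over a toy hierarchy (all concept names hypothetical, root depth taken as 1):

```python
def depth(node, parent):
    # depth counted from the root (root has depth 1)
    d = 1
    while node in parent:
        node = parent[node]
        d += 1
    return d

def ancestors(node, parent):
    # node and its chain of ancestors, lowest first
    chain = [node]
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain

def wu_palmer(a, b, parent):
    # sim(a, b) = 2 * depth(lcs) / (depth(a) + depth(b))
    anc_a = ancestors(a, parent)
    anc_b = set(ancestors(b, parent))
    lcs = next(n for n in anc_a if n in anc_b)   # lowest common subsumer
    return 2.0 * depth(lcs, parent) / (depth(a, parent) + depth(b, parent))

# toy concept hierarchy (child -> parent), for illustration only
parent = {"cat": "mammal", "dog": "mammal",
          "mammal": "animal", "trout": "fish", "fish": "animal"}
print(wu_palmer("cat", "dog", parent))    # ≈ 0.667 (lcs = mammal)
```

    Extending this node-level score to whole graphs is the contribution of the proposed measure; the sketch only shows the Wu-Palmer building block.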

  15. New Similarity Functions

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Kwasnicka, Halina


    In data science, there are important parameters that affect the accuracy of the algorithms used. Some of these parameters are: the type of data objects, the membership assignments, and distance or similarity functions. This paper discusses similarity functions as fundamental elements in membership...

  16. Similarity after Goodman

    NARCIS (Netherlands)

    Decock, L.B.; Douven, I.


    In a famous critique, Goodman dismissed similarity as a slippery and both philosophically and scientifically useless notion. We revisit his critique in the light of important recent work on similarity in psychology and cognitive science. Specifically, we use Tversky's influential set-theoretic

  17. Judgments of brand similarity

    NARCIS (Netherlands)

    Bijmolt, THA; Wedel, M; Pieters, RGM; DeSarbo, WS

    This paper provides empirical insight into the way consumers make pairwise similarity judgments between brands, and how familiarity with the brands, serial position of the pair in a sequence, and the presentation format affect these judgments. Within the similarity judgment process both the

  18. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement (United States)

    Wu, Alex; Song, Youhong; van Oosterom, Erik J.; Hammer, Graeme L.


    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation. PMID:27790232
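
    As a concrete instance of the empirical light-response modeling mentioned above, the non-rectangular hyperbola is a common leaf-level form: θA² − (φI + Amax)A + φI·Amax = 0, with the lower root taken. The parameter values below are illustrative assumptions, not values from the review:

```python
import math

def leaf_photosynthesis(I, phi=0.05, a_max=25.0, theta=0.7):
    # non-rectangular hyperbola light-response curve:
    # theta*A^2 - (phi*I + a_max)*A + phi*I*a_max = 0, lower root
    # I: incident light (PPFD), phi: initial slope (quantum yield),
    # a_max: light-saturated rate, theta: curvature
    b = phi * I + a_max
    return (b - math.sqrt(b * b - 4.0 * theta * phi * I * a_max)) / (2.0 * theta)

# assimilation rises with light and saturates toward a_max
for ppfd in (0.0, 500.0, 1500.0):
    print(round(leaf_photosynthesis(ppfd), 2))
```

    Biochemical models such as those reviewed here replace this empirical curve with enzyme-kinetic terms, which is what gives them the clearer mechanistic links the abstract describes.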

  19. An improved finite element model for craniofacial surgery simulation. (United States)

    Wang, Shengzheng; Yang, Jie


    A novel approach is proposed for simulating the deformation of facial soft tissues in craniofacial surgery simulation. A nonlinear finite mixed-element model (NFM-EM) based on solid-shell elements and the Lagrange principle of virtual work is proposed, which addresses the heterogeneity in geometry and material properties found in the soft tissues of the face. Moreover, after investigation of the strain-potential models, the biomechanical characteristics of skin, muscles and fat are modeled with the most suitable material properties. In addition, an improved contact algorithm is used to compute the boundary conditions of the soft tissue model. Quantitative validation and comparison with other models demonstrated the effectiveness of the approach in simulating complex soft tissues: the average absolute error stays below 0.5 mm and the 95th percentile of the distance map is less than 1.5 mm. NFM-EM improves the accuracy and effectiveness of soft tissue deformation, and the effective contact algorithm bridges bone-related planning and prediction of the target face.

  20. Improvement of airfoil trailing edge bluntness noise model

    Directory of Open Access Journals (Sweden)

    Wei Jun Zhu


    Full Text Available In this article, airfoil trailing edge bluntness noise is investigated using both computational aero-acoustics and a semi-empirical approach. For engineering purposes, the most commonly used prediction tools for trailing edge noise are based on semi-empirical approaches, for example the airfoil noise prediction model developed by Brooks, Pope, and Marcolini (NASA Reference Publication 1218, 1989). A previous study found that the Brooks, Pope, and Marcolini model tends to over-predict noise at high frequencies, and that this is caused by the model's inability to accurately predict noise from blunt trailing edges. For a more physical understanding of bluntness noise generation, this study also uses an advanced, in-house-developed, high-order computational aero-acoustic technique to investigate the details associated with trailing edge bluntness noise. The results from the numerical model form the basis for an improved Brooks, Pope, and Marcolini trailing edge bluntness noise model.

  1. Cytomegalovirus Antivirals and Development of Improved Animal Models (United States)

    McGregor, Alistair; Choi, K. Yeon


    Introduction: Cytomegalovirus (CMV) is a ubiquitous pathogen that establishes a lifelong asymptomatic infection in healthy individuals. Infection of immunosuppressed individuals causes serious illness: transplant and AIDS patients are highly susceptible to CMV, leading to life-threatening end-organ disease. Another vulnerable population is the developing fetus in utero, where congenital infection can result in surviving newborns with long-term developmental problems. There is no licensed vaccine for CMV, and current antivirals suffer from complications associated with prolonged treatment, including drug toxicity and the emergence of resistant strains. There is an obvious need for new antivirals. Candidate intervention strategies are tested in controlled pre-clinical animal models, but the species specificity of HCMV precludes the direct study of the virus in an animal model. Areas covered: This review explores the current status of CMV antivirals and the development of new drugs, including the use of animal models and the development of new, improved models such as humanized animal CMV and bioluminescent imaging of virus in animals in real time. Expert opinion: Various new CMV antivirals are in development, some with a greater spectrum of activity against other viruses. Although the greatest need is in the setting of transplant patients, there remains an unmet need for a safe antiviral strategy against congenital CMV. This is especially important since an effective CMV vaccine remains an elusive goal. In this capacity, greater emphasis should be placed on suitable pre-clinical animal models and on greater collaboration between industry and academia. PMID:21883024

  2. Implications of improved Higgs mass calculations for supersymmetric models

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, O. [Imperial College, London (United Kingdom). High Energy Physics Group; Dolan, M.J. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Theory Group; Ellis, J. [King's College, London (United Kingdom). Theoretical Particle Physics and Cosmology Group; and others


    We discuss the allowed parameter spaces of supersymmetric scenarios in light of improved Higgs mass predictions provided by FeynHiggs 2.10.0. The Higgs mass predictions combine Feynman-diagrammatic results with a resummation of leading and subleading logarithmic corrections from the stop/top sector, which yield a significant improvement in the region of large stop masses. Scans in the pMSSM parameter space show that, for given values of the soft supersymmetry-breaking parameters, the new logarithmic contributions beyond the two-loop order implemented in FeynHiggs tend to give larger values of the light CP-even Higgs mass, M{sub h}, in the region of large stop masses than previous predictions that were based on a fixed-order Feynman-diagrammatic result, though the differences are generally consistent with the previous estimates of theoretical uncertainties. We re-analyze the parameter spaces of the CMSSM, NUHM1 and NUHM2, taking into account also the constraints from CMS and LHCb measurements of BR(B{sub s}→μ{sup +}μ{sup -}) and ATLAS searches for missing-E{sub T} events using 20/fb of LHC data at 8 TeV. Within the CMSSM, the Higgs mass constraint disfavours tan β ≲ 10, though not in the NUHM1 or NUHM2.

  3. Improving Permafrost Hydrology Prediction Through Data-Model Integration (United States)

    Wilson, C. J.; Andresen, C. G.; Atchley, A. L.; Bolton, W. R.; Busey, R.; Coon, E.; Charsley-Groffman, L.


    The CMIP5 Earth System Models were unable to adequately predict the fate of the 16 GT of permafrost carbon in a warming climate due to poor representation of Arctic ecosystem processes. The DOE Office of Science Next Generation Ecosystem Experiment (NGEE-Arctic) project aims to reduce uncertainty in the Arctic carbon cycle and its impact on the Earth's climate system through improved representation of the coupled physical, chemical and biological processes that drive how much buried carbon will be converted to CO2 and CH4, how fast this will happen, which form will dominate, and the degree to which increased plant productivity will offset increased soil carbon emissions. These processes fundamentally depend on the permafrost thaw rate and its influence on surface and subsurface hydrology through thermal erosion, land subsidence and changes to groundwater flow pathways as soil, bedrock and alluvial pore ice and massive ground ice melt. LANL and its NGEE colleagues are co-developing data and models to better understand controls on permafrost degradation and to improve prediction of the evolution of permafrost and its impact on Arctic hydrology. The LANL Advanced Terrestrial Simulator (ATS) was built using a state-of-the-art HPC software framework to enable the first fully coupled 3-dimensional surface-subsurface thermal-hydrology and land surface deformation simulations of the evolution of the physical Arctic environment. Here we show how field data, including hydrology, snow, vegetation, geochemistry and soil properties, are informing the development and application of the ATS to improve understanding of controls on permafrost stability and permafrost hydrology. The ATS is being used to inform parameterizations of complex coupled physical, ecological and biogeochemical processes for implementation in the DOE ACME land model, to better predict the role of changing Arctic hydrology in the global climate system. LA-UR-17-26566.

  4. Improving statistical reasoning theoretical models and practical implications

    CERN Document Server

    Sedlmeier, Peter


    This book focuses on how statistical reasoning works and on training programs that can exploit people's natural cognitive capabilities to improve their statistical reasoning. Training programs that take into account findings from evolutionary psychology and instructional theory are shown to have substantially larger effects, and effects that are more stable over time, than previous training regimens. The theoretical implications are traced in a neural network model of human performance on statistical reasoning problems. This book appeals to judgment and decision making researchers and other cognitive scientists, as well as to teachers of statistics and probabilistic reasoning.

  5. The semantic similarity ensemble

    Directory of Open Access Journals (Sweden)

    Andrea Ballatore


    Full Text Available Computational measures of semantic similarity between geographic terms provide valuable support across geographic information retrieval, data mining, and information integration. To date, a wide variety of approaches to geo-semantic similarity have been devised. A judgment of similarity is not intrinsically right or wrong, but obtains a certain degree of cognitive plausibility, depending on how closely it mimics human behavior. Thus selecting the most appropriate measure for a specific task is a significant challenge. To address this issue, we make an analogy between computational similarity measures and soliciting domain expert opinions, which incorporate a subjective set of beliefs, perceptions, hypotheses, and epistemic biases. Following this analogy, we define the semantic similarity ensemble (SSE as a composition of different similarity measures, acting as a panel of experts having to reach a decision on the semantic similarity of a set of geographic terms. The approach is evaluated in comparison to human judgments, and results indicate that an SSE performs better than the average of its parts. Although the best member tends to outperform the ensemble, all ensembles outperform the average performance of each ensemble's member. Hence, in contexts where the best measure is unknown, the ensemble provides a more cognitively plausible approach.
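
    The core combination step can be sketched in a few lines. The component measures below are hypothetical stand-ins for the geo-semantic measures considered in the paper, and plain averaging is only one possible panel rule:

```python
from statistics import mean

# hypothetical component measures; each maps a term pair to [0, 1]
def jaccard_sim(a, b):
    # word-overlap similarity of the two terms
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def length_sim(a, b):
    # crude surface-form similarity based on string length
    return min(len(a), len(b)) / max(len(a), len(b))

def ensemble_sim(a, b, measures):
    # the SSE idea: treat each measure as one "expert" on the
    # panel and combine their judgments (plain averaging here)
    return mean(m(a, b) for m in measures)

experts = [jaccard_sim, length_sim]
print(ensemble_sim("mountain lake", "mountain river", experts))
```

    The finding reported above (that the ensemble beats the average of its members, though not always the single best member) is about cognitive plausibility against human judgments, which this sketch does not attempt to reproduce.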

  6. Improved Model Calibration From Genetically Adaptive Multi-Method Search (United States)

    Vrugt, J. A.; Robinson, B. A.


    Evolutionary optimization is a subject of intense interest in many fields of study, including computational chemistry, biology, bio-informatics, economics, computational science, geophysics and environmental science. The goal is to determine values for model parameters or state variables that provide the best possible solution to a predefined cost or objective function, or a set of optimal trade-off values in the case of two or more conflicting objectives. However, locating optimal solutions often turns out to be painstakingly tedious, or even completely beyond current or projected computational capacity. Here we present an innovative concept of genetically adaptive multi-algorithm optimization. Benchmark results show that this new optimization technique is significantly more efficient than current state-of-the-art evolutionary algorithms, approaching a factor of ten improvement for the more complex, higher dimensional optimization problems. Our new algorithm provides new opportunities for solving previously intractable environmental model calibration problems.
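
    The idea of genetically adaptive multi-method search can be caricatured in a few lines: run several search operators side by side and let each one's share of the trials track its recent success. The sketch below is a deliberately simplified single-solution version under that idea, not the authors' population-based algorithm; the two operators and the test function are illustrative assumptions:

```python
import random

random.seed(1)

def sphere(x):
    # simple quadratic test objective
    return sum(xi * xi for xi in x)

# two hypothetical search operators with different step sizes
def small_step(x):
    return [xi + random.gauss(0.0, 0.05) for xi in x]

def large_step(x):
    return [xi + random.gauss(0.0, 0.5) for xi in x]

def adaptive_search(f, x0, iters=2000):
    # allocate trials among operators in proportion to how often
    # each one has improved the best solution so far
    ops, credit = [small_step, large_step], [1.0, 1.0]
    best, fbest = x0, f(x0)
    for _ in range(iters):
        i = random.choices(range(len(ops)), weights=credit)[0]
        cand = ops[i](best)
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
            credit[i] += 1.0          # reward the successful method
    return best, fbest

best, fbest = adaptive_search(sphere, [2.0, -3.0])
print(round(fbest, 4))
```

    In the full method the "operators" are entire evolutionary algorithms breeding a shared population, which is what yields the reported factor-of-ten efficiency gains on harder problems.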

  7. Re-engineering pre-employment check-up systems: a model for improving health services. (United States)

    Rateb, Said Abdel Hakim; El Nouman, Azza Abdel Razek; Rateb, Moshira Abdel Hakim; Asar, Mohamed Naguib; El Amin, Ayman Mohammed; Gad, Saad abdel Aziz; Mohamed, Mohamed Salah Eldin


    The purpose of this paper is to develop a model for improving the health services provided by the pre-employment medical fitness check-up system affiliated with Egypt's Health Insurance Organization (HIO). Operations research, notably system re-engineering, is used in six randomly selected centers, and findings before and after re-engineering are compared. The re-engineering model follows a systems approach, focusing on three areas: structure, process and outcome. The model is based on six main components: electronic booking, standardized check-up processes, protected medical documents, advanced archiving through an electronic content management (ECM) system, infrastructure development, and capacity building. The model originates mainly from customer needs and expectations. The centers' monthly customer flow increased significantly after re-engineering. The mean time spent per customer cycle improved after re-engineering: 18.3 ± 5.5 minutes, compared with 48.8 ± 14.5 minutes before. Appointment delay also decreased significantly, from an average of 18 days to 6.2 days. Both beneficiaries and service providers were significantly more satisfied with the services after re-engineering. The model shows that re-engineering program costs are exceeded by the increased revenue. Re-engineering in this study involved multiple structure and process elements, and the literature review did not reveal similar re-engineering healthcare packages, so each element was compared separately. This model is highly recommended for improving service effectiveness and efficiency. This research is the first in Egypt to apply the re-engineering approach to public health systems. Developing user-friendly models for service improvement is an added value.

  8. Internal Catchment Data for Improved Model Diagnosis and Calibration (United States)

    Goodrich, D. C.; Srinivasan, M.; McMillan, H.; Duncan, M.; Yatheendradas, S.; Wagener, T.; Clark, M.; Martinez, G.; Gupta, H.; Jackson, B.; Schmidt, J.; Woods, R.


    There have been numerous calls for the need to incorporate internal catchment observations for improving distributed catchment models. Recent results from a synthetic study by van Werkhoven et al., (GRL, 2008) imply that the relative worth of internal catchment observations for providing information to improve downstream predictions is limited to a time-varying zone, or cone of influence - that is, different observing points have explanatory power for different parts of the catchment at different times. In their study the spatial extent of this cone of influence is significantly influenced by a number of factors: primarily spatiotemporal precipitation patterns, but also initial conditions and inherent observational and model uncertainties. To explore this concept further, two intensively instrumented experimental catchments, near end members of the hydro-climatic spectrum, with extensive internal observations were selected. The first is the 50 square kilometer Mahurangi Experimental Catchment located on the North Island of New Zealand, with mean annual rainfall and runoff of approximately 1700 and 870 mm, respectively. The second is the 148 square kilometer Walnut Gulch Experimental Watershed located in southeast Arizona, USA, with respective mean annual rainfall and runoff of 325 and 2 mm. Data analysis and stepwise, spatially explicit model calibration were conducted in each of these watersheds. Results from these analyses will be presented in the context of the worth of internal runoff observations. van Werkhoven, K., T. Wagener, P. Reed, and Y. Tang (2008), Rainfall characteristics define the value of streamflow observations for distributed watershed model identification, Geophys. Res. Lett., 35, L11403, doi:10.1029/2008GL034162.
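The "end members of the hydro-climatic spectrum" claim is easy to make concrete with the annual runoff ratios computed from the figures quoted above (a simple illustrative calculation, not part of the study's own analysis):

```python
def runoff_ratio(rainfall_mm: float, runoff_mm: float) -> float:
    """Fraction of mean annual rainfall that leaves the catchment as runoff."""
    return runoff_mm / rainfall_mm

mahurangi = runoff_ratio(1700.0, 870.0)   # humid, North Island, New Zealand
walnut_gulch = runoff_ratio(325.0, 2.0)   # semi-arid, southeast Arizona
print(f"Mahurangi: {mahurangi:.2f}, Walnut Gulch: {walnut_gulch:.4f}")
```

Roughly half of Mahurangi's rainfall becomes runoff, versus well under one percent at Walnut Gulch, which is why conclusions about the worth of internal observations in one regime cannot be assumed to carry over to the other.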

  9. Does NASA SMAP Improve the Accuracy of Power Outage Models? (United States)

    Quiring, S. M.; McRoberts, D. B.; Toy, B.; Alvarado, B.


    Electric power utilities make critical decisions in the days prior to hurricane landfall that are primarily based on the estimated impact to their service area. For example, utilities must determine how many repair crews to request from other utilities, the amount of material and equipment they will need to make repairs, and where in their geographically expansive service area to station crews and materials. Accurate forecasts of the impact of an approaching hurricane within their service area are critical for utilities in balancing the costs and benefits of different levels of resources. The Hurricane Outage Prediction Model (HOPM) is a family of statistical models that utilize predictions of tropical cyclone windspeed and duration of strong winds, along with power system and environmental variables (e.g., soil moisture, long-term precipitation), to forecast the number and location of power outages. This project assesses whether using NASA SMAP soil moisture improves the accuracy of power outage forecasts as compared to using model-derived soil moisture from NLDAS-2. A sensitivity analysis is employed since there have been very few tropical cyclones making landfall in the United States since SMAP was launched. The HOPM is used to predict power outages for 13 historical tropical cyclones, and the model is run twice, once with NLDAS soil moisture and once with SMAP soil moisture. Our results demonstrate that using SMAP soil moisture can have a significant impact on power outage predictions. SMAP has the potential to enhance the accuracy of power outage forecasts. Improved outage forecasts reduce the duration of power outages, which reduces economic losses and accelerates recovery.

  10. Flooding Experiments and Modeling for Improved Reactor Safety

    International Nuclear Information System (INIS)

    Solmos, M.; Hogan, K.J.; Vierow, K.


    Countercurrent two-phase flow and 'flooding' phenomena in light water reactor systems are being investigated experimentally and analytically to improve reactor safety of current and future reactors. The aspects that will be better clarified are the effects of condensation and tube inclination on flooding in large diameter tubes. The current project aims to improve the level of understanding of flooding mechanisms and to develop an analysis model for more accurate evaluations of flooding in the pressurizer surge line of a Pressurized Water Reactor (PWR). Interest in flooding has recently increased because Countercurrent Flow Limitation (CCFL) in the AP600 pressurizer surge line can affect the vessel refill rate following a small break LOCA and because analysis of hypothetical severe accidents with the current flooding models in reactor safety codes shows that these models represent the largest uncertainty in analysis of steam generator tube creep rupture. During a hypothetical station blackout without auxiliary feedwater recovery, should the hot leg become voided, the pressurizer liquid will drain to the hot leg and flooding may occur in the surge line. The flooding model heavily influences the pressurizer emptying rate and the potential for surge line structural failure due to overheating and creep rupture. The air-water test results in vertical tubes are presented in this paper along with a semi-empirical correlation for the onset of flooding. The unique aspects of the study include careful experimentation on large-diameter tubes and an integrated program in which air-water testing provides benchmark knowledge and visualization data from which to conduct steam-water testing.
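The abstract mentions a semi-empirical onset-of-flooding correlation without giving its form. Correlations of this kind are commonly written in the Wallis form, sqrt(j*g) + m·sqrt(j*f) = C, in terms of dimensionless superficial velocities. The sketch below assumes that form; the constants m and C are placeholders, not the values fitted in this study:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def j_star(j, rho_phase, rho_f, rho_g, d):
    """Wallis dimensionless superficial velocity for one phase.

    j         : superficial velocity of the phase, m/s
    rho_phase : density of that phase, kg/m^3
    rho_f/g   : liquid and gas densities, kg/m^3
    d         : tube inner diameter, m
    """
    return j * math.sqrt(rho_phase / (G * d * (rho_f - rho_g)))

def flooding_onset(jg, jf, rho_f, rho_g, d, m=1.0, c=0.725):
    """True if a Wallis-type criterion sqrt(jg*) + m*sqrt(jf*) >= C
    predicts flooding. m and c are illustrative placeholder constants."""
    jg_s = j_star(jg, rho_g, rho_f, rho_g, d)
    jf_s = j_star(jf, rho_f, rho_f, rho_g, d)
    return math.sqrt(jg_s) + m * math.sqrt(jf_s) >= c

# Air-water at ambient conditions in a 76 mm tube (illustrative numbers)
print(flooding_onset(jg=6.0, jf=0.05, rho_f=998.0, rho_g=1.2, d=0.076))
```

Because j* scales with 1/sqrt(d), a given gas velocity is further from the flooding limit in a large-diameter surge line than in the small tubes on which most published constants were fitted, which is one motivation for the large-diameter tests described here.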

  11. Formation of algae growth constitutive relations for improved algae modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Gharagozloo, Patricia E.; Drewry, Jessica Louise.


    This SAND report summarizes research conducted as a part of a two year Laboratory Directed Research and Development (LDRD) project to improve our abilities to model algal cultivation. Algae-based biofuels have generated much excitement due to their potentially large oil yield from relatively small land use and without interfering with the food or water supply. Algae mitigate atmospheric CO2 through metabolism. Efficient production of algal biofuels could reduce dependence on foreign oil by providing a domestic renewable energy source. Important factors controlling algal productivity include temperature, nutrient concentrations, salinity, pH, and the light-to-biomass conversion rate. Computational models allow for inexpensive predictions of algae growth kinetics in these non-ideal conditions for various bioreactor sizes and geometries without the need for multiple expensive measurement setups. However, these models need to be calibrated for each algal strain. In this work, we conduct a parametric study of key marine algae strains and apply the findings to a computational model.
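Growth constitutive relations of the kind described here are often built by multiplying Monod-type limitation factors for light and nutrients, and a temperature response, onto a maximum specific growth rate. A minimal sketch of that form follows; the functional choices and all parameter values are illustrative placeholders, not the calibrated values from this LDRD work:

```python
import math

def specific_growth_rate(mu_max, light, nutrient, temp,
                         k_light=50.0, k_nutrient=0.1, t_opt=25.0, sigma_t=8.0):
    """Multiplicative Monod-style algae growth model (illustrative parameters).

    mu_max   : maximum specific growth rate, 1/day
    light    : irradiance, umol photons/m^2/s
    nutrient : limiting-nutrient concentration, g/m^3
    temp     : culture temperature, deg C
    """
    f_light = light / (k_light + light)                   # Monod light limitation
    f_nutrient = nutrient / (k_nutrient + nutrient)       # Monod nutrient limitation
    f_temp = math.exp(-((temp - t_opt) / sigma_t) ** 2)   # Gaussian temperature response
    return mu_max * f_light * f_nutrient * f_temp

mu = specific_growth_rate(mu_max=1.5, light=200.0, nutrient=0.5, temp=25.0)
print(round(mu, 3))  # → 1.0
```

Calibrating such a model "for each algal strain", as the abstract notes, amounts to fitting mu_max, the half-saturation constants, and the temperature parameters to growth data for that strain.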

  12. Similarity or difference?

    DEFF Research Database (Denmark)

    Villadsen, Anders Ryom


    While the organizational structures and strategies of public organizations have attracted substantial research attention among public management scholars, little research has explored how these organizational core dimensions are interconnected and influenced by pressures for similarity. In this paper I address this topic by exploring the relation between expenditure strategy isomorphism and structure isomorphism in Danish municipalities. Different literatures suggest that organizations face concurrent pressures to be similar to and different from other organizations in their field of action. It is theorized that, to meet this challenge, organizations may substitute increased similarity on one core dimension for increased idiosyncrasy on another, but only after a certain level of isomorphism is reached. Results of quantitative analyses support this theory and show that an inverse U...

  13. Modelling future improvements in the St. Louis River fishery ... (United States)

    The presence of fish consumption advisories has a negative impact on fishing. In the St. Louis River, an important natural resource management goal is to reduce or eliminate fish consumption advisories by remediating contaminated sediments and improving aquatic habitat. However, we currently lack sufficient understanding to estimate the cumulative effects of these habitat improvements on fish contaminant burdens. To address this gap, our study had two main research objectives: first, to determine the relationship between game fish habitat use and polychlorinated biphenyls (PCBs) concentrations in the lower St. Louis River, and second, to calibrate and validate a habitat-based Biota-Sediment Accumulation Factor (BSAF) model that estimates fish PCBs concentration as a function of both sediment and habitat quality. We sampled two resident fishes, Yellow Perch (Perca flavescens) and Black Crappie (Pomoxis nigromaculatus), and two migratory fishes, Northern Pike (Esox lucius) and Walleye (Sander vitreus), of varying size and from locations spread across the St. Louis River estuary, the largest coastal wetland complex in Lake Superior. We found differences in contaminant concentration that were related to habitat usage, though results varied by species. For migratory fishes, a greater dietary contribution from Lake Superior was associated with lower PCBs concentration in tissue. For resident fishes, PCBs concentration was highest in the industrial portion of the river. Model calibra
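A BSAF model of the type named above conventionally predicts tissue concentration as C_tissue = BSAF × f_lipid × (C_sed / f_oc), lipid-normalising the fish and organic-carbon-normalising the sediment. One way to make it "habitat-based", consistent with the study's framing, is to weight sediment exposure by habitat use. The sketch below assumes that construction; all numbers, habitat names, and the weighting scheme are hypothetical illustrations, not the study's calibrated model:

```python
def predicted_tissue_pcb(bsaf, f_lipid, sed_conc_by_habitat,
                         f_oc_by_habitat, habitat_use):
    """Habitat-weighted BSAF prediction of fish tissue PCB concentration.

    bsaf                : biota-sediment accumulation factor (lipid/OC normalised)
    f_lipid             : fish lipid fraction
    sed_conc_by_habitat : ng PCB per g dry sediment, keyed by habitat
    f_oc_by_habitat     : sediment organic-carbon fraction, keyed by habitat
    habitat_use         : fraction of exposure in each habitat (sums to 1)
    """
    exposure = sum(
        habitat_use[h] * sed_conc_by_habitat[h] / f_oc_by_habitat[h]
        for h in habitat_use
    )
    return bsaf * f_lipid * exposure

# Hypothetical two-habitat example: industrial river reach vs. estuary
c = predicted_tissue_pcb(
    bsaf=1.7,
    f_lipid=0.05,
    sed_conc_by_habitat={"industrial": 120.0, "estuary": 15.0},
    f_oc_by_habitat={"industrial": 0.03, "estuary": 0.02},
    habitat_use={"industrial": 0.6, "estuary": 0.4},
)
print(round(c, 1))  # ng PCB per g tissue
```

Shifting habitat_use toward the cleaner habitat lowers the predicted burden, which is the mechanism by which habitat improvement could reduce advisories even without further sediment remediation.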

  14. Improving student success using predictive models and data visualisations

    Directory of Open Access Journals (Sweden)

    Hanan Ayad


    Full Text Available The need to educate a competitive workforce is a global problem. In the US, for example, despite billions of dollars spent to improve the educational system, approximately 35% of students never finish high school. The dropout rate among some demographic groups is as high as 50–60%. At the college level in the US only 30% of students graduate from 2-year colleges in 3 years or less and approximately 50% graduate from 4-year colleges in 5 years or less. A basic challenge in delivering global education, therefore, is improving student success. By student success we mean improving retention, completion and graduation rates. In this paper we describe a Student Success System (S3) that provides a holistic, analytical view of student academic progress. The core of S3 is a flexible predictive modelling engine that uses machine intelligence and statistical techniques to identify at-risk students pre-emptively. S3 also provides a set of advanced data visualisations for reaching diagnostic insights and a case management tool for managing interventions. S3's open modular architecture will also allow integration and plug-ins with both open and proprietary software. Powered by learning analytics, S3 is intended as an end-to-end solution for identifying at-risk students, understanding why they are at risk, designing interventions to mitigate that risk and finally closing the feedback loop by tracking the efficacy of the applied intervention.
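The paper does not specify the form of S3's predictive engine, but a minimal version of "identify at-risk students pre-emptively" can be sketched as a logistic risk score over engagement features. Everything here, including the feature names, weights, and threshold, is a hypothetical illustration of the idea, not S3's actual model:

```python
import math

def risk_score(features, weights, bias):
    """Logistic probability that a student is at risk (hypothetical model)."""
    z = bias + sum(weights[k] * features[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical engagement features, each scaled to [0, 1]
weights = {"missed_logins": 2.0, "low_grades": 2.5, "late_submissions": 1.5}
bias = -3.0

students = {
    "A": {"missed_logins": 0.1, "low_grades": 0.2, "late_submissions": 0.0},
    "B": {"missed_logins": 0.9, "low_grades": 0.8, "late_submissions": 0.7},
}
at_risk = {s: risk_score(f, weights, bias) >= 0.5 for s, f in students.items()}
print(at_risk)  # → {'A': False, 'B': True}
```

In a real deployment such a model would be fitted to historical retention data, and the flagged probabilities would feed the diagnostic visualisations and case management workflow the paper describes.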

  15. Assessment of Th17/Treg cells and Th cytokines in an improved immune thrombocytopenia mouse model. (United States)

    Zhang, Guoyang; Zhang, Ping; Liu, Hongyun; Liu, Xiaoyan; Xie, Shuangfeng; Wang, Xiuju; Wu, Yudan; Chang, Jianxing; Ma, Liping


    The improved passive immune thrombocytopenia (ITP) mouse model has been extensively utilized for the study of ITP. However, how closely this model matches the human inflammation state and immune background is unclear. Our study aimed to explore the profile of Th cytokines and Th17/Treg cells in the model. We induced the ITP mouse model by dose-escalation injection of MWReg30. The serum levels of cytokines (IFN-γ, IL-2, IL-4, IL-10, IL-17A, and TGF-β1) were measured by enzyme-linked immunosorbent assay and the frequency of Th17 and Treg cells was measured by flow cytometry. The mRNA expression of Foxp3 and RORγt was measured by real-time PCR. The serum levels of cytokines IFN-γ, TGF-β1, IL-4, and IL-10 were significantly lower in ITP mice. The secretion of serum proinflammatory cytokines IL-2 and IL-17A and the percentage of Th17 cells showed no statistically significant increase. In ITP mice, the frequency of Treg cells and the mRNA expression of Foxp3 in splenocytes were significantly lower. Our data suggest that the improved passive ITP mouse model does not mimic the autoimmune inflammatory process of human ITP. Compared with human ITP, this model has a similar change in frequency of Treg cells, which may directly or indirectly result from antibody-mediated platelet destruction due to attenuated release of TGF-β.


    Energy Technology Data Exchange (ETDEWEB)

    Tidwell, Vincent Carroll; Sue Tillery; Phillip King


    A new option for Local Time-Stepping (LTS) was developed to use in conjunction with the multiple-refined-area grid capability of the U.S. Geological Survey's (USGS) groundwater modeling program, MODFLOW-LGR (MF-LGR). The LTS option allows each local, refined-area grid to simulate multiple stress periods within each stress period of a coarser, regional grid. This option is an alternative to the current method of MF-LGR whereby the refined grids are required to have the same stress period and time-step structure as the coarse grid. The MF-LGR method for simulating multiple refined grids essentially defines each grid as a complete model, then for each coarse grid time-step, iteratively runs each model until the head and flux changes at the interfacing boundaries of the models are less than some specified tolerances. Use of the LTS option is illustrated in two hypothetical test cases, consisting of a dual well pumping system and a hydraulically connected stream-aquifer system, and in one field application. Each of the hypothetical test cases was simulated with multiple scenarios including an LTS scenario, which combined a monthly stress period for a coarse grid model with a daily stress period for a refined grid model. The other scenarios simulated various combinations of grid spacing and temporal refinement using standard MODFLOW model constructs. The field application simulated an irrigated corridor along the Lower Rio Grande River in New Mexico, with refinement of a small agricultural area in the irrigated corridor. The results from the LTS scenarios for the hypothetical test cases closely replicated the results from the true scenarios in the refined areas of interest. The head errors of the LTS scenarios were much smaller than from the other scenarios in relation to the true solution, and the run times for the LTS models were three and six times faster than the true models for the dual well and stream-aquifer test cases, respectively. The results of the field
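The coupling pattern described above (each coarse stress period hosts many refined-grid stress periods, with the two grids iterated until interface changes fall below a tolerance) can be sketched on a toy scalar "model" of each grid. This is only a structural illustration of the LTS iteration, assuming simple relaxation stand-ins for the actual MODFLOW solves:

```python
def run_coarse_step(interface_head, coarse_head, relax=0.5):
    """Toy stand-in for one coarse-grid (e.g. monthly) stress-period solve:
    the coarse head relaxes toward the head reported at the shared boundary."""
    return coarse_head + relax * (interface_head - coarse_head)

def run_fine_steps(interface_head, fine_head, n_substeps, relax=0.5):
    """Toy stand-in for the refined grid running n_substeps shorter
    (e.g. daily) stress periods against a fixed coarse-grid boundary head."""
    for _ in range(n_substeps):
        fine_head += (relax / n_substeps) * (interface_head - fine_head)
    return fine_head

def lts_coupled_step(coarse_head, fine_head, n_substeps=30, tol=1e-8, max_iter=100):
    """One coarse stress period of an LTS-style coupling: alternate the two
    grids until the heads at the interface stop changing."""
    for iteration in range(1, max_iter + 1):
        new_coarse = run_coarse_step(fine_head, coarse_head)
        new_fine = run_fine_steps(new_coarse, fine_head, n_substeps)
        if abs(new_coarse - coarse_head) < tol and abs(new_fine - fine_head) < tol:
            return new_coarse, new_fine, iteration
        coarse_head, fine_head = new_coarse, new_fine
    raise RuntimeError("interface iteration did not converge")

coarse, fine, iters = lts_coupled_step(coarse_head=10.0, fine_head=2.0)
print(round(coarse, 6), round(fine, 6), iters)
```

The speedup reported in the abstract comes from the same structure: the expensive coarse grid advances in long stress periods while only the small refined grid is stepped at the fine temporal resolution.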

  17. Making Benefit Transfers Work: Deriving and Testing Principles for Value Transfers for Similar and Dissimilar Sites Using a Case Study of the Non-Market Benefits of Water Quality Improvements Across Europe

    DEFF Research Database (Denmark)

    Bateman, Ian; Brouwer, Roy; Ferreri, Silvia


    We implement a controlled, multi-site experiment to develop and test guidance principles for benefits transfers. These argue that when transferring across relatively similar sites, simple mean value transfers are to be preferred, but that when sites are relatively dissimilar, value function transfers will yield lower errors. The paper also provides guidance on the appropriate specification of transferable value functions, arguing that these should be developed from theoretical rather than ad-hoc statistical approaches. These principles are tested via a common-format valuation study of water quality improvements across five countries. While this provides an idealised testbed, results support the above principles and suggest directions for future transfer studies.
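The contrast between the two transfer methods is usually judged by the absolute percentage transfer error against the policy site's own estimated value. A sketch with hypothetical willingness-to-pay (WTP) numbers and a deliberately simple linear value function (none of these figures come from the study):

```python
def transfer_error(transferred, observed):
    """Absolute percentage transfer error against the policy site's own value."""
    return 100.0 * abs(transferred - observed) / observed

def function_transfer(coefficients, site_chars):
    """Value-function transfer: re-evaluate the study site's WTP function
    with the policy site's characteristics (simple linear form)."""
    return coefficients["intercept"] + sum(
        coefficients[k] * v for k, v in site_chars.items()
    )

# Hypothetical numbers: a study-site mean WTP of 34 transfers well to a
# similar site (true WTP 35) but poorly to a dissimilar one (true WTP 48);
# the function transfer adjusts for the dissimilar site's higher income.
study_mean_wtp = 34.0
coef = {"intercept": 10.0, "income_k": 0.8}

err_unit_similar = transfer_error(study_mean_wtp, 35.0)
err_unit_dissimilar = transfer_error(study_mean_wtp, 48.0)
err_func_dissimilar = transfer_error(function_transfer(coef, {"income_k": 50.0}), 48.0)

print(round(err_unit_similar, 1), round(err_unit_dissimilar, 1),
      round(err_func_dissimilar, 1))
```

The pattern the numbers produce (small unit-transfer error at the similar site, large unit-transfer error but small function-transfer error at the dissimilar site) is exactly the guidance principle stated in the abstract.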

  18. Similarities and Differences between Individuals Seeking Treatment for Gambling Problems vs. Alcohol and Substance Use Problems in Relation to the Progressive Model of Self-stigma

    Directory of Open Access Journals (Sweden)

    Belle Gavriel-Fried


    Full Text Available Aims: People with gambling as well as substance use problems who are exposed to public stigmatization may internalize and apply it to themselves through a mechanism known as self-stigma. This study applied the Progressive Model of Self-Stigma, which consists of four sequential, interrelated stages (awareness, agreement, application and harm), to three groups of individuals with gambling, alcohol and other substance use problems. It explored whether the two guiding assumptions of this model (each stage is a precondition for the following stage, in a trickle-down fashion, and correlations between proximal stages should be larger than correlations between more distant stages) would differentiate people with gambling problems from those with alcohol and other substance use problems in terms of their patterns of self-stigma and the stages in the model. Method: 37 individuals with gambling problems, 60 with alcohol problems and 51 with drug problems who applied for treatment in rehabilitation centers in Israel in 2015–2016 were recruited. They completed the Self-stigma of Mental Illness Scale-Short Form, adapted by changing the term “mental health” to gambling, alcohol or drugs, and the DSM-5 diagnostic criteria for gambling, alcohol or drug disorder. Results: The assumptions of the model were broadly confirmed: a repeated-measures ANCOVA revealed that in all three groups there was a difference between the first two stages (aware and agree) and the latter stages (apply and harm). In addition, the gambling group differed from the drug use and alcohol groups at the awareness stage: individuals with gambling problems were less likely to be aware of stigma than people with substance use or alcohol problems. Conclusion: The internalization of stigma among individuals with gambling problems tends to work in a similar way as for those with alcohol or drug problems. The differences between the gambling group and the alcohol and other

  19. An improved reprogrammable mouse model harbouring the reverse tetracycline-controlled transcriptional transactivator 3

    Directory of Open Access Journals (Sweden)

    S. Alaei


    Full Text Available Reprogrammable mouse models engineered to conditionally express Oct-4, Klf-4, Sox-2 and c-Myc (OKSM) have been instrumental in dissecting molecular events underpinning the generation of induced pluripotent stem cells. However, until now these models have been reported in the context of the m2 reverse tetracycline-controlled transactivator, which results in low reprogramming efficiency and consequently limits the number of reprogramming intermediates that can be isolated for downstream profiling. Here, we describe an improved OKSM mouse model in the context of the reverse tetracycline-controlled transactivator 3 with enhanced reprogramming efficiency (>9-fold) and increased numbers of reprogramming intermediate cells, albeit with similar kinetics, which we believe will facilitate mechanistic studies of the reprogramming process.

  20. A workflow learning model to improve geovisual analytics utility. (United States)

    Roth, Robert E; Maceachren, Alan M; McCabe, Craig A


    INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important as, if not more so than, iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. 
To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on

  1. Improvement of snowpack simulations in a regional climate model

    Energy Technology Data Exchange (ETDEWEB)

    Jin, J.; Miller, N.L.


    To improve simulations of regional-scale snow processes and related cold-season hydroclimate, the Community Land Model version 3 (CLM3), developed by the National Center for Atmospheric Research (NCAR), was coupled with the Pennsylvania State University/NCAR fifth-generation Mesoscale Model (MM5). CLM3 physically describes the mass and heat transfer within the snowpack using five snow layers that include liquid water and solid ice. The coupled MM5–CLM3 model performance was evaluated for the snowmelt season in the Columbia River Basin in the Pacific Northwestern United States using gridded temperature and precipitation observations, along with station observations. The results from MM5–CLM3 show a significant improvement in the snow water equivalent (SWE) simulation, which has been underestimated in the original version of MM5 coupled with the Noah land-surface model. One important cause for the underestimated SWE in Noah is its unrealistic land-surface structure configuration where vegetation, snow and the topsoil layer are blended when snow is present. This study demonstrates the importance of the sheltering effects of the forest canopy on snow surface energy budgets, which is included in CLM3. Such effects are further seen in the simulations of surface air temperature and precipitation in regional weather and climate models such as MM5. In addition, the snow-season surface albedo overestimated by MM5–Noah is now more accurately predicted by MM5–CLM3 using a more realistic albedo algorithm that intensifies the solar radiation absorption on the land surface, reducing the strong near-surface cold bias in MM5–Noah. The cold bias is further alleviated due to a slower snowmelt rate in MM5–CLM3 during the early snowmelt stage, which is closer to observations than the comparable components of MM5–Noah. In addition, the over-predicted precipitation in the Pacific Northwest as shown in MM5–Noah is significantly decreased in MM5–CLM3 due to the lower evaporation resulting from the

  2. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models (United States)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.


    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user-base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and likely applied even more in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions, on model outcomes and thus ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  3. Improvement of Electron Beam Lithography modeling for overdose exposures by using Dill transformation (United States)

    Abaidi, Mohamed; Saib, Mohamed; Tortai, Jean-Hervé; Schiavone, Patrick


    In Electron Beam Lithography (EBL), the modeling of the Proximity Effects (PE) is the key to successfully printing patterns of different size and density at the desired dimension. Although current PE models are increasingly efficient for nominal process conditions, they do not cover a broad exposure dose range, which would be interesting for extending the process window, for instance. This paper shows how to improve the accuracy of the dimension estimations of overexposed patterns in EBL by adding a new term to the existing compact model. This advanced compact model was inspired by the chemical mechanisms that activate the acid generator embedded in the resist during the EBL exposure. Most of the existing compact models use the electronic Aerial Image (E_AEI) calculated by the convolution product of the pattern geometry with a Point Spread Function (PSF) and extract pattern contours using a threshold value to model the non-linear resist behavior [1]. Here the pattern contours are simulated using an Acid Aerial Image (A_AEI) calculated from the initial E_AEI complemented by the Dill transformation [1]. A strong impact is expected at high exposure doses, but no changes should occur on patterns exposed close to their nominal dose. The modeling and calibration capabilities of the Inscale® software were used to validate the new model against experimental measurements. Calibrations and simulations obtained with the standard model and the advanced model were compared on a test design. First, it shows that after calibration the PSFs of the two models are similar, meaning that the physics is consistent for both models. The new advanced model maintains the accuracy at nominal dose but increases the overall accuracy by 62% for a process window with dose latitude extended up to 20%.
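The two-step pipeline described above (E_AEI by convolution of the pattern with a PSF, then a Dill-type conversion to an acid image before thresholding) can be sketched in one dimension. The Gaussian PSF, the saturating form A = 1 − exp(−C·E), and every constant below are illustrative assumptions, not the calibrated Inscale® model:

```python
import math

def gaussian_psf(radius, sigma):
    """Discrete 1-D Gaussian point spread function, normalised to unit sum."""
    kernel = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    total = sum(kernel)
    return [k / total for k in kernel]

def convolve(pattern, psf):
    """E_AEI: dose pattern convolved with the PSF (zero-padded edges)."""
    r = len(psf) // 2
    out = []
    for i in range(len(pattern)):
        acc = 0.0
        for j, w in enumerate(psf):
            k = i + j - r
            if 0 <= k < len(pattern):
                acc += w * pattern[k]
        out.append(acc)
    return out

def acid_image(e_aei, dill_c=1.0):
    """A_AEI: Dill-type exponential conversion of exposure to acid concentration."""
    return [1.0 - math.exp(-dill_c * e) for e in e_aei]

def contour_width(image, threshold, dx=1.0):
    """Printed linewidth: extent of the region above the resist threshold."""
    above = [i for i, v in enumerate(image) if v >= threshold]
    return (above[-1] - above[0] + 1) * dx if above else 0.0

# One line feature at nominal dose vs. 2x overdose (arbitrary units)
pattern = [0.0] * 20 + [1.0] * 10 + [0.0] * 20
psf = gaussian_psf(radius=8, sigma=3.0)
nominal = acid_image(convolve(pattern, psf))
overdose = acid_image(convolve([2.0 * p for p in pattern], psf))
print(contour_width(nominal, 0.4), contour_width(overdose, 0.4))
```

The overdosed line prints wider, and because the acid conversion saturates, the contour shift at high dose differs from what a purely linear threshold on E_AEI would predict, which is the regime the advanced model targets.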

  4. Reducing Health Care Costs and Improving Clinical Outcomes Using an Improved Asheville Project Model

    Directory of Open Access Journals (Sweden)

    Barry A. Bunting


    Full Text Available This study was designed to add to the body of knowledge gained through the original Asheville Project studies, and to address some of the limitations of the earlier studies. Scalability: since the original Asheville Project publications there have been some successful replications; however, there is a need to broaden the geographic scope and increase the size of the study population. Study design: previous studies were limited to a pre-post, self-as-control design; we added a control group. Model improvement: we were able to incorporate an electronic record of care. This allows incorporation of medical and prescription claims, ease of documentation, improved data capture, reporting, standardization of care, identification of deficiencies in care, and communication with other health care providers. This enhancement may be worthy of more comment than we devoted to it; however, we didn’t want to detract from the main goal of the study, and we wanted to avoid any hint of commercialization on the part of the organization that provided the electronic record. Relevance to the profession: we sincerely hope the relevance goes beyond the profession of pharmacy and that it reinforces the message that the profession of pharmacy offers real solutions to rising health care costs in the U.S. Type: Original Research

  5. Improved SVR Model for Multi-Layer Buildup Factor Calculation

    International Nuclear Information System (INIS)

    Trontl, K.; Pevec, D.; Smuc, T.


    The accuracy of the point kernel method applied in gamma ray dose rate calculations in shielding design and radiation safety analysis is limited by the accuracy of the buildup factors used in the calculations. Although buildup factors for single-layer shields are well defined and understood, buildup factors for stratified shields represent a complex physical problem that is hard to express in mathematical terms. The traditional approach for expressing buildup factors of multi-layer shields is through semi-empirical formulas obtained by fitting the results of transport theory or Monte Carlo calculations. Such an approach requires an ad-hoc definition of the fitting function and often results in numerous and usually inadequately explained and defined correction factors added to the final empirical formula. Moreover, the formulas finally obtained are generally limited to a small number of predefined combinations of materials within a relatively small range of gamma ray energies and shield thicknesses. Recently, a new approach has been suggested by the authors, involving a machine learning technique based on Support Vector Machines, i.e., Support Vector Regression (SVR). Preliminary investigations performed for double-layer shields revealed great potential of the method, but also pointed out some drawbacks of the developed model, mostly related to the selection of one of the parameters describing the problem (material atomic number), and the method in which the model was designed to evolve during the learning process. It is the aim of this paper to introduce a new parameter (single material buildup factor) that is to replace the existing material atomic number as an input parameter. A comparison of the two models generated by different input parameters has been performed. The second goal is to improve the evolution process of learning, i.e., the experimental computational procedure that provides a framework for automated construction of complex regression models of predefined

  6. Wrong, but useful: regional species distribution models may not be improved by range-wide data under biased sampling. (United States)

    El-Gabbas, Ahmed; Dormann, Carsten F


    Species distribution modeling (SDM) is an essential method in ecology and conservation. SDMs are often calibrated within one country's borders, typically along a limited environmental gradient with biased and incomplete data, making the quality of these models questionable. In this study, we evaluated how adequate national presence-only data are for calibrating regional SDMs. We trained SDMs for Egyptian bat species at two different scales: only within Egypt and at a species-specific global extent. We used two modeling algorithms: Maxent and elastic net, both under the point-process modeling framework. For each modeling algorithm, we measured the congruence of the predictions of global and regional models for Egypt, assuming that the lower the congruence, the lower the appropriateness of the Egyptian dataset to describe the species' niche. We inspected the effect of incorporating predictions from global models as an additional predictor ("prior") to regional models, and quantified the improvement in terms of AUC and the congruence between regional models run with and without priors. Moreover, we analyzed predictive performance improvements after correction for sampling bias at both scales. On average, predictions from global and regional models in Egypt only weakly concur. Collectively, the use of priors did not lead to much improvement: similar AUC and high congruence between regional models calibrated with and without priors. Correction for sampling bias led to higher model performance, whichever prior was used, making the contribution of priors less pronounced. Under biased and incomplete sampling, the use of global bats data did not improve regional model performance. Without enough bias-free regional data, we cannot objectively identify the actual improvement of regional models after incorporating information from the global niche. However, we still believe in great potential for global model predictions to guide future surveys and improve regional sampling in data

  7. Incremental Similarity and Turbulence

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.; Hedevang, Emil; Schmiegel, Jürgen

    This paper discusses the mathematical representation of an empirically observed phenomenon, referred to as Incremental Similarity. We discuss this feature from the viewpoint of stochastic processes and present a variety of non-trivial examples, including those that are of relevance for turbulence...


    International Nuclear Information System (INIS)

    Vierow, Karen


    This project is investigating countercurrent flow and 'flooding' phenomena in light water reactor systems to improve reactor safety of current and future reactors. To better understand the occurrence of flooding in the surge line geometry of a PWR, two experimental programs were performed. In the first, a test facility with an acrylic test section provided visual data on flooding for air-water systems in large diameter tubes. This test section also allowed for development of techniques to form an annular liquid film along the inner surface of the 'surge line' and other techniques which would be difficult to verify in an opaque test section. Based on experiences in the air-water testing and the improved understanding of flooding phenomena, two series of tests were conducted in a large-diameter, stainless steel test section. Air-water test results and steam-water test results were directly compared to note the effect of condensation. Results indicate that, as for smaller diameter tubes, the flooding phenomenon is predominantly driven by the hydrodynamics. Tests with the test sections inclined were attempted but the annular film was easily disrupted. A theoretical model for steam venting from inclined tubes is proposed herein and validated against air-water data. Empirical correlations were proposed for air-water and steam-water data. Methods for developing analytical models of the air-water and steam-water systems are discussed, as is the applicability of the current data to the surge line conditions. This report documents the project results from July 1, 2005 through June 30, 2008.


    Energy Technology Data Exchange (ETDEWEB)

    Vierow, Karen


    This project is investigating countercurrent flow and “flooding” phenomena in light water reactor systems to improve reactor safety of current and future reactors. To better understand the occurrence of flooding in the surge line geometry of a PWR, two experimental programs were performed. In the first, a test facility with an acrylic test section provided visual data on flooding for air-water systems in large diameter tubes. This test section also allowed for development of techniques to form an annular liquid film along the inner surface of the “surge line” and other techniques which would be difficult to verify in an opaque test section. Based on experiences in the air-water testing and the improved understanding of flooding phenomena, two series of tests were conducted in a large-diameter, stainless steel test section. Air-water test results and steam-water test results were directly compared to note the effect of condensation. Results indicate that, as for smaller diameter tubes, the flooding phenomenon is predominantly driven by the hydrodynamics. Tests with the test sections inclined were attempted but the annular film was easily disrupted. A theoretical model for steam venting from inclined tubes is proposed herein and validated against air-water data. Empirical correlations were proposed for air-water and steam-water data. Methods for developing analytical models of the air-water and steam-water systems are discussed, as is the applicability of the current data to the surge line conditions. This report documents the project results from July 1, 2005 through June 30, 2008.

  10. Modeling of Glass Making Processes for Improved Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Thomas P. Seward III


    The overall goal of this project was to develop a high-temperature melt properties database with sufficient reliability to allow mathematical modeling of glass melting and forming processes for improved product quality, improved efficiency and lessened environmental impact. It was initiated by the United States glass industry through the NSF Industry/University Center for Glass Research (CGR) at Alfred University [1]. Because of their important commercial value, six different types/families of glass were studied: container, float, fiberglass (E- and wool-types), low-expansion borosilicate, and color TV panel glasses. CGR member companies supplied production-quality glass from all six families upon which we measured, as a function of temperature in the molten state, density, surface tension, viscosity, electrical resistivity, infrared transmittance (to determine high temperature radiative conductivity), non-Newtonian flow behavior, and oxygen partial pressure. With CGR cost sharing, we also studied gas solubility and diffusivity in each of these glasses. Because knowledge of the compositional dependencies of melt viscosity and electrical resistivity are extremely important for glass melting furnace design and operation, these properties were studied more fully. Composition variations were statistically designed for all six types/families of glass. About 140 different glasses were then melted on a laboratory scale and their viscosity and electrical resistivity measured as a function of temperature. The measurements were completed in February 2003 and are reported on here. The next steps will be (1) to statistically analyze the compositional dependencies of viscosity and electrical resistivity and develop composition-property response surfaces, (2) submit all the data to CGR member companies to evaluate the usefulness in their models, and (3) publish the results in technical journals and most likely in book form.

  11. Modafinil improves monocrotaline-induced pulmonary hypertension rat model. (United States)

    Lee, Hyeryon; Kim, Kwan Chang; Cho, Min-Sun; Suh, Suk-Hyo; Hong, Young Mi


    Pulmonary arterial hypertension (PAH) progressively leads to increases in pulmonary vasoconstriction. Modafinil plays a role in vasorelaxation and blocks the KCa3.1 channel, with the result of elevating intracellular cyclic adenosine monophosphate (cAMP) levels. The purpose of this study is to evaluate the effects of modafinil in the monocrotaline (MCT)-induced PAH rat. The rats were separated into three groups: the control group, the monocrotaline (M) group (MCT 60 mg/kg), and the modafinil (MD) group (MCT 60 mg/kg + modafinil). Reduced right ventricular pressure (RVP) was observed in the MD group. Right ventricular hypertrophy was improved in the MD group. Reduced number of intra-acinar pulmonary arteries and medial wall thickness were noted in the MD group. After the administration of modafinil, protein expressions of endothelin-1 (ET-1), endothelin receptor A (ERA) and the KCa3.1 channel were significantly reduced. Modafinil suppressed pulmonary artery smooth muscle cell (PASMC) proliferation via cAMP and the KCa3.1 channel. Additionally, we confirmed that protein expressions such as Bcl-2-associated X, vascular endothelial growth factor, tumor necrosis factor-α, and interleukin-6 were reduced in the MD group. Modafinil improved PAH by vasorelaxation and a decrease in medial thickening via ET-1, ERA, and KCa3.1 down-regulation. This is a meaningful study of modafinil in a PAH model.

  12. SIFT Based Vein Recognition Models: Analysis and Improvement

    Directory of Open Access Journals (Sweden)

    Guoqing Wang


    Full Text Available Scale-Invariant Feature Transform (SIFT) is being investigated more and more to realize a less-constrained hand vein recognition system. Contrast enhancement (CE), compensating for deficient dynamic range, is a must for a SIFT-based framework to improve performance. However, our experiments reveal evidence of a negative influence of CE on SIFT matching. We show that the number of keypoints extracted by gradient-based detectors increases greatly under different CE methods, while the matching result of the extracted invariant descriptors is negatively influenced in terms of Precision-Recall (PR) and Equal Error Rate (EER). Rigorous experiments with state-of-the-art and other CE methods adopted in published SIFT-based hand vein recognition systems demonstrate this influence. Moreover, an improved SIFT model that imports the RootSIFT kernel and a Mirror Match Strategy into a unified framework is proposed to exploit the positive change in keypoints and compensate for the negative influence brought by CE.
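    The two ingredients named in the abstract are both simple transformations. A minimal sketch of what they could look like (toy 4-bin descriptors stand in for 128-bin SIFT vectors; the bidirectional-consistency reading of the Mirror Match Strategy is an assumption, not taken from the paper):

    ```python
    import math

    def root_sift(descriptor, eps=1e-7):
        """RootSIFT kernel: L1-normalise a SIFT descriptor, then take the
        element-wise square root. Euclidean distance between RootSIFT
        vectors then corresponds to the Hellinger kernel on the originals,
        which is less dominated by large gradient bins (a concern when
        contrast enhancement inflates gradients)."""
        s = sum(descriptor) + eps
        return [math.sqrt(v / s) for v in descriptor]

    def nearest(desc, pool):
        """Index of the nearest descriptor in `pool` (squared Euclidean)."""
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        return min(range(len(pool)), key=lambda k: dist(desc, pool[k]))

    def mirror_match(d_query, d_gallery, match):
        """One reading of a mirror/cross-check strategy: keep only
        correspondences that survive matching in both directions
        (query->gallery and gallery->query), discarding ambiguous ones."""
        fwd = {i: match(q, d_gallery) for i, q in enumerate(d_query)}
        bwd = {j: match(g, d_query) for j, g in enumerate(d_gallery)}
        return [(i, j) for i, j in fwd.items() if bwd.get(j) == i]

    query   = [root_sift([10, 0, 0, 1]), root_sift([0, 9, 1, 0])]
    gallery = [root_sift([9, 1, 0, 1]),  root_sift([1, 8, 1, 0])]
    matches = mirror_match(query, gallery, nearest)
    print(matches)
    ```

    Note that RootSIFT vectors are automatically L2-normalised (the squared entries sum to the L1-normalised bins), so no separate normalisation step is needed before matching.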

  13. Automated geographic atrophy segmentation for SD-OCT images using region-based C-V model via local similarity factor. (United States)

    Niu, Sijie; de Sisternes, Luis; Chen, Qiang; Leng, Theodore; Rubin, Daniel L


    Age-related macular degeneration (AMD) is the leading cause of blindness among elderly individuals. Geographic atrophy (GA) is a phenotypic manifestation of the advanced stages of non-exudative AMD. Determination of GA extent in SD-OCT scans allows the quantification of GA-related features, such as radius or area, which could be of important value to monitor AMD progression and possibly identify regions of future GA involvement. The purpose of this work is to develop an automated algorithm to segment GA regions in SD-OCT images. An en face GA fundus image is generated by averaging the axial intensity within an automatically detected sub-volume of the three dimensional SD-OCT data, where an initial coarse GA region is estimated by an iterative threshold segmentation method and an intensity profile set, and subsequently refined by a region-based Chan-Vese model with a local similarity factor. Two image data sets, consisting of 55 SD-OCT scans from twelve eyes in eight patients with GA and 56 SD-OCT scans from 56 eyes in 56 patients with GA, respectively, were utilized to quantitatively evaluate the automated segmentation algorithm. We compared results obtained by the proposed algorithm, manual segmentation by graders, a previously proposed method, and experimental commercial software. When compared to a manually determined gold standard, our algorithm presented a mean overlap ratio (OR) of 81.86% and 70% for the first and second data sets, respectively, while the previously proposed method OR was 72.60% and 65.88% for the first and second data sets, respectively, and the experimental commercial software OR was 62.40% for the second data set.
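    The overlap ratio used for evaluation can be sketched in a few lines. This assumes OR is the Jaccard index between the automatic and manual pixel sets (the paper may define it differently, e.g. as a Dice coefficient); the rectangular regions below are toy data:

    ```python
    def overlap_ratio(seg_a, seg_b):
        """Overlap ratio (Jaccard index) between two segmented GA regions,
        each given as a set of (row, col) pixel coordinates."""
        inter = len(seg_a & seg_b)
        union = len(seg_a | seg_b)
        return inter / union if union else 1.0

    # Toy example: algorithm segmentation vs. a manual gold standard.
    auto   = {(r, c) for r in range(0, 10) for c in range(0, 10)}
    manual = {(r, c) for r in range(2, 10) for c in range(0, 10)}
    print(round(100 * overlap_ratio(auto, manual), 2))  # → 80.0 (percent)
    ```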

  14. Morphology transition of raft-model membrane induced by osmotic pressure: Formation of double-layered vesicle similar to an endo- and/or exocytosis

    International Nuclear Information System (INIS)

    Onai, Teruaki; Hirai, Mitsuhiro


    The effect of osmotic pressure on the structure of large uni-lamellar vesicles (LUV) of lipid mixtures of monosialoganglioside (GM1)-cholesterol-dioleoyl-phosphatidylcholine (DOPC) was studied using the wide-angle X-ray scattering (WAXS) method. The molar ratios of the mixtures were 0.1/0.1/1, 0/0.1/1, and 0/0/1. The ternary lipid mixture is a model of lipid rafts. The osmotic pressure was varied from 0 to 4.16x10^5 N/m^2 by adding polyvinylpyrrolidone (PVP) in the range from 0 to 25% w/v. In the case of the mixtures without GM1, the rise of osmotic pressure simply enhances the multi-lamellar stacking while decreasing the inter-lamellar spacing. On the other hand, the mixture containing GM1 shows a structural transition from a uni-lamellar vesicle to a double-layered vesicle (a liposome including a smaller one inside) with the rise of osmotic pressure. In this morphology transition the total surface area of the double-layered vesicle is nearly the same as that of the LUV in the initial state. The polar head region of GM1 is bulky and highly hydrophilic due to the oligosaccharide chain containing a sialic acid residue. The present results therefore suggest that the existence of GM1 in the outer leaflet of the LUV is essentially important for such double-layered vesicle formation. Alternatively, a phenomenon similar to endo- and/or exocytosis in cells can be caused simply by a variation of osmotic pressure.

  15. Implementing the Mother-Baby Model of Nursing Care Using Models and Quality Improvement Tools. (United States)

    Brockman, Vicki

    As family-centered care has become the expected standard, many facilities follow the mother-baby model, in which care is provided to both a woman and her newborn in the same room by the same nurse. My facility employed a traditional model of nursing care, which was not evidence-based or financially sustainable. After implementing the mother-baby model, we experienced an increase in exclusive breastfeeding rates at hospital discharge, increased patient satisfaction, improved staff productivity and decreased salary costs, all while the number of births increased. Our change was successful because it was guided by the use of quality improvement tools, change theory and evidence-based practice models. © 2015 AWHONN.

  16. Similarities and Differences in Genetics. (United States)

    Zhang, Yang; Sun, Yan; Liang, Jie; Lu, Lin; Shi, Jie


    Similar symptomatology manifestations and high co-morbidity in substance and non-substance addictions suggest that there may be a common pathogenesis between them. Associated with impulse control and emotional processing, the monoamine neurotransmitter system genes are suggested to be related to both substance and non-substance addictions; these include the dopamine (DA) system, the 5-hydroxytryptamine/serotonin (5-HT) system, the endogenous opioid system and so on. Here we reviewed the similarities and differences in genetics between classic substance addiction and common types of non-substance addiction, e.g. pathological gambling, Internet addiction, binge-eating disorder etc. It is necessary to systematically compare the genetic mechanisms of non-substance addiction and substance addiction, which could reveal the essential similarities and differences between them, enhance our understanding of addiction theory and improve clinical practice with research results.

  17. Improved spring model-based collaborative indoor visible light positioning (United States)

    Luo, Zhijie; Zhang, WeiNan; Zhou, GuoFu


    Gaining accuracy with indoor positioning of individuals is important as many location-based services rely on the user's current position to provide them with useful services. Many researchers have studied indoor positioning techniques based on WiFi and Bluetooth. However, these have disadvantages such as low accuracy or high cost. In this paper, we propose an indoor positioning system in which visible light radiated from light-emitting diodes is used to locate the position of receivers. Compared with existing methods using light-emitting diode light, we present a high-precision, simply implemented collaborative indoor visible light positioning system based on an improved spring model. We first estimate coordinate position information using the visible light positioning system, and then use the spring model to correct positioning errors. The system can be employed easily because it does not require additional sensors, and the occlusion problem of visible light would be alleviated. We also describe simulation experiments, which confirm the feasibility of our proposed method.
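    The spring-model correction step can be sketched as a simple relaxation: each pair of collaborating receivers is joined by a virtual spring whose rest length is their measured inter-receiver distance, and iteratively relaxing the springs nudges the noisy VLP estimates toward a mutually consistent geometry. This is one plausible reading of the idea; all parameter names and values below are illustrative, not from the paper:

    ```python
    import math

    def spring_correct(positions, rest, k=0.1, iters=200):
        """Relax virtual springs between 2-D position estimates.
        `rest` maps node-index pairs (i, j) to rest lengths."""
        pos = [list(p) for p in positions]
        for _ in range(iters):
            forces = [[0.0, 0.0] for _ in pos]
            for (i, j), length in rest.items():
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * (d - length)          # Hooke's law: pull if stretched
                forces[i][0] += f * dx / d; forces[i][1] += f * dy / d
                forces[j][0] -= f * dx / d; forces[j][1] -= f * dy / d
            for p, f in zip(pos, forces):
                p[0] += f[0]; p[1] += f[1]
        return pos

    # Three receivers: noisy VLP estimates; true pairwise distances are 1.0.
    est  = [(0.0, 0.0), (1.3, 0.0), (0.5, 1.1)]
    rest = {(0, 1): 1.0, (0, 2): 1.0, (1, 2): 1.0}
    corrected = spring_correct(est, rest)
    for (i, j), length in rest.items():
        d = math.hypot(corrected[j][0] - corrected[i][0],
                       corrected[j][1] - corrected[i][1])
        print(round(d, 3))  # pairwise distances approach the rest lengths
    ```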

  18. Improvement on The Ellis and Roberts Viability Model

    Directory of Open Access Journals (Sweden)

    Guoyan Zhou


    Full Text Available With data sets of germination percent and storage time of seed lots of wheat and sorghum stored at three different storage temperatures (t, °C) with three different water contents (m, %) of seeds, together with data sets of buckwheat and lettuce reported in the literature, the possibility that seed survival curves could be transformed into lines by survival proportion, and the relationship of the logarithm of the average viability period (logp50) and the standard deviation of seed death distribution in time (δ) with t, m and the interaction between t and m, were analysed. Results indicated that survival proportion transformed seed survival curves to lines much more easily than the probability adopted by Ellis and Roberts, and the most important factor affecting logp50 and δ of a seed lot was the interaction between t and m. Thus, the Ellis and Roberts viability model is suggested to be improved as Ki = Vi - p/10^(K-CWT(t×m)) to predict the longevity of a seed lot with initial germination percent unknown, and a new model, Gi/G0 = A - P/10^(K-CWT(t×m)), was constructed to predict the longevity of a seed lot with initial germination percent already known.
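    As read from the abstract, the improved model predicts viability from storage time p discounted by a factor 10^(K - CWT·(t×m)), so that longevity shrinks as the temperature-moisture interaction grows. A numerical sketch (the constants below are invented for illustration only; the exponent placement is reconstructed from the flattened formula in the abstract):

    ```python
    def survival_improved(Ki, p, K, Cwt, t, m):
        """Sketch of the improved viability equation
        v = Ki - p / 10**(K - Cwt*(t*m)), where p is storage time,
        t temperature (deg C), m seed water content (%), and K, Cwt
        fitted species constants."""
        sigma = 10 ** (K - Cwt * (t * m))   # time scale of seed deaths
        return Ki - p / sigma

    # Invented constants: viability falls as storage time grows, and
    # faster at a higher temperature x moisture product.
    for p in (0, 100, 200):                 # days in store
        print(round(survival_improved(Ki=2.0, p=p, K=3.0, Cwt=0.002,
                                      t=20, m=12), 3))
    ```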

  19. Improved data for integrated modeling of global environmental change (United States)

    Lotze-Campen, Hermann


    The assessment of global environmental changes, their impact on human societies, and possible management options requires large-scale, integrated modeling efforts. These models have to link biophysical with socio-economic processes, and they have to take spatial heterogeneity of environmental conditions into account. Land use change and freshwater use are two key research areas where spatial aggregation and the use of regional average numbers may lead to biased results. Useful insights can only be obtained if processes like economic globalization can be consistently linked to local environmental conditions and resource constraints (Lambin and Meyfroidt 2011). Spatially explicit modeling of environmental changes at the global scale has a long tradition in the natural sciences (Woodward et al 1995, Alcamo et al 1996, Leemans et al 1996). Socio-economic models with comparable spatial detail, e.g. on grid-based land use change, are much less common (Heistermann et al 2006), but are increasingly being developed (Popp et al 2011, Schneider et al 2011). Spatially explicit models require spatially explicit input data, which often constrains their development and application at the global scale. The amount and quality of available data on environmental conditions is growing fast—primarily due to improved earth observation methods. Moreover, systematic efforts for collecting and linking these data across sectors are on the way. This has, among others, also helped to provide consistent databases on different land cover and land use types (Erb et al 2007). However, spatially explicit data on specific anthropogenic driving forces of global environmental change are still scarce—also because these cannot be collected with satellites or other devices. The basic data on socio-economic driving forces, i.e. population density and wealth (measured as gross domestic product per capita), have been prepared for spatially explicit analyses (CIESIN, IFPRI

  20. Pharmacophore-based similarity scoring for DOCK. (United States)

    Jiang, Lingling; Rizzo, Robert C


    Pharmacophore modeling incorporates geometric and chemical features of known inhibitors and/or targeted binding sites to rationally identify and design new drug leads. In this study, we have encoded a three-dimensional pharmacophore matching similarity (FMS) scoring function into the structure-based design program DOCK. Validation and characterization of the method are presented through pose reproduction, crossdocking, and enrichment studies. When used alone, FMS scoring dramatically improves pose reproduction success to 93.5% (∼20% increase) and reduces sampling failures to 3.7% (∼6% drop) compared to the standard energy score (SGE) across 1043 protein-ligand complexes. The combined FMS+SGE function further improves success to 98.3%. Crossdocking experiments using FMS and FMS+SGE scoring, for six diverse protein families, similarly showed improvements in success, provided proper pharmacophore references are employed. For enrichment, incorporating pharmacophores during sampling and scoring in most cases also yields improved outcomes when docking and rank-ordering libraries of known actives and decoys for 15 systems. Retrospective analyses of virtual screenings to three clinical drug targets (EGFR, IGF-1R, and HIVgp41) using X-ray structures of known inhibitors as pharmacophore references are also reported, including a customized FMS scoring protocol to bias on selected regions in the reference. Overall, the results and fundamental insights gained from this study should benefit the docking community in general, particularly researchers using the new FMS method to guide computational drug discovery with DOCK.
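    The core idea of pharmacophore matching, score a docked pose by how many typed reference features (donors, acceptors, aromatic centers) it reproduces in 3-D, can be sketched briefly. This toy greedy matcher only illustrates the concept; the real FMS function in DOCK is considerably more elaborate, and the feature sets and tolerance below are invented:

    ```python
    import math

    def fms_like_score(reference, pose, tol=1.0):
        """Fraction of reference pharmacophore features reproduced by a
        pose: a pose feature matches a reference feature of the same
        type lying within `tol` angstroms, each used at most once."""
        matched, used = 0, set()
        for ftype, x, y, z in reference:
            for k, (gtype, u, v, w) in enumerate(pose):
                if k in used or gtype != ftype:
                    continue
                if math.dist((x, y, z), (u, v, w)) <= tol:
                    matched += 1
                    used.add(k)
                    break
        return matched / len(reference)

    ref  = [("donor", 0, 0, 0), ("acceptor", 3, 0, 0), ("aromatic", 0, 3, 0)]
    pose = [("donor", 0.4, 0, 0), ("acceptor", 3.2, 0.3, 0), ("aromatic", 5, 5, 0)]
    print(fms_like_score(ref, pose))  # two of three features matched
    ```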

  1. Improving permafrost distribution modelling using feature selection algorithms (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail


    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the set of factors required and improves the knowledge of the adopted features and their relation to the studied phenomenon. Moreover, taking irrelevant or redundant variables out of the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used informed us about variables that appeared statistically less important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to the permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
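    Of the three techniques compared, Information Gain is the easiest to write down: it is the entropy of the class labels minus the expected entropy after splitting on a (discretised) predictor. A minimal sketch on invented toy data, where a binned altitude variable separates presence from absence and a binned aspect variable does not:

    ```python
    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy of a presence/absence label list."""
        n = len(labels)
        return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(feature, labels):
        """H(labels) minus the expected entropy after splitting on the
        feature's values: higher gain = more informative predictor."""
        n = len(labels)
        h = entropy(labels)
        for value in set(feature):
            subset = [y for x, y in zip(feature, labels) if x == value]
            h -= len(subset) / n * entropy(subset)
        return h

    permafrost = [1, 1, 1, 0, 0, 0]
    altitude   = ["high", "high", "high", "low", "low", "low"]
    aspect     = ["n", "s", "n", "s", "n", "s"]
    print(information_gain(altitude, permafrost))  # perfect split: gain 1 bit
    print(information_gain(aspect, permafrost))    # uninformative: near 0
    ```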

  2. A conceptual model to improve performance in virtual teams

    Directory of Open Access Journals (Sweden)

    Shopee Dube


    Full Text Available Background: The vast improvement in communication technologies and sophisticated project management tools, methods and techniques has allowed geographically and culturally diverse groups to operate and function in a virtual environment. To succeed in this virtual environment where time and space are becoming increasingly irrelevant, organisations must define new ways of implementing initiatives. This virtual environment phenomenon has brought about the formation of virtual project teams that allow organisations to harness the skills and know-how of the best resources, irrespective of their location. Objectives: The aim of this article was to investigate performance criteria and develop a conceptual model which can be applied to enhance the success of virtual project teams. There are no clear guidelines for the performance criteria in managing virtual project teams. Method: A qualitative research methodology was used in this article. The purpose of content analysis was to explore the literature to understand the concept of performance in virtual project teams and to summarise the findings of the literature reviewed. Results: The research identified a set of performance criteria for virtual project teams as follows: leadership, trust, communication, team cooperation, reliability, motivation, comfort and social interaction. These were used to conceptualise the model. Conclusion: The conceptual model can be used in a holistic way to determine the overall performance of the virtual project team, but each factor can be analysed individually to determine the impact on the overall performance. The knowledge of performance criteria for virtual project teams could aid project managers in enhancing the success of these teams and taking a different approach to better manage and coordinate them.

  3. Similarity and denoising. (United States)

    Vitányi, Paul M B


    We can discover the effective similarity among pairs of finite objects and denoise a finite object using the Kolmogorov complexity of these objects. The drawback is that the Kolmogorov complexity is not computable. If we approximate it, using a good real-world compressor, then it turns out that on natural data the processes give adequate results in practice. The methodology is parameter-free, alignment-free and works on individual data. We illustrate both methods with examples.
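    The computable approximation described here is the normalized compression distance: replace the (uncomputable) Kolmogorov complexity with the output length of a real-world compressor. A minimal sketch using zlib (the example strings are invented):

    ```python
    import zlib

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance: a computable stand-in for the
        normalized information distance, approximating Kolmogorov
        complexity C(.) with compressed length."""
        cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
        cxy = len(zlib.compress(x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    a  = b"the quick brown fox jumps over the lazy dog " * 20
    b_ = b"the quick brown fox leaps over the lazy dog " * 20
    c  = bytes(range(256)) * 4
    print(ncd(a, b_))  # similar strings: smaller distance
    print(ncd(a, c))   # unrelated data: larger distance
    ```

    The method is parameter-free in the sense that any reasonable compressor can be substituted for zlib; better compressors approximate the ideal distance more closely.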

  4. Visualizing multiple word similarity measures. (United States)

    Kievit-Kylar, Brent; Jones, Michael N


    Although many recent advances have taken place in corpus-based tools, the techniques used to guide exploration and evaluation of these systems have advanced little. Typically, the plausibility of a semantic space is explored by sampling the nearest neighbors to a target word and evaluating the neighborhood on the basis of the modeler's intuition. Tools for visualization of these large-scale similarity spaces are nearly nonexistent. We present a new open-source tool to plot and visualize semantic spaces, thereby allowing researchers to rapidly explore patterns in visual data that describe the statistical relations between words. Words are visualized as nodes, and word similarities are shown as directed edges of varying strengths. The "Word-2-Word" visualization environment allows for easy manipulation of graph data to test word similarity measures on their own or in comparisons between multiple similarity metrics. The system contains a large library of statistical relationship models, along with an interface to teach them from various language sources. The modularity of the visualization environment allows for quick insertion of new similarity measures so as to compare new corpus-based metrics against the current state of the art. The software is available at
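    The graph the tool draws, words as nodes, similarity-weighted edges, can be produced from any similarity metric. A stdlib-only sketch (not Word-2-Word's actual code; the co-occurrence counts are invented, and cosine similarity over count vectors stands in for the tool's library of relationship models):

    ```python
    import math
    from collections import Counter

    def cosine(u: Counter, v: Counter) -> float:
        """Cosine similarity between two word co-occurrence count vectors."""
        dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
        nu = math.sqrt(sum(c * c for c in u.values()))
        nv = math.sqrt(sum(c * c for c in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    def similarity_edges(vectors, threshold=0.5):
        """Weighted edge list for a word-similarity graph: edges join
        word pairs whose similarity exceeds a threshold."""
        words = sorted(vectors)
        return [(a, b, round(cosine(vectors[a], vectors[b]), 3))
                for i, a in enumerate(words) for b in words[i + 1:]
                if cosine(vectors[a], vectors[b]) > threshold]

    # Toy co-occurrence vectors (context word -> count).
    vectors = {
        "cat": Counter({"pet": 4, "fur": 3, "meow": 5}),
        "dog": Counter({"pet": 5, "fur": 4, "bark": 5}),
        "car": Counter({"road": 6, "engine": 4}),
    }
    print(similarity_edges(vectors))  # only the cat-dog edge survives
    ```

    Swapping in a different `cosine`-style function is the analogue of the tool's ability to compare multiple similarity metrics over the same node set.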

  5. Application and improvement of Raupach's shear stress partitioning model (United States)

    Walter, B. A.; Lehning, M.; Gromke, C.


    Aeolian processes such as the entrainment, transport and redeposition of sand, soil or snow are able to significantly reshape the earth's surface. In times of increasing desertification and land degradation, often driven by wind erosion, investigations of aeolian processes become more and more important in environmental sciences. The reliable prediction of the sheltering effect of vegetation canopies against sediment erosion, for instance, is a clear practical application of such investigations to identify suitable and sustainable counteractive measures against wind erosion. This study presents an application and improvement of a theoretical model presented by Raupach (Boundary-Layer Meteorology, 1992, Vol. 60, 375-395 and Journal of Geophysical Research, 1993, Vol. 98, 3023-3029) which allows for quantifying the sheltering effect of vegetation against sediment erosion. The model predicts the shear stress ratios τS'/τ and τS''/τ. Here, τS is the part of the total shear stress τ that acts on the ground beneath the plants. The spatial peak τS'' of the surface shear stress is responsible for the onset of particle entrainment whereas the spatial mean τS' can be used to quantify particle mass fluxes. The precise and accurate prediction of these quantities is essential when modeling wind erosion. Measurements of the surface shear stress distributions τS(x,y) on the ground beneath live vegetation canopies (plant species: Lolium perenne) were performed in a controlled wind tunnel environment to determine the model parameters and to evaluate the model performance. Rigid, non-porous wooden blocks instead of the plants were additionally tested for the purpose of comparison, since previous wind tunnel studies used exclusively artificial plant imitations for their experiments on shear stress partitioning. The model constant c, which is needed to determine the total stress τ for a canopy of interest and which remained rather unspecified to date, was found to be c ≈ 0
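    For reference, the two partitioning ratios in Raupach's model take the following commonly cited form (recalled from the cited 1992/1993 papers rather than stated in this abstract; λ is the roughness density of the elements, σ the basal-to-frontal area ratio, β the ratio of element to surface drag coefficients, and m an empirical constant relating the mean to the peak surface stress):

    ```latex
    \frac{\tau_S'}{\tau} = \frac{1}{(1-\sigma\lambda)(1+\beta\lambda)},
    \qquad
    \frac{\tau_S''}{\tau} = \frac{1}{(1-m\sigma\lambda)(1+m\beta\lambda)}
    ```

    The wind-tunnel measurements described above serve to constrain constants such as c, β and m for live canopies rather than artificial imitations.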

  6. Explicit Modeling of Ancestry Improves Polygenic Risk Scores and BLUP Prediction. (United States)

    Chen, Chia-Yen; Han, Jiali; Hunter, David J; Kraft, Peter; Price, Alkes L


    Polygenic prediction using genome-wide SNPs can provide high prediction accuracy for complex traits. Here, we investigate the question of how to account for genetic ancestry when conducting polygenic prediction. We show that the accuracy of polygenic prediction in structured populations may be partly due to genetic ancestry. However, we hypothesized that explicitly modeling ancestry could improve polygenic prediction accuracy. We analyzed three GWAS of hair color (HC), tanning ability (TA), and basal cell carcinoma (BCC) in European Americans (sample size from 7,440 to 9,822) and considered two widely used polygenic prediction approaches: polygenic risk scores (PRSs) and best linear unbiased prediction (BLUP). We compared polygenic prediction without correction for ancestry to polygenic prediction with ancestry as a separate component in the model. In 10-fold cross-validation using the PRS approach, the R(2) for HC increased by 66% (0.0456-0.0755; P ancestry, which prevents ancestry effects from entering into each SNP effect and being overweighted. Surprisingly, explicitly modeling ancestry produces a similar improvement when using the BLUP approach, which fits all SNPs simultaneously in a single variance component and causes ancestry to be underweighted. We validate our findings via simulations, which show that the differences in prediction accuracy will increase in magnitude as sample sizes increase. In summary, our results show that explicitly modeling ancestry can be important in both PRS and BLUP prediction. © 2015 WILEY PERIODICALS, INC.
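    The paper's starting observation, that part of a PRS's apparent accuracy can come from ancestry structure, can be illustrated with a small simulation. This is not the authors' PRS/BLUP pipeline: a single simulated ancestry component stands in for principal components, and "modeling ancestry separately" is approximated here by partialling the component out of both score and trait before computing R²:

    ```python
    import random

    def simple_r2(x, y):
        """Squared Pearson correlation = R^2 of a one-predictor OLS fit."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy * sxy / (sxx * syy)

    def residualise(y, x):
        """Residuals of y after regressing out x (simple OLS slope)."""
        n = len(y)
        mx, my = sum(x) / n, sum(y) / n
        beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
                / sum((a - mx) ** 2 for a in x))
        return [b - my - beta * (a - mx) for a, b in zip(x, y)]

    random.seed(1)
    n = 2000
    ancestry = [random.gauss(0, 1) for _ in range(n)]          # e.g. PC1
    genetic  = [random.gauss(0, 1) for _ in range(n)]          # true signal
    score    = [g + 0.8 * a for g, a in zip(genetic, ancestry)]  # confounded PRS
    trait    = [g + 0.8 * a + random.gauss(0, 1)
                for g, a in zip(genetic, ancestry)]

    naive    = simple_r2(score, trait)
    adjusted = simple_r2(residualise(score, ancestry),
                         residualise(trait, ancestry))
    print(round(naive, 3), round(adjusted, 3))
    ```

    The naive R² exceeds the ancestry-adjusted R², showing the inflation that shared ancestry contributes, which is exactly why accounting for ancestry explicitly, rather than letting it leak into SNP effects, matters in both PRS and BLUP prediction.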

  7. Explicit modeling of ancestry improves polygenic risk scores and BLUP prediction (United States)

    Chen, Chia-Yen; Han, Jiali; Hunter, David J.; Kraft, Peter; Price, Alkes L.


    Polygenic prediction using genome-wide SNPs can provide high prediction accuracy for complex traits. Here, we investigate the question of how to account for genetic ancestry when conducting polygenic prediction. We show that the accuracy of polygenic prediction in structured populations may be partly due to genetic ancestry. However, we hypothesized that explicitly modeling ancestry could improve polygenic prediction accuracy. We analyzed three GWAS of hair color, tanning ability and basal cell carcinoma (BCC) in European Americans (sample size from 7,440 to 9,822) and considered two widely used polygenic prediction approaches: polygenic risk scores (PRS) and Best Linear Unbiased Prediction (BLUP). We compared polygenic prediction without correction for ancestry to polygenic prediction with ancestry as a separate component in the model. In 10-fold cross-validation using the PRS approach, the R2 for hair color increased by 66% (0.0456 to 0.0755; pancestry, which prevents ancestry effects from entering into each SNP effect and being over-weighted. Surprisingly, explicitly modeling ancestry produces a similar improvement when using the BLUP approach, which fits all SNPs simultaneously in a single variance component and causes ancestry to be underweighted. We validate our findings via simulations, which show that the differences in prediction accuracy will increase in magnitude as sample sizes increase. In summary, our results show that explicitly modeling ancestry can be important in both PRS and BLUP prediction. PMID:25995153

  8. Crop model improvement reduces the uncertainty of the response to temperature of multi-model ensembles

    DEFF Research Database (Denmark)

    Maiorano, Andrea; Martre, Pierre; Asseng, Senthold


    ensemble percentile range) of grain yields simulated by the MME on average by 39% in the calibration data set and by 26% in the independent evaluation data set for crops grown in mean seasonal temperatures >24 °C. MME mean squared error in simulating grain yield decreased by 37%. A reduction in MME...... uncertainty range by 27% increased MME prediction skills by 47%. Results suggest that the mean level of variation observed in field experiments and used as a benchmark can be reached with half the number of models in the MME. Improving crop models is therefore important to increase the certainty of model...

  9. Problem solving based learning model with multiple representations to improve student's mental modelling ability on physics (United States)

    Haili, Hasnawati; Maknun, Johar; Siahaan, Parsaoran


    Physics is a subject closely related to students' daily experience. Before studying it formally in class, students already hold visualizations and prior knowledge about natural phenomena, which they can extend on their own. The learning process in class should therefore aim to detect, process, construct, and use students' mental models, so that those models align with and are built on the right concepts. A previous study held in MAN 1 Muna reports that teachers did not attend to students' mental models during instruction; as a consequence, the learning process had not tried to build students' mental modelling ability (MMA). The purpose of this study is to describe the improvement of students' MMA as an effect of a problem-solving-based learning model with a multiple representations approach. The study uses a pre-experimental, one-group pretest-posttest design, conducted in class XI IPA of MAN 1 Muna in 2016/2017. Data collection uses a problem-solving test on the kinetic theory of gases and interviews to assess students' MMA. Students' MMA is classified into three categories by score x: High Mental Modelling Ability (H-MMA) for x > 7, Medium Mental Modelling Ability (M-MMA) for 3 < x ≤ 7, and Low Mental Modelling Ability (L-MMA) for 0 ≤ x ≤ 3. The results show that a problem-solving-based learning model with a multiple representations approach can be an alternative approach for improving students' MMA.

  10. More Similar Than Different

    DEFF Research Database (Denmark)

    Pedersen, Mogens Jin


    What role do employee features play in the success of different personnel management practices for achieving high performance? Using data from a randomized survey experiment among 5,982 individuals of all ages, this article examines how gender conditions the compliance effects of different...... incentive treatments—each relating to the basic content of distinct types of personnel management practices. The findings show that males and females are more similar than different in terms of the incentive treatments’ effects: Significant average effects are found for three out of five incentive...

  11. Improving Patient Flow Utilizing a Collaborative Learning Model. (United States)

    Tibor, Laura C; Schultz, Stacy R; Cravath, Julie L; Rein, Russell R; Krecke, Karl N


    This initiative utilized a collaborative learning approach to increase knowledge and experience in process improvement and systems thinking while targeting improved patient flow in seven radiology modalities. Teams showed improvements in their project metrics and collectively streamlined the flow for 530 patients per day by improving patient lead time, wait time, and first case on-time start rates. In a post-project survey of 50 project team members, 82% stated they had more effective solutions as a result of the process improvement methodology, 84% stated they will be able to utilize the process improvement tools again in the future, and 98% would recommend participating in another project to a colleague.

  12. Making change last: applying the NHS institute for innovation and improvement sustainability model to healthcare improvement. (United States)

    Doyle, Cathal; Howe, Cathy; Woodcock, Thomas; Myron, Rowan; Phekoo, Karen; McNicholas, Chris; Saffer, Jessica; Bell, Derek


    The implementation of evidence-based treatments to deliver high-quality care is essential to meet the healthcare demands of aging populations. However, the sustainable application of recommended practice is difficult to achieve, and variable outcomes are well recognised. The NHS Institute for Innovation and Improvement Sustainability Model (SM) was designed to help healthcare teams recognise determinants of sustainability and take action to embed new practice in routine care. This article describes a formative evaluation of the application of the SM by the National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care for Northwest London (CLAHRC NWL). Data from project teams' responses to the SM and formal reviews were used to assess acceptability of the SM and the extent to which it prompted teams to take action. Projects were classified as 'engaged,' 'partially engaged' and 'non-engaged.' Quarterly survey feedback data was used to explore reasons for variation in engagement. Score patterns were compared against formal review data and a 'diversity of opinion' measure was derived to assess response variance over time. Of the 19 teams, six were categorized as 'engaged,' six 'partially engaged,' and seven as 'non-engaged.' Twelve teams found the model acceptable to some extent. Diversity of opinion reduced over time. A minority of teams used the SM consistently to take action to promote sustainability, but for the majority SM use was sporadic. Feedback from some team members indicates difficulty in understanding and applying the model and negative views regarding its usefulness. The SM is an important attempt to enable teams to systematically consider determinants of sustainability, provide timely data to assess progress, and prompt action to create conditions for sustained practice. Tools such as these need to be tested in healthcare settings to assess strengths and weaknesses and findings disseminated to aid development.

  13. On-Line Core Thermal-Hydraulic Model Improvement

    Energy Technology Data Exchange (ETDEWEB)

    In, Wang Kee; Chun, Tae Hyun; Oh, Dong Seok; Shin, Chang Hwan; Hwang, Dae Hyun; Seo, Kyung Won


    The objective of this project is to implement a fast-running, 4-channel-based code, CETOP-D, in an advanced reactor core protection calculator system (RCOPS). The parts required for the on-line calculation of DNBR were extracted from the CETOP-D source code, based on an analysis of that code. The CETOP-D code was revised to maintain input and output variables identical to those of the CPC DNBR module. Since the DNBR module performs a complex calculation, it is divided into sub-modules per major calculation step. The functional design requirements for the DNBR module are documented and the values of the database (DB) constants were decided. This project also developed a Fortran module (BEST) of the RCOPS Fortran Simulator and a computer code, RCOPS-SDNBR, to independently calculate DNBR. A test was also conducted to verify the functional design and DB of the thermal-hydraulic model needed to calculate the DNBR on-line in RCOPS. The DNBR margin is expected to increase by 2%-3% once the CETOP-D code is used to calculate the RCOPS DNBR. It should be noted that the final DNBR margin improvement will be determined in the future based on an overall uncertainty analysis of the RCOPS.

  14. Improving models to predict phenological responses to global change

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, Andrew D. [Harvard College, Cambridge, MA (United States)


    The term phenology describes both the seasonal rhythms of plants and animals, and the study of these rhythms. Plant phenological processes, including, for example, when leaves emerge in the spring and change color in the autumn, are highly responsive to variation in weather (e.g. a warm vs. cold spring) as well as longer-term changes in climate (e.g. warming trends and changes in the timing and amount of rainfall). We conducted a study to investigate the phenological response of northern peatland communities to global change. Field work was conducted at the SPRUCE experiment in northern Minnesota, where we installed 10 digital cameras. Imagery from the cameras is being used to track shifts in plant phenology driven by elevated carbon dioxide and elevated temperature in the different SPRUCE experimental treatments. Camera imagery and derived products (“greenness”) are being posted in near-real time on a publicly available web page. The images will provide a permanent visual record of the progression of the experiment over the next 10 years. Integrated with other measurements collected as part of the SPRUCE program, this study is providing insight into the degree to which phenology may mediate future shifts in carbon uptake and storage by peatland ecosystems. In the future, these data will be used to develop improved models of vegetation phenology, which will be tested against ground observations collected by a local collaborator.

  15. Applying Quality Function Deployment Model in Burn Unit Service Improvement. (United States)

    Keshtkaran, Ali; Hashemi, Neda; Kharazmi, Erfan; Abbasi, Mehdi


    Quality function deployment (QFD) is one of the most effective quality design tools. This study applies the QFD technique to improve the quality of burn unit services in Ghotbedin Hospital in Shiraz, Iran. First, the patients' expectations of burn unit services and their priorities were determined through the Delphi method. Thereafter, burn unit service specifications were determined, also through the Delphi method. Further, the relationships between the patients' expectations and service specifications, and the relationships among service specifications, were determined through an expert group's opinion. Last, the final importance scores of service specifications were calculated through the simple additive weighting method. The findings show that burn unit patients have 40 expectations in six different areas, spanning 16 priority levels. Burn units also have 45 service specifications in six different areas. There are four-level relationships between the patients' expectations and service specifications and four-level relationships among service specifications. The most important burn unit service specifications have been identified in this study. The QFD model developed in the study can serve as a general guideline for QFD planners and executives.
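The final scoring step described here, simple additive weighting over the expectation-specification relationship matrix, can be sketched as follows. The matrix, the weights, and the 0/1/3/9 relationship scale are illustrative assumptions, not the study's data:

```python
import numpy as np

# Hypothetical QFD relationship matrix: rows = patient expectations,
# columns = service specifications; entries use the common 0/1/3/9 scale.
relationship = np.array([
    [9, 3, 0],
    [3, 9, 1],
    [0, 1, 9],
], dtype=float)

# Importance weights of the expectations (e.g. from the Delphi prioritisation),
# normalised to sum to 1 as simple additive weighting requires.
weights = np.array([0.5, 0.3, 0.2])
weights = weights / weights.sum()

# Final importance of each service specification: weighted column sums.
spec_scores = weights @ relationship
ranking = np.argsort(spec_scores)[::-1]     # specifications, most important first
print("specification scores:", spec_scores) # → [5.4 4.4 2.1]
print("ranked specifications:", ranking)
```

With these toy numbers, the first specification dominates because it relates strongly (9) to the highest-weighted expectation.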

  16. Improving the representation of soluble iron in climate models

    Energy Technology Data Exchange (ETDEWEB)

    Mahowald, Natalie [Cornell Univ., Ithaca, NY (United States)


    Funding from this grant supported Rachel Scanza, Yan Zhang and, partially, Samuel Albani. Substantial progress has been made on the inclusion of mineralogy, showing the quality of the simulations and the impact on radiation in CAM4 and CAM5 (Scanza et al., 2015). In addition, the elemental distribution has been evaluated (partially supported by this grant) (Zhang et al., 2015), showing that with spatial distributions of mineralogy, improved representations of Fe, Ca and Al are possible, compared to the limited available data. A new intermediate-complexity soluble iron scheme was implemented in the Bulk Aerosol Model (BAM), completed as part of Rachel Scanza’s PhD thesis. Currently Rachel is writing at least two first-author papers describing the general methods and comparison to observations (Scanza et al., in prep.), as well as papers describing the sensitivity to preindustrial conditions and interannual variability. This work led to the lead PI being asked to write a commentary in Nature (Mahowald, 2013) and two review papers (Mahowald et al., 2014, Mahowald et al., submitted) and contributed to related papers (Albani et al., 2016, Albani et al., 2014, Albani et al., 2015).

  17. Developing a particle tracking surrogate model to improve inversion of ground water - Surface water models (United States)

    Cousquer, Yohann; Pryet, Alexandre; Atteia, Olivier; Ferré, Ty P. A.; Delbart, Célestine; Valois, Rémi; Dupuy, Alain


    The inverse problem of groundwater models is often ill-posed and model parameters are likely to be poorly constrained. Identifiability is improved if diverse data types are used for parameter estimation. However, some models, including detailed solute transport models, are further limited by prohibitive computation times. This often precludes the use of concentration data for parameter estimation, even if those data are available. In the case of surface water-groundwater (SW-GW) models, concentration data can provide SW-GW mixing ratios, which efficiently constrain the estimate of exchange flow, but are rarely used. We propose to reduce computational limits by simulating SW-GW exchange at a sink (well or drain) based on particle tracking under steady state flow conditions. Particle tracking is used to simulate advective transport. A comparison between the particle tracking surrogate model and an advective-dispersive model shows that dispersion can often be neglected when the mixing ratio is computed for a sink, allowing for use of the particle tracking surrogate model. The surrogate model was implemented to solve the inverse problem for a real SW-GW transport problem with heads and concentrations combined in a weighted hybrid objective function. The resulting inversion showed markedly reduced uncertainty in the transmissivity field compared to calibration on head data alone.
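A minimal sketch of how a particle-tracking surrogate yields the SW-GW mixing ratio at a sink: back-tracked particles released at the sink are assumed to carry flow weights and be labeled by where they terminate. The labels and weights below are hypothetical, not the study's model:

```python
def sw_mixing_ratio(endpoints, weights):
    """Surface-water fraction of a sink's discharge: the flow-weighted share of
    back-tracked particles that terminate in the surface-water body ("sw")
    rather than at a groundwater boundary ("gw")."""
    total = sum(weights)
    sw = sum(w for e, w in zip(endpoints, weights) if e == "sw")
    return sw / total

# Four hypothetical particles: three traced back to the river, one to the
# regional aquifer, each weighted by the flow it represents.
ratio = sw_mixing_ratio(["sw", "sw", "gw", "sw"], [1.0, 2.0, 5.0, 2.0])
print(ratio)  # → 0.5
```

Because only advective endpoints are needed, this quantity is cheap to recompute inside an inversion loop, which is the computational advantage the abstract describes.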

  18. Lithologic data improve plant species distribution models based on coarse-grained occurrence data

    Energy Technology Data Exchange (ETDEWEB)

    Gaston, A.; Soriano, C.; Gomez-Miguel, V.


    The aim of this study was to assess the improvement of plant species distribution models based on coarse-grained occurrence data when adding lithologic data to climatic models. The distributions of 40 woody plant species from continental Spain were modelled. A logistic regression model with climatic predictors was fitted for each species and compared to a second model with climatic and lithologic predictors. Improvements in model likelihood and prediction accuracy on validation subsamples were assessed, as well as the effect of calcicole-calcifuge habit on model improvement. Climatic models had reasonable mean prediction accuracy, but adding lithologic data improved model likelihood in most cases and increased mean prediction accuracy. Therefore, we recommend utilizing lithologic data for species distribution models based on coarse-grained occurrence data. Our data did not support the hypothesis that calcicole-calcifuge habit may explain model improvement when adding lithologic data to climatic models, but further research is needed. (Author) 31 refs.
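The comparison described, a climatic-only logistic regression versus a climatic-plus-lithologic one judged by model likelihood, can be sketched on synthetic data. The predictors, effect sizes, and sample size below are assumptions for illustration, not the study's data:

```python
import numpy as np

def fit_logistic(X, y, iters=50):
    """Logistic regression fit by Newton-Raphson; returns coefficients and log-likelihood."""
    X = np.column_stack([np.ones(len(y)), X])   # prepend intercept
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        H = X.T @ (X * (p * (1 - p))[:, None])  # observed information matrix
        b += np.linalg.solve(H, X.T @ (y - p))  # Newton step
    p = 1.0 / (1.0 + np.exp(-X @ b))
    return b, np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(1)
n = 400
climate = rng.normal(0, 1, n)           # standardised climatic predictor (illustrative)
limestone = rng.binomial(1, 0.5, n)     # lithology indicator (hypothetical calcicole species)
logit = -0.3 + 1.0 * climate + 1.2 * limestone
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit))).astype(float)

_, ll_clim = fit_logistic(climate[:, None], y)
_, ll_full = fit_logistic(np.column_stack([climate, limestone]), y)
lr = 2 * (ll_full - ll_clim)            # ~ chi-squared(1) if lithology added nothing
print(f"log-likelihood, climate only: {ll_clim:.1f}")
print(f"log-likelihood, + lithology:  {ll_full:.1f}  (LR statistic {lr:.1f})")
```

When the lithologic term truly matters, as simulated here, the likelihood-ratio statistic far exceeds the chi-squared(1) critical value, the same kind of likelihood improvement the study reports.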

  19. Intelligent Models Performance Improvement Based on Wavelet Algorithm and Logarithmic Transformations in Suspended Sediment Estimation

    Directory of Open Access Journals (Sweden)

    R. Hajiabadi


    data are applied to model training and one year is estimated by each model. The accuracy of the models is evaluated by three indexes: mean absolute error (MAE), root mean squared error (RMSE) and the Nash-Sutcliffe coefficient (NS). Results and Discussion: To estimate suspended sediment load with the intelligent models, different input combinations for model training were evaluated. The best input combination for each intelligent model was then determined, and preprocessing was done only for that combination. Two logarithmic transforms, LN and LOG, were considered for data transformation. The Daubechies wavelet family was used for the wavelet transforms. Results indicate that denoising increases the Nash-Sutcliffe criterion in ANN and GEP by 0.15 and 0.14, respectively. Furthermore, the RMSE value was reduced from 199.24 to 141.17 (mg/l) in ANN and from 234.84 to 193.89 (mg/l) in GEP. The impact of the logarithmic transformation approach on the ANN result improvement is similar to that of the denoising approach, while the logarithmic transformation has an adverse impact on GEP: the Nash-Sutcliffe criterion, after Ln and Log transformations as preprocessing in the GEP model, was reduced from 0.57 to 0.31 and 0.21, respectively, and the RMSE value increased from 234.84 to 298.41 (mg/l) and 318.72 (mg/l), respectively. Results show that data denoising by wavelet transform is effective in improving the accuracy of both intelligent models, while data transformation by logarithmic transformation causes improvement only in the artificial neural network. Results of the ANN model reveal that data transformation by the LN transfer is better than the LOG transfer, though both transfer functions improve ANN results. Denoising by different wavelet transforms (Daubechies family) indicates that in ANN models the wavelet function Db2 is more effective and causes more improvement, while in GEP models the wavelet function Db1 (Haar) is better. Conclusions: In the present study, two different
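The three evaluation indexes named in the record can be computed directly; the observed and simulated sediment values below are made up for illustration:

```python
import numpy as np

def mae(obs, sim):
    """Mean absolute error."""
    return np.mean(np.abs(obs - sim))

def rmse(obs, sim):
    """Root mean squared error."""
    return np.sqrt(np.mean((obs - sim) ** 2))

def nash_sutcliffe(obs, sim):
    """NS = 1 - SSE / variance of obs about its mean; 1 is a perfect fit,
    values below 0 mean the model is worse than predicting the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

obs = np.array([120.0, 250.0, 90.0, 400.0, 180.0])   # observed suspended sediment (mg/l)
sim = np.array([140.0, 230.0, 110.0, 370.0, 200.0])  # model estimates (mg/l)

print(f"MAE  = {mae(obs, sim):.2f} mg/l")   # → 22.00 mg/l
print(f"RMSE = {rmse(obs, sim):.2f} mg/l")  # → 22.36 mg/l
print(f"NS   = {nash_sutcliffe(obs, sim):.3f}")
```

RMSE penalizes large errors more than MAE, while NS normalizes the error against the natural variability of the observations, which is why the record reports all three.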

  20. Two-Dimensional Magnetotelluric Modelling of Ore Deposits: Improvements in Model Constraints by Inclusion of Borehole Measurements (United States)

    Kalscheuer, Thomas; Juhojuntti, Niklas; Vaittinen, Katri


    functions is used as the initial model for the inversion of the surface impedances, skin-effect transfer functions and vertical magnetic and electric transfer functions. For both synthetic examples, the inversion models resulting from surface and borehole measurements have higher similarity to the true models than models computed exclusively from surface measurements. However, the most prominent improvements were obtained for the first example, in which a deep small-sized ore body is more easily distinguished from a shallow main ore body penetrated by a borehole and the extent of the shadow zone (a conductive artefact) underneath the main conductor is strongly reduced. Formal model error and resolution analysis demonstrated that predominantly the skin-effect transfer functions improve model resolution at depth below the sensors and at distance of ˜ 300-1000 m laterally off a borehole, whereas the vertical electric and magnetic transfer functions improve resolution along the borehole and in its immediate vicinity. Furthermore, we studied the signal levels at depth and provided specifications of borehole magnetic and electric field sensors to be developed in a future project. Our results suggest that three-component SQUID and fluxgate magnetometers should be developed to facilitate borehole MT measurements at signal frequencies above and below 1 Hz, respectively.

  1. Animal models to improve our understanding and treatment of suicidal behavior. (United States)

    Gould, T D; Georgiou, P; Brenner, L A; Brundin, L; Can, A; Courtet, P; Donaldson, Z R; Dwivedi, Y; Guillaume, S; Gottesman, I I; Kanekar, S; Lowry, C A; Renshaw, P F; Rujescu, D; Smith, E G; Turecki, G; Zanos, P; Zarate, C A; Zunszain, P A; Postolache, T T


    Worldwide, suicide is a leading cause of death. Although a sizable proportion of deaths by suicide may be preventable, it is well documented that, despite major governmental and international investments in research, education and clinical practice, suicide rates have not diminished and are even increasing among several at-risk populations. Although nonhuman animals do not engage in suicidal behavior amenable to translational studies, we argue that animal model systems are necessary to investigate candidate endophenotypes of suicidal behavior and the neurobiology underlying these endophenotypes. Animal models are similarly a critical resource to help delineate treatment targets and pharmacological means to improve our ability to manage the risk of suicide. In particular, certain pathophysiological pathways to suicidal behavior, including stress and hypothalamic-pituitary-adrenal axis dysfunction, neurotransmitter system abnormalities, endocrine and neuroimmune changes, aggression, impulsivity and decision-making deficits, as well as the role of critical interactions between genetic and epigenetic factors, development and environmental risk factors, can be modeled in laboratory animals. We broadly describe human biological findings, as well as protective effects of medications such as lithium, clozapine, and ketamine associated with modifying risk of engaging in suicidal behavior that are readily translatable to animal models. Endophenotypes of suicidal behavior, studied in animal models, are further useful for moving observed associations with harmful environmental factors (for example, childhood adversity, mechanical trauma, aeroallergens, pathogens, inflammation triggers) from association to causation, and for developing preventative strategies. Further study in animals will contribute to a more informed, comprehensive, accelerated and ultimately impactful suicide research portfolio.

  2. Improving Predictive Modeling in Pediatric Drug Development: Pharmacokinetics, Pharmacodynamics, and Mechanistic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Slikker, William; Young, John F.; Corley, Rick A.; Dorman, David C.; Conolly, Rory B.; Knudsen, Thomas; Erstad, Brian L.; Luecke, Richard H.; Faustman, Elaine M.; Timchalk, Chuck; Mattison, Donald R.


    A workshop was conducted on November 18-19, 2004, to address the issue of improving predictive models for drug delivery to developing humans. Although considerable progress has been made for adult humans, large gaps remain for predicting pharmacokinetic/pharmacodynamic (PK/PD) outcome in children because most adult models have not been tested during development. The goals of the meeting included a description of when, during development, infants/children become adultlike in handling drugs. The issue of incorporating the most recent advances into the predictive models was also addressed: both the use of imaging approaches and genomic information were considered. Disease state, as exemplified by obesity, was addressed as a modifier of drug pharmacokinetics and pharmacodynamics during development. Issues addressed in this workshop should be considered in the development of new predictive and mechanistic models of drug kinetics and dynamics in the developing human.

  3. Improved Regional Climate Model Simulation of Precipitation by a Dynamical Coupling to a Hydrology Model

    DEFF Research Database (Denmark)

    Larsen, Morten Andreas Dahl; Drews, Martin; Hesselbjerg Christensen, Jens

    convective precipitation systems. As a result, climate model simulations, let alone future projections of precipitation, often exhibit substantial biases. Here we show that the dynamical coupling of a regional climate model to a detailed fully distributed hydrological model - including groundwater-, overland...... of local precipitation dynamics are seen for time scales of approximately seasonal duration and longer. We show that these results can be attributed to a more complete treatment of land surface feedbacks. The local scale effect on the atmosphere suggests that coupled high-resolution climate-hydrology models...... including a detailed 3D redistribution of sub- and land surface water have a significant potential for improving climate projections even diminishing the need for bias correction in climate-hydrology studies....

  4. Improved Formulations for Air-Surface Exchanges Related to National Security Needs: Dry Deposition Models

    Energy Technology Data Exchange (ETDEWEB)

    Droppo, James G.


    The Department of Homeland Security and others rely on results from atmospheric dispersion models for threat evaluation, event management, and post-event analyses. The ability to simulate dry deposition rates is a crucial part of our emergency preparedness capabilities. Deposited materials pose potential hazards from radioactive shine, inhalation, and ingestion pathways. A reliable characterization of these potential exposures is critical for management and mitigation of these hazards. A review of the current status of dry deposition formulations used in these atmospheric dispersion models was conducted. The formulations for dry deposition of particulate materials from an event such as a radiological attack involving a radiological dispersal device (RDD) are considered. The results of this effort are applicable to current emergency preparedness capabilities such as are deployed in the Interagency Modeling and Atmospheric Assessment Center (IMAAC), other similar national/regional emergency response systems, and standalone emergency response models. The review concludes that dry deposition formulations need to consider the full range of particle sizes, including: 1) the accumulation-mode range (0.1 to 1 micron diameter) and its minimum in deposition velocity, 2) smaller particles (less than 0.01 micron diameter) deposited mainly by molecular diffusion, 3) 10 to 50 micron diameter particles deposited mainly by impaction and gravitational settling, and 4) larger particles (greater than 100 micron diameter) deposited mainly by gravitational settling. The effects of the local turbulence intensity, particle characteristics, and surface element properties must also be addressed in the formulations.
Specific areas for improvement in the dry deposition formulations are 1) capability of simulating near-field dry deposition patterns, 2) capability of addressing the full range of potential particle properties, 3) incorporation of particle surface retention/rebound processes, and

  5. Understanding and Improving Ocean Mixing Parameterizations for modeling Climate Change (United States)

    Howard, A. M.; Fells, J.; Clarke, J.; Cheng, Y.; Canuto, V.; Dubovikov, M. S.


    Climate is vital. Earth is only habitable due to the atmosphere and oceans' distribution of energy. Our greenhouse gas emissions shift the overall balance between absorbed and emitted radiation, causing global warming. How much of these emissions are stored in the ocean vs. entering the atmosphere to cause warming, and how the extra heat is distributed, depends on atmosphere and ocean dynamics, which we must understand to know the risks of both progressive climate change and climate variability, which affect us all in many ways including extreme weather, floods, droughts, sea-level rise and ecosystem disruption. Citizens must be informed to make decisions such as "business as usual" vs. mitigating emissions to avert catastrophe. Simulations of climate change provide needed knowledge but in turn need reliable parameterizations of key physical processes, including ocean mixing, which greatly impacts the transport and storage of heat and dissolved CO2. The turbulence group at NASA-GISS seeks to use physical theory to improve parameterizations of ocean mixing, including small-scale convective, shear-driven, double diffusive, internal wave and tidal driven vertical mixing, as well as mixing by submesoscale eddies, and lateral mixing along isopycnals by mesoscale eddies. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. We write our own programs in MATLAB and FORTRAN to visualize and process output of ocean simulations, including producing statistics to help judge the impacts of different parameterizations on fidelity in reproducing realistic temperatures and salinities, diffusivities and turbulent power. The results can help upgrade the parameterizations. Students are introduced to complex system modeling and gain deeper appreciation of climate science and programming skills, while furthering climate science. We are incorporating climate projects into the Medgar Evers college curriculum. The PI is both a member of the turbulence group at

  6. Synthetic Ground-Motion Simulation Using a Spatial Stochastic Model with Slip Self-Similarity: Toward Near-Source Ground-Motion Validation

    Directory of Open Access Journals (Sweden)

    Ya-Ting Lee


    Near-fault ground motion is key to understanding the seismic hazard along a fault and is a challenge for the ground-motion prediction equation approach. This paper presents a developed stochastic-slip-scaling source model, a spatial stochastic model with slipped-area scaling, for ground motion simulation. We considered the near-fault ground motion of the 1999 Chi-Chi earthquake in Taiwan, a devastating near-fault earthquake, with the source model proposed by Ma et al. (2001) as a reference for validation. Three scenario source models, the developed stochastic-slip-scaling source model, a mean-slip model and a characteristic-asperity model, were used for the near-fault ground motion examination. We simulated synthetic ground motion through 3D waveforms and validated these simulations using observed data and the ground-motion prediction equation (GMPE) for Taiwan earthquakes. The mean-slip and characteristic-asperity scenario source models over-predicted the near-fault ground motion. The stochastic-slip-scaling model proposed in this paper approximates the observed near-fault motion more closely than the GMPE does. This is the first study to incorporate slipped-area scaling in a stochastic slip model. The proposed model can generate scenario earthquakes for predicting ground motion.

  7. Batch-to-batch model improvement for cooling crystallization


    Forgione, Marco; Birpoutsoukis, Georgios; Bombois, Xavier; Mesbah, Ali; Daudey, Peter; Van Den Hof, Paul


    Two batch-to-batch model update strategies for model-based control of batch cooling crystallization are presented. In Iterative Learning Control, a nominal process model is adjusted by a non-parametric, additive correction term which depends on the difference between the measured output and the model prediction in the previous batch. In Iterative Identification Control, the uncertain model parameters are iteratively estimated using the measured batch data. Due to the d...
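The Iterative Learning Control idea in this record, an additive correction term driven by the previous batch's output error, can be sketched with a scalar toy process. The plant, the nominal model, and the learning gain of 0.5 are illustrative assumptions, not the paper's crystallization model:

```python
import numpy as np

def plant(u):
    """'True' process: the real batch response to input profile u (unknown to the controller)."""
    return 1.2 * u + 0.5

def model(u):
    """Nominal process model available to the controller (deliberately mismatched)."""
    return 1.0 * u

u = np.linspace(0, 1, 5)           # fixed input profile applied in every batch
correction = np.zeros_like(u)      # non-parametric additive correction, updated batch to batch

for batch in range(10):
    y_meas = plant(u)                          # measured output of the current batch
    y_pred = model(u) + correction             # corrected model prediction
    correction += 0.5 * (y_meas - y_pred)      # ILC-style update with learning gain 0.5

residual = np.max(np.abs(plant(u) - (model(u) + correction)))
print(f"max model-plant mismatch after 10 batches: {residual:.2e}")
```

With a gain of 0.5 the prediction error halves each batch, so the corrected model converges geometrically to the plant along the applied profile, which is the batch-to-batch improvement the abstract refers to.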

  8. Applying an orographic precipitation model to improve mass balance modeling of the Juneau Icefield, AK (United States)

    Roth, A. C.; Hock, R.; Schuler, T.; Bieniek, P.; Aschwanden, A.


    a distributed mass balance model for future mass balance modeling studies of the Juneau Icefield. The LT model has potential to be used in other regions in Alaska and elsewhere with strong orographic effects for improved glacier mass balance modeling and/or hydrological modeling.

  9. Why Is Improvement of Earth System Models so Elusive? Challenges and Strategies from Dust Aerosol Modeling (United States)

    Miller, Ronald L.; Garcia-Pando, Carlos Perez; Perlwitz, Jan; Ginoux, Paul


    Past decades have seen an accelerating increase in computing efficiency, while climate models are representing a rapidly widening set of physical processes. Yet simulations of some fundamental aspects of climate like precipitation or aerosol forcing remain highly uncertain and resistant to progress. Dust aerosol modeling of soil particles lofted by wind erosion has seen a similar conflict between increasing model sophistication and remaining uncertainty. Dust aerosols perturb the energy and water cycles by scattering radiation and acting as ice nuclei, while mediating atmospheric chemistry and marine photosynthesis (and thus the carbon cycle). These effects take place across scales from the dimensions of an ice crystal to the planetary-scale circulation that disperses dust far downwind of its parent soil. Representing this range leads to several modeling challenges. Should we limit complexity in our model, which consumes computer resources and inhibits interpretation? How do we decide if a process involving dust is worthy of inclusion within our model? Can we identify a minimal representation of a complex process that is efficient yet retains the physics relevant to climate? Answering these questions about the appropriate degree of representation is guided by model evaluation, which presents several more challenges. How do we proceed if the available observations do not directly constrain our process of interest? (This could result from competing processes that influence the observed variable and obscure the signature of our process of interest.) Examples will be presented from dust modeling, with lessons that might be more broadly applicable. The end result will either be clinical depression or the reassuring promise of continued gainful employment as the community confronts these challenges.

  10. Molecular structure based property modeling: Development/ improvement of property models through a systematic property-data-model analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol Shivajirao; Sarup, Bent; Sin, Gürkan


    The objective of this work is to develop a method for performing property-data-model analysis so that efficient use of knowledge of properties could be made in the development/improvement of property prediction models. The method includes: (i) analysis of property data and its consistency check; (ii) selection of the most appropriate form of the property model; (iii) selection of the data-set for performing parameter regression and uncertainty analysis; and (iv) analysis of model prediction errors to take necessary corrective steps to improve the accuracy and the reliability of property predictions. In addition, a method for selecting a minimum data-set for the parameter regression is discussed for cases where it is preferred to retain some data-points from the total data-set to test the reliability of predictions for validation purposes.
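Steps (iii) and (iv) can be sketched as follows. This is an illustrative outline, not the authors' actual method: it assumes a simple polynomial model form and a random hold-out split, both of which are hypothetical choices.

```python
import numpy as np

def fit_and_validate(x, y, degree=1, train_frac=0.8, seed=0):
    """Hold back part of the data-set for validation, regress the
    model parameters on the remainder (step iii), and return the
    validation residuals for the error analysis of step (iv)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_train = int(train_frac * len(x))
    train, valid = idx[:n_train], idx[n_train:]
    coeffs = np.polyfit(x[train], y[train], degree)     # parameter regression
    residuals = y[valid] - np.polyval(coeffs, x[valid])  # prediction errors
    return coeffs, residuals
```

Large or structured validation residuals would indicate that a different model form (step ii) or a corrective step (step iv) is needed.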

  11. Impact of an improved WRF urban canopy model on diurnal air temperature simulation over northern Taiwan

    Directory of Open Access Journals (Sweden)

    C.-Y. Lin


    Full Text Available This study evaluates the impact of urbanization over northern Taiwan using the Weather Research and Forecasting (WRF) Model coupled with the Noah land-surface model and a modified urban canopy model (WRF–UCM2D). In the original UCM coupled to WRF (WRF–UCM), when the land use in a model grid is identified as "urban", the urban fraction value is fixed. Similarly, the UCM assumes the distribution of anthropogenic heat (AH) to be constant. This may not only lead to over- or underestimation of the urban fraction and AH in urban and non-urban areas; the neglected spatial variation also affects the model-estimated temperature. To overcome these limitations and to improve the performance of the original UCM, WRF–UCM was modified to consider 2-D urban fraction and AH (WRF–UCM2D). The two models were found to have comparable temperature simulation performance for urban areas, but large differences in simulated results were observed for non-urban areas, especially at nighttime. WRF–UCM2D yielded a higher correlation coefficient (R2) than WRF–UCM (0.72 vs. 0.48, respectively), while the bias and RMSE achieved by WRF–UCM2D were both significantly smaller than those attained by WRF–UCM (0.27 and 1.27 vs. 1.12 and 1.89, respectively). In other words, the improved model not only enhanced correlation but also reduced bias and RMSE for the nighttime data of non-urban areas. WRF–UCM2D performed much better than WRF–UCM at non-urban stations with a low urban fraction during nighttime. The improved simulation performance of WRF–UCM2D in non-urban areas is attributed to the energy exchange, which enables efficient turbulence mixing at a low urban fraction. The result of this study has crucial implications for assessing the impacts of urbanization on air quality and regional climate.

  12. Vacuum fused deposition modelling system to improve tensile ...

    African Journals Online (AJOL)

    This paper presents a possible solution to this problem by incorporating vacuum technology in FDM system to improve tensile strength of 3D printed specimens. In this study, a desktop FDM machine was placed and operated inside a low pressure vacuum chamber. The results obtained show an improvement of 12.83 % of ...

  13. A model of strategic product quality and process improvement incentives

    NARCIS (Netherlands)

    Veldman, Jasper; Gaalman, G.


    In many production firms it is common practice to financially reward managers for firm performance improvement. The use of financial incentives for improvement has been widely researched in several analytical and empirical studies. Literature has also addressed the strategic effect of incentives, in

  14. Teaching Improvement Model Designed with DEA Method and Management Matrix (United States)

    Montoneri, Bernard


    This study uses student evaluation of teachers to design a teaching improvement matrix based on teaching efficiency and performance by combining management matrix and data envelopment analysis. This matrix is designed to formulate suggestions to improve teaching. The research sample consists of 42 classes of freshmen following a course of English…

  15. A model of strategic product quality and process improvement incentives

    NARCIS (Netherlands)

    Veldman, Jasper; Gaalman, Gerard

    In many production firms it is common practice to financially reward managers for firm performance improvement. The use of financial incentives for improvement has been widely researched in several analytical and empirical studies. Literature has also addressed the strategic effect of incentives, in

  16. An improved Corten-Dolan's model based on damage and stress state effects

    International Nuclear Information System (INIS)

    Gao, Huiying; Huang, Hong Zhong; Lv, Zhiqiang; Zuo, Fang Jun; Wang, Hai Kun


    The value of the exponent d in Corten-Dolan's model is generally considered to be a constant. Nonetheless, the results predicted on the basis of this assumption deviate significantly from the real values. In consideration of the effects of damage and stress state on fatigue life prediction, Corten-Dolan's model is improved by redefining the exponent d used in the traditional model. The improved model performs better than the traditional one with respect to the demonstration of the fatigue failure mechanism. Predictions of fatigue life based on investigations of three metallic specimens indicate that the errors caused by the improved model are significantly smaller than those induced by the traditional model. Meanwhile, predictions derived from the improved model fall into a narrower dispersion zone than those made as per Miner's rule and the traditional model, which suggests that the proposed model improves the life prediction accuracy of the other two. The predictions obtained using the improved Corten-Dolan's model differ only slightly from those derived from a model proposed in previous literature, and a few of the former's life predictions are more accurate. The improved model is therefore shown to be rational and reliable given the proven validity of the existing model, and can feasibly and credibly be applied to damage accumulation and fatigue life prediction.
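For context, the traditional Corten-Dolan prediction with a constant exponent d can be sketched as below; the improved model instead makes d depend on damage and stress state. The value d = 4.8 here is purely illustrative, not taken from the paper.

```python
import numpy as np

def corten_dolan_life(stress, cycle_fraction, n1, d=4.8):
    """Classical Corten-Dolan life prediction under multi-level loading:
        N = N1 / sum_i alpha_i * (sigma_i / sigma_max)**d
    where sigma_max is the highest stress level, N1 the fatigue life at
    that level, alpha_i the cycle fraction at level i, and d the exponent
    that the improved model redefines (kept constant here)."""
    stress = np.asarray(stress, float)
    alpha = np.asarray(cycle_fraction, float)
    s_max = stress.max()
    return n1 / np.sum(alpha * (stress / s_max) ** d)
```

Spending part of the cycles at a lower stress level raises the predicted total life, as expected from the weighting by (sigma_i/sigma_max)**d.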

  17. Hydrological improvements for nutrient and pollutant emission modeling in large scale catchments (United States)

    Höllering, S.; Ihringer, J.


    An estimation of emissions and loads of nutrients and pollutants into European water bodies with as much accuracy as possible depends largely on knowledge of the spatially and temporally distributed hydrological runoff patterns. An improved hydrological water balance model for the pollutant emission model MoRE (Modeling of Regionalized Emissions) (IWG, 2011) has been introduced, which can form an adequate basis to simulate discharge in a hydrologically differentiated, land-use-based way and subsequently provide the required distributed discharge components. First of all, the hydrological model had to comply with requirements in both space and time in order to calculate the water balance with sufficient precision on the catchment scale, spatially distributed into sub-catchments and with a higher temporal resolution. Aiming to reproduce seasonal dynamics and the characteristic hydrological regimes of river catchments, a daily (instead of a yearly) time increment was applied, allowing for a more process-oriented simulation of discharge dynamics, volume and therefore water balance. The enhancement of the hydrological model also became necessary to account for the hydrological functioning of catchments with regard to scenarios of, e.g., a changing climate or alterations of land use. As a deterministic, partly physically based, conceptual hydrological watershed and water balance model, the Precipitation Runoff Modeling System (PRMS) (USGS, 2009) was selected to improve the hydrological input for MoRE. In PRMS the spatial discretization is implemented with sub-catchments and so-called hydrologic response units (HRUs), which are the hydrotropic, distributed, finite modeling entities, each having a homogeneous runoff reaction to hydro-meteorological events. Spatial structures and heterogeneities in sub-catchments, e.g. urbanity, land use and soil types, were identified to derive hydrological similarities and classify different urban and rural HRUs. In this way the

  18. Highly Adoptable Improvement: A Practical Model and Toolkit to Address Adoptability and Sustainability of Quality Improvement Initiatives. (United States)

    Hayes, Christopher William; Goldmann, Don


    Failure to consider the impact of change on health care providers is a barrier to success. Initiatives that increase workload and have low perceived value are less likely to be adopted. A practical model and supporting tools were developed on the basis of existing theories to help quality improvement (QI) programs design more adoptable approaches. Models and theories from the diffusion of innovation and work stress literature were reviewed, and key-informant interviews and site visits were conducted to develop a draft Highly Adoptable Improvement (HAI) Model. A list of candidate factors considered for inclusion in the draft model was presented to an expert panel. A modified Delphi process was used to narrow the list of factors into main themes and refine the model. The resulting model and supporting tools were pilot tested by 16 improvement advisors for face validity and usability. The HAI Model depicts how workload and perceived value influence adoptability of QI initiatives. The supporting tools include an assessment guide and suggested actions that QI programs can use to help design interventions that are likely to be adopted. Improvement advisors reported good face validity and usability and found that the model and the supporting tools helped address key issues related to adoption and reported that they would continue to use them. The HAI Model addresses important issues regarding workload and perceived value of improvement initiatives. Pilot testing suggests that the model and supporting tools are helpful and practical in guiding design and implementation of adoptable and sustainable QI interventions. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.

  19. Improving traffic signal management and operations : a basic service model. (United States)


    This report provides a guide for achieving a basic service model for traffic signal management and : operations. The basic service model is based on simply stated and defensible operational objectives : that consider the staffing level, expertise and...

  20. Improved analyses using function datasets and statistical modeling (United States)

    John S. Hogland; Nathaniel M. Anderson


    Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space and have limited statistical functionality and machine learning algorithms. To address this issue, we developed a new modeling framework using C# and ArcObjects and integrated that framework...

  1. Atomic scale simulations for improved CRUD and fuel performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). This report highlights results from publications in 2016 carried out using the CASL allocation at LANL.

  2. Improvement of TNO type trailing edge noise models

    DEFF Research Database (Denmark)

    Fischer, Andreas; Bertagnolio, Franck; Aagaard Madsen, Helge


    ... It is computed by solving a Poisson equation which includes flow turbulence cross-correlation terms. Previously published TNO type models used the assumption of Blake to simplify the Poisson equation. This paper shows that the simplification should not be used. We present a new model which fully models...

  3. Improvement of TNO type trailing edge noise models

    DEFF Research Database (Denmark)

    Fischer, Andreas; Bertagnolio, Franck; Aagaard Madsen, Helge


    ... It is computed by solving a Poisson equation which includes flow turbulence cross-correlation terms. Previously published TNO type models used the assumption of Blake to simplify the Poisson equation. This paper shows that the simplification should not be used. We present a new model which fully models...

  4. Improving models for describing phosphorus cycling in agricultural soils (United States)

    The mobility of phosphorus in the environment is controlled to a large extent by its sorption to soil. Therefore, an important component of all P loss models is how the model describes the biogeochemical processes governing P sorption and desorption to soils. The most common approach to modeling P c...

  5. Modeling of high gain helical antenna for improved performance ...

    African Journals Online (AJOL)

    The modeling of High Gain Helical Antenna structure is subdivided into three sections : introduction of helical structures ,Numerical analysis, modeling and simulation based on the parameters of helical antenna. The basic foundation software for the research paper is Matlab technical computing software, the modeling were ...

  6. Improved Analysis of Earth System Models and Observations using Simple Climate Models (United States)

    Nadiga, Balasubramanya; Urban, Nathan


    First-principles-based Earth System Models (ESMs) are central to both improving our understanding of the climate system and developing climate projections. Nevertheless, given the diversity of climate simulated by the various ESMs and the intense computational burden associated with running such models, simple climate models (SCMs) are key to being able to compare ESMs and the climates they simulate in a dynamically meaningful fashion. We present some preliminary work along these lines. In an application of an SCM to compare different ESMs and observations, we demonstrate a deficiency in the commonly-used upwelling-diffusion (UD) energy balance model (EBM). When we consider the vertical distribution of ocean heat uptake, the lack of representation of processes such as deep water formation and subduction in the UD-EBM precludes a reasonable representation of the vertical distribution of heat uptake in that model. We then demonstrate how the problem can be remedied by introducing a parameterization of such processes in the UD-EBM. With further development, it is anticipated that this approach of ESM inter-comparison using simple physics-based models will lead to further insights into aspects of the climate response such as its stability and sensitivity, uncertainty and predictability, and underlying flow structure and topology.
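A minimal upwelling-diffusion EBM of the kind discussed above can be sketched as a surface mixed layer exchanging heat with a diffusive-advective ocean column. All parameter values below are illustrative assumptions, not those used by the authors, and the scheme omits the deep-water-formation parameterization they propose.

```python
import numpy as np

def ud_ebm(forcing, nz=20, dz=100.0, dt=86400.0 * 30,
           kappa=1e-4, w=4.0 / 3.15e7, lam=1.2, c_ml=4e8):
    """Upwelling-diffusion energy balance model (sketch).
    forcing : radiative forcing per time step (W/m^2)
    lam     : climate feedback parameter (W/m^2/K)
    c_ml    : mixed-layer heat capacity (J/m^2/K)
    kappa,w : ocean diffusivity (m^2/s) and upwelling velocity (m/s)."""
    T = np.zeros(nz)                 # ocean temperature anomaly (K); T[-1] fixed
    ts_hist = []
    for F in forcing:
        # surface mixed layer: forcing, feedback, and downward diffusion
        flux_down = kappa * (T[0] - T[1]) / dz
        T[0] += dt * ((F - lam * T[0]) / c_ml - flux_down / dz)
        # interior: vertical diffusion plus upwelling (upwind differencing)
        Tn = T.copy()
        for k in range(1, nz - 1):
            diff = kappa * (T[k + 1] - 2 * T[k] + T[k - 1]) / dz ** 2
            adv = -w * (T[k] - T[k + 1]) / dz
            Tn[k] = T[k] + dt * (diff + adv)
        T = Tn
        ts_hist.append(T[0])
    return np.array(ts_hist)
```

Under constant forcing the surface warms toward, but stays below, the equilibrium F/lam while the column takes up heat, which is exactly the vertical heat-uptake behavior whose misrepresentation in the plain UD-EBM is discussed above.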

  7. Randomization to a low-carbohydrate diet advice improves health related quality of life compared with a low-fat diet at similar weight-loss in Type 2 diabetes mellitus. (United States)

    Guldbrand, H; Lindström, T; Dizdar, B; Bunjaku, B; Östgren, C J; Nystrom, F H; Bachrach-Lindström, M


    To compare the effects on health-related quality of life (HRQoL) of a 2-year intervention with a low-fat diet (LFD) or a low-carbohydrate diet (LCD) based on four group-meetings to achieve compliance. To describe different aspects of taking part in the intervention following the LFD or LCD. Prospective, randomized trial of 61 adults with Type 2 diabetes mellitus. The SF-36 questionnaire was used at baseline, 6, 12 and 24 months. Patients on LFD aimed for 55-60 energy percent (E%) and those on LCD for 20 E% from carbohydrates. The patients were interviewed about their experiences of the intervention. Mean body-mass-index was 32.7 ± 5.4 kg/m(2) at baseline. Weight-loss did not differ between groups and was maximal at 6 months, LFD: -3.99 ± 4.1 kg, LCD: -4.31 ± 3.6 kg (pdiet groups while improvements in HRQoL only occurred after one year during treatment with LCD. No changes of HRQoL occurred in the LFD group in spite of a similar reduction in body weight. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.

  8. New similarity search based glioma grading

    Energy Technology Data Exchange (ETDEWEB)

    Haegler, Katrin; Brueckmann, Hartmut; Linn, Jennifer [Ludwig-Maximilians-University of Munich, Department of Neuroradiology, Munich (Germany); Wiesmann, Martin; Freiherr, Jessica [RWTH Aachen University, Department of Neuroradiology, Aachen (Germany); Boehm, Christian [Ludwig-Maximilians-University of Munich, Department of Computer Science, Munich (Germany); Schnell, Oliver; Tonn, Joerg-Christian [Ludwig-Maximilians-University of Munich, Department of Neurosurgery, Munich (Germany)


    MR-based differentiation between low- and high-grade gliomas is predominantly based on contrast-enhanced T1-weighted images (CE-T1w). However, functional MR sequences such as perfusion- and diffusion-weighted sequences can provide additional information on tumor grade. Here, we tested the potential of a recently developed similarity search based method that integrates information of CE-T1w and perfusion maps for non-invasive MR-based glioma grading. We prospectively included 37 untreated glioma patients (23 grade I/II, 14 grade III gliomas), in whom 3T MRI with FLAIR, pre- and post-contrast T1-weighted, and perfusion sequences was performed. Cerebral blood volume, cerebral blood flow, and mean transit time maps as well as CE-T1w images were used as input for the similarity search. Data sets were preprocessed and converted to four-dimensional Gaussian Mixture Models that considered correlations between the different MR sequences. For each patient, a so-called tumor feature vector (= probability-based classifier) was defined and used for grading. Biopsy was used as gold standard, and similarity based grading was compared to grading solely based on CE-T1w. Accuracy, sensitivity, and specificity of pure CE-T1w based glioma grading were 64.9%, 78.6%, and 56.5%, respectively. Similarity search based tumor grading allowed differentiation between low-grade (I or II) and high-grade (III) gliomas with an accuracy, sensitivity, and specificity of 83.8%, 78.6%, and 87.0%. Our findings indicate that integration of perfusion parameters and CE-T1w information in a semi-automatic similarity search based analysis improves the potential of MR-based glioma grading compared to CE-T1w data alone. (orig.)
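The similarity-search idea can be illustrated with a simplified sketch: here each tumor's 4-D per-voxel feature distribution (CE-T1w, CBV, CBF, MTT) is summarized by a single full-covariance Gaussian rather than the paper's Gaussian Mixture Models, and tumors are compared by symmetric KL divergence. The single-Gaussian simplification and all names are assumptions.

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """KL divergence between two multivariate Gaussians N(mu0,S0), N(mu1,S1)."""
    d = len(mu0)
    iS1 = np.linalg.inv(S1)
    dm = mu1 - mu0
    return 0.5 * (np.trace(iS1 @ S0) + dm @ iS1 @ dm - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def tumor_similarity(voxels_a, voxels_b):
    """Distance between two tumors, each given as an (n_voxels, 4) array
    of per-voxel MR features. The full covariance captures correlations
    between the different MR sequences, as in the abstract above."""
    ma, Sa = voxels_a.mean(0), np.cov(voxels_a.T)
    mb, Sb = voxels_b.mean(0), np.cov(voxels_b.T)
    return gaussian_kl(ma, Sa, mb, Sb) + gaussian_kl(mb, Sb, ma, Sa)
```

A query tumor would then be graded by the labels of its nearest neighbours under this distance.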

  9. New similarity search based glioma grading. (United States)

    Haegler, Katrin; Wiesmann, Martin; Böhm, Christian; Freiherr, Jessica; Schnell, Oliver; Brückmann, Hartmut; Tonn, Jörg-Christian; Linn, Jennifer


    MR-based differentiation between low- and high-grade gliomas is predominantly based on contrast-enhanced T1-weighted images (CE-T1w). However, functional MR sequences such as perfusion- and diffusion-weighted sequences can provide additional information on tumor grade. Here, we tested the potential of a recently developed similarity search based method that integrates information of CE-T1w and perfusion maps for non-invasive MR-based glioma grading. We prospectively included 37 untreated glioma patients (23 grade I/II, 14 grade III gliomas), in whom 3T MRI with FLAIR, pre- and post-contrast T1-weighted, and perfusion sequences was performed. Cerebral blood volume, cerebral blood flow, and mean transit time maps as well as CE-T1w images were used as input for the similarity search. Data sets were preprocessed and converted to four-dimensional Gaussian Mixture Models that considered correlations between the different MR sequences. For each patient, a so-called tumor feature vector (= probability-based classifier) was defined and used for grading. Biopsy was used as gold standard, and similarity based grading was compared to grading solely based on CE-T1w. Accuracy, sensitivity, and specificity of pure CE-T1w based glioma grading were 64.9%, 78.6%, and 56.5%, respectively. Similarity search based tumor grading allowed differentiation between low-grade (I or II) and high-grade (III) gliomas with an accuracy, sensitivity, and specificity of 83.8%, 78.6%, and 87.0%. Our findings indicate that integration of perfusion parameters and CE-T1w information in a semi-automatic similarity search based analysis improves the potential of MR-based glioma grading compared to CE-T1w data alone.

  10. Meaningful Use in Chronic Care: Improved Diabetes Outcomes Using a Primary Care Extension Center Model. (United States)

    Cykert, Samuel; Lefebvre, Ann; Bacon, Thomas; Newton, Warren

    The effect of practice facilitation that provides onsite quality improvement (QI) and electronic health record (EHR) coaching on chronic care outcomes is unclear. This study evaluates the effectiveness of such a program, similar to an agricultural extension center model, that provides these services. Through the Health Information Technology for Economic and Clinical Health (HITECH) portion of the American Recovery and Reinvestment Act, the North Carolina Area Health Education Centers program became the Regional Extension Center for Health Information Technology (REC) for North Carolina. The REC program provides onsite technical assistance to help small primary care practices achieve meaningful use of certified EHRs. While pursuing meaningful use functionality, practices were also offered complementary onsite advice regarding QI issues. We followed the first 50 primary care practices that utilized both EHR and QI advice targeting diabetes care. The achievement of meaningful use of certified EHRs and performance of QI with onsite practice facilitation showed an absolute improvement of 19% in the proportion of patients who achieved excellent diabetes control, while the proportion with hemoglobin A1c above 9% fell steeply in these practices. No control group was available for comparison. Practice facilitation that provided EHR and QI coaching support showed important improvements in diabetes outcomes in practices that achieved meaningful use of their EHR systems. This approach holds promise as a way to help small primary care practices achieve excellent patient outcomes. ©2016 by the North Carolina Institute of Medicine and The Duke Endowment. All rights reserved.

  11. Classification of Flying Insects with high performance using improved DTW algorithm based on hidden Markov model

    Directory of Open Access Journals (Sweden)

    S. Arif Abdul Rahuman

    Full Text Available ABSTRACT Insects play a significant role in human life: they pollinate major food crops consumed in the world, while insect pests consume and destroy major crops. Hence, to control disease and pests, research is ongoing in the area of entomology using chemical, biological and mechanical approaches. The data relevant to flying insects often change over time, and classification of such data is a central issue; such time-series mining tasks, along with classification, are critical nowadays. Most time-series data mining algorithms use similarity search, so the time taken for similarity search is the bottleneck; it can fail to produce accurate results and yields very poor performance. In this paper, a novel classification method based on the dynamic time warping (DTW) algorithm is proposed. The dynamic time warping algorithm is deterministic and lacks the ability to model stochastic signals. The DTW algorithm is improved by implementing nonlinear median filtering (NMF). Recognition accuracy of conventional DTW algorithms is less than that of the hidden Markov model (HMM) using the same voice activity detection (VAD) and noise reduction with running spectrum filtering (RSF) and dynamic range adjustment (DRA). NMF seeks the median distance over every reference of time-series data, and the recognition accuracy is much improved. In this research work, optical sensors are used to record the sound of insect flight, with invariance to interference from ambient sounds. The implementation of our tool includes two parts: an optical sensor to record the "sound" of insect flight, and software that leverages the sensor information to automatically detect and identify flying insects.
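A minimal sketch of DTW-based classification follows. The paper's NMF refinement is approximated here by taking the median DTW distance per reference class; this approximation, and all names, are assumptions rather than the authors' implementation.

```python
import numpy as np

def dtw_distance(a, b):
    """Standard dynamic time warping distance between two 1-D sequences
    (e.g. wing-beat envelopes recorded by the optical sensor)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, references, labels):
    """Assign the label whose references have the smallest *median*
    DTW distance to the query (a rough stand-in for NMF)."""
    labels = np.asarray(labels)
    dists = np.array([dtw_distance(query, r) for r in references])
    medians = {lab: np.median(dists[labels == lab]) for lab in set(labels)}
    return min(medians, key=medians.get)
```

Using the median rather than the single nearest distance makes the decision robust to one noisy reference recording.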

  12. Improving Evolutionary Models for Mitochondrial Protein Data with Site-Class Specific Amino Acid Exchangeability Matrices (United States)

    Dunn, Katherine A.; Jiang, Wenyi; Field, Christopher; Bielawski, Joseph P.


    Adequate modeling of mitochondrial sequence evolution is an essential component of mitochondrial phylogenomics (comparative mitogenomics). There is wide recognition within the field that lineage-specific aspects of mitochondrial evolution should be accommodated through lineage-specific amino-acid exchangeability matrices (e.g., mtMam for mammalian data). However, such a matrix must be applied to all sites and this implies that all sites are subject to the same, or largely similar, evolutionary constraints. This assumption is unjustified. Indeed, substantial differences are expected to arise from three-dimensional structures that impose different physiochemical environments on individual amino acid residues. The objectives of this paper are (1) to investigate the extent to which amino acid evolution varies among sites of mitochondrial proteins, and (2) to assess the potential benefits of explicitly modeling such variability. To achieve this, we developed a novel method for partitioning sites based on amino acid physiochemical properties. We apply this method to two datasets derived from complete mitochondrial genomes of mammals and fish, and use maximum likelihood to estimate amino acid exchangeabilities for the different groups of sites. Using this approach we identified large groups of sites evolving under unique physiochemical constraints. Estimates of amino acid exchangeabilities differed significantly among such groups. Moreover, we found that joint estimates of amino acid exchangeabilities do not adequately represent the natural variability in evolutionary processes among sites of mitochondrial proteins. Significant improvements in likelihood are obtained when the new matrices are employed. We also find that maximum likelihood estimates of branch lengths can be strongly impacted. We provide sets of matrices suitable for groups of sites subject to similar physiochemical constraints, and discuss how they might be used to analyze real data. We also discuss how

  13. Improving evolutionary models for mitochondrial protein data with site-class specific amino acid exchangeability matrices.

    Directory of Open Access Journals (Sweden)

    Katherine A Dunn

    Full Text Available Adequate modeling of mitochondrial sequence evolution is an essential component of mitochondrial phylogenomics (comparative mitogenomics). There is wide recognition within the field that lineage-specific aspects of mitochondrial evolution should be accommodated through lineage-specific amino-acid exchangeability matrices (e.g., mtMam for mammalian data). However, such a matrix must be applied to all sites and this implies that all sites are subject to the same, or largely similar, evolutionary constraints. This assumption is unjustified. Indeed, substantial differences are expected to arise from three-dimensional structures that impose different physiochemical environments on individual amino acid residues. The objectives of this paper are (1) to investigate the extent to which amino acid evolution varies among sites of mitochondrial proteins, and (2) to assess the potential benefits of explicitly modeling such variability. To achieve this, we developed a novel method for partitioning sites based on amino acid physiochemical properties. We apply this method to two datasets derived from complete mitochondrial genomes of mammals and fish, and use maximum likelihood to estimate amino acid exchangeabilities for the different groups of sites. Using this approach we identified large groups of sites evolving under unique physiochemical constraints. Estimates of amino acid exchangeabilities differed significantly among such groups. Moreover, we found that joint estimates of amino acid exchangeabilities do not adequately represent the natural variability in evolutionary processes among sites of mitochondrial proteins. Significant improvements in likelihood are obtained when the new matrices are employed. We also find that maximum likelihood estimates of branch lengths can be strongly impacted. We provide sets of matrices suitable for groups of sites subject to similar physiochemical constraints, and discuss how they might be used to analyze real data. We


    Directory of Open Access Journals (Sweden)

    Ivan Mihajlović


    Full Text Available This paper presents the modeling procedure of one real technological system. In this study, the copper extraction from the copper flotation waste generated at the Bor Copper Mine (Serbia) was the object of modeling. A sufficient database for statistical modeling was constructed using an orthogonal factorial design of experiments. The mathematical model of the investigated system was developed using a combination of linear and multiple linear statistical analysis approaches. The purpose of such a model is obtaining optimal states of the system that enable efficient operations management. Besides technological and economical, ecological parameters of the process were considered as crucial input variables.
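The statistical-modeling step can be illustrated with a sketch: fitting a main-effects linear model to coded data from a two-level orthogonal factorial design. This is illustrative only (the study combined linear and multiple linear analyses on real process data), and the coding and helper name are assumptions.

```python
import numpy as np

def two_level_factorial_model(X, y):
    """Fit a main-effects linear model y = b0 + sum_i b_i * x_i to data
    from an orthogonal two-level factorial design (factors coded -1/+1).
    With an orthogonal design the least-squares estimates decouple."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    Xd = np.column_stack([np.ones(len(X)), X])   # design matrix with intercept
    coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return coef
```

The fitted coefficients can then be used to search for the optimal operating states mentioned in the abstract.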

  15. Diffusion-like recommendation with enhanced similarity of objects (United States)

    An, Ya-Hui; Dong, Qiang; Sun, Chong-Jing; Nie, Da-Cheng; Fu, Yan


    In the last decade, diversity and accuracy have been regarded as two important measures in evaluating a recommendation model. However, a clear concern is that a model focusing excessively on one measure will put the other one at risk, thus it is not easy to greatly improve diversity and accuracy simultaneously. In this paper, we propose to enhance the Resource-Allocation (RA) similarity in resource transfer equations of diffusion-like models, by giving a tunable exponent to the RA similarity, and traversing the value of this exponent to achieve the optimal recommendation results. In this way, we can increase the recommendation scores (allocated resource) of many unpopular objects. Experiments on three benchmark data sets, MovieLens, Netflix and RateYourMusic show that the modified models can yield remarkable performance improvement compared with the original ones.
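The proposed modification can be sketched roughly as follows: compute the resource-allocation similarity on a user-object bipartite matrix, raise it to a tunable exponent theta, and renormalize before scoring. This is a simplified reading of the method; the paper's exact resource-transfer equations may differ, and theta = 1 is taken to recover ordinary mass diffusion.

```python
import numpy as np

def recommend(A, theta=1.0):
    """Diffusion-like recommendation scores on a 0/1 user-object matrix A
    (rows = users). The RA similarity s[a, b] = sum_u A[u,a] A[u,b] / k_u
    is raised to a tunable exponent before column normalisation."""
    A = np.asarray(A, float)
    ku = A.sum(1, keepdims=True)                 # user degrees
    S = (A / np.maximum(ku, 1)).T @ A            # RA similarity (objects x objects)
    W = S ** theta                               # the tunable-exponent modification
    W = W / np.maximum(W.sum(0, keepdims=True), 1e-12)
    scores = A @ W.T                             # resource received by each object
    scores[A > 0] = -np.inf                      # mask already-collected objects
    return scores
```

Sweeping theta and picking the value with the best accuracy/diversity trade-off corresponds to "traversing the value of this exponent" in the abstract; theta < 1 boosts unpopular objects' scores.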

  16. Topological Model on the Inductive Effect in Alkyl Halides Using Local Quantum Similarity and Reactivity Descriptors in the Density Functional Theory

    Directory of Open Access Journals (Sweden)

    Alejandro Morales-Bayuelo


    Full Text Available We present a topological analysis of the inductive effect through steric and electrostatic scales of quantitative convergence. Using the molecular similarity field based on local quantum similarity (LQS) with the Topo-Geometrical Superposition Algorithm (TGSA) alignment method and chemical reactivity in the density functional theory (DFT) context, all calculations were carried out with the Amsterdam Density Functional (ADF) code, using the generalized gradient approximation (GGA) and PW91 local exchange correlations, in order to characterize the electronic effect by atomic size in the halogen group using a standard Slater-type-orbital basis set. In addition, in this study we introduce new molecular bonding relationships for the inductive effect and the nature of the polar character of the C–H bond, taking into account global and local reactivity descriptors such as chemical potential, hardness, electrophilicity, and Fukui functions, respectively. These descriptors are used to find new alternative considerations on the inductive effect, unlike the binding energy and dipole moment used in traditional organic chemistry.

  17. Improvement in genetic evaluation of female fertility in dairy cattle using multiple-trait models including milk production traits

    DEFF Research Database (Denmark)

    Sun, C; Madsen, P; Lund, M S


    This study investigated the improvement in genetic evaluation of fertility traits by using production traits as secondary traits (MILK = 305-d milk yield, FAT = 305-d fat yield, and PROT = 305-d protein yield). Data including 471,742 records from first lactations of Danish Holstein cows, covering...... (DATAC1, which only contained the first-crop daughters) for proven bulls. In addition, the superiority of the models was evaluated by the expected reliability of EBV, calculated from the prediction error variance of EBV. Based on these criteria, the models combining milk production traits showed better model...... stability and predictive ability than single-trait models for all the fertility traits, except for nonreturn rate within 56 d after first service. The stability and predictive ability of the model including MILK or PROT were similar to those of the model including all 3 milk production traits and better than...

  18. A Modeling Framework for Improved Agricultural Water Supply Forecasting (United States)

    Leavesley, G. H.; David, O.; Garen, D. C.; Lea, J.; Marron, J. K.; Pagano, T. C.; Perkins, T. R.; Strobel, M. L.


    The National Water and Climate Center (NWCC) of the USDA Natural Resources Conservation Service is moving to augment seasonal, regression-equation based water supply forecasts with distributed-parameter, physical process models enabling daily, weekly, and seasonal forecasting using an Ensemble Streamflow Prediction (ESP) methodology. This effort involves the development and implementation of a modeling framework, and associated models and tools, to provide timely forecasts for use by the agricultural community in the western United States where snowmelt is a major source of water supply. The framework selected to support this integration is the USDA Object Modeling System (OMS). OMS is a Java-based modular modeling framework for model development, testing, and deployment. It consists of a library of stand-alone science, control, and database components (modules), and a means to assemble selected components into a modeling package that is customized to the problem, data constraints, and scale of application. The framework is supported by utility modules that provide a variety of data management, land unit delineation and parameterization, sensitivity analysis, calibration, statistical analysis, and visualization capabilities. OMS uses an open source software approach to enable all members of the scientific community to collaboratively work on addressing the many complex issues associated with the design, development, and application of distributed hydrological and environmental models. A long-term goal in the development of these water-supply forecasting capabilities is the implementation of an ensemble modeling approach. This would provide forecasts using the results of multiple hydrologic models run on each basin.

  19. Global soil carbon projections are improved by modelling microbial processes (United States)

    Wieder, William R.; Bonan, Gordon B.; Allison, Steven D.


    Society relies on Earth system models (ESMs) to project future climate and carbon (C) cycle feedbacks. However, the soil C response to climate change is highly uncertain in these models and they omit key biogeochemical mechanisms. Specifically, the traditional approach in ESMs lacks direct microbial control over soil C dynamics. Thus, we tested a new model that explicitly represents microbial mechanisms of soil C cycling on the global scale. Compared with traditional models, the microbial model simulates soil C pools that more closely match contemporary observations. It also projects a much wider range of soil C responses to climate change over the twenty-first century. Global soils accumulate C if microbial growth efficiency declines with warming in the microbial model. If growth efficiency adapts to warming, the microbial model projects large soil C losses. By comparison, traditional models project modest soil C losses with global warming. Microbes also change the soil response to increased C inputs, as might occur with CO2 or nutrient fertilization. In the microbial model, microbes consume these additional inputs; whereas in traditional models, additional inputs lead to C storage. Our results indicate that ESMs should simulate microbial physiology to more accurately project climate change feedbacks.
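The key structural difference the abstract describes — decomposition driven by microbial biomass rather than by pool size alone — can be illustrated with a minimal two-pool model. This is a generic sketch in the spirit of published microbial soil C models, not the model used in the study; all parameter names and values are illustrative.

```python
def microbial_step(soc, mic, inputs, mge, vmax, km, kd, dt=1.0):
    """One explicit-Euler step of a minimal two-pool microbial soil C model.

    soc: soil organic C pool; mic: microbial biomass C pool.
    Decomposition follows Michaelis-Menten kinetics scaled by microbial
    biomass, so C loss depends on the microbes, not only on pool size.
    mge is microbial growth efficiency; kd a microbial turnover rate.
    All parameters here are hypothetical, chosen only for illustration.
    """
    decomp = vmax * mic * soc / (km + soc)   # microbially mediated decay
    d_soc = inputs + kd * mic - decomp       # dead microbes return to SOC
    d_mic = mge * decomp - kd * mic          # growth minus turnover
    return soc + dt * d_soc, mic + dt * d_mic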

  20. Improving Stochastic Modelling of Daily Rainfall Using the ENSO Index: Model Development and Application in Chile

    Directory of Open Access Journals (Sweden)

    Diego Urdiales


    Full Text Available Stochastic weather simulators, or weather generators (WGs), have gained wide acceptance and been used for a variety of purposes, including climate change studies and the evaluation of the effects of climate variability and uncertainty. The two major challenges in WGs are improving the estimation of interannual variability and reducing overdispersion in the synthetic series of simulated weather. The objective of this work is to develop a WG model of daily rainfall, incorporating a covariable that accounts for interannual variability, and apply it in three climate regions (arid, Mediterranean, and temperate) of Chile. Precipitation occurrence was modeled using a two-state, first-order Markov chain, whose parameters are fitted with a generalized linear model (GLM) using a logistic function. This function considers monthly values of the observed sea surface temperature anomalies of Region 3.4 of the El Niño–Southern Oscillation (ENSO) index as a covariable. Precipitation intensity was simulated with a mixed exponential distribution, fitted using a maximum likelihood approach. The stochastic simulation shows that applying the approach to Mediterranean and arid climates largely eliminates the overdispersion problem, resulting in much improved interannual variability in the simulated values.
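The generator structure described above — a logistic occurrence model conditioned on yesterday's state and an ENSO covariate, plus mixed-exponential amounts — can be sketched as follows. The coefficient values, mixture weights, and means are invented for illustration; the fitted values would come from the GLM and maximum-likelihood steps in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def occurrence_prob(prev_wet, enso, beta):
    """P(wet today | yesterday's state, ENSO anomaly) via a logistic link.

    beta = (intercept, persistence effect of a wet yesterday, ENSO
    coefficient); these are placeholders, not fitted values.
    """
    eta = beta[0] + beta[1] * prev_wet + beta[2] * enso
    return 1.0 / (1.0 + np.exp(-eta))

def mixed_exponential(alpha, mu1, mu2):
    """Rain amount (mm) from a two-component mixed exponential."""
    mu = mu1 if rng.random() < alpha else mu2
    return rng.exponential(mu)

def simulate(n_days, enso, beta=(-1.5, 1.2, -0.8), alpha=0.7, mu1=3.0, mu2=15.0):
    """Daily rainfall series conditioned on a monthly ENSO index series."""
    wet, series = 0, []
    for t in range(n_days):
        wet = int(rng.random() < occurrence_prob(wet, enso[t], beta))
        series.append(mixed_exponential(alpha, mu1, mu2) if wet else 0.0)
    return series
```

Because the ENSO anomaly enters the occurrence probability, wet-day frequency varies from year to year with the index, which is the mechanism the paper uses to reduce overdispersion.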

  1. Microstructure Characterization and Modeling for Improved Electrode Design

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Kandler A [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Usseglio Viretta, Francois L [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Graf, Peter A [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Santhanagopalan, Shriram [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Pesaran, Ahmad A [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Yao, Koffi (Pierre) [Argonne National Laboratory]; Dees, Dennis [Argonne National Laboratory]; Jansen, Andy [Argonne National Laboratory]; Mukherjee, Partha [Texas A&M University]; Mistry, Aashutosh [Texas A&M University]; Verma, Ankit [Texas A&M University]


    This presentation describes research work led by NREL with team members from Argonne National Laboratory and Texas A&M University in microstructure analysis, modeling and validation under DOE's Computer-Aided Engineering of Batteries (CAEBAT) program. The goal of the project is to close the gaps between CAEBAT models and materials research by creating predictive models that can be used for electrode design.

  2. Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS (United States)


    wind- and thermohaline-forced isopycnic coordinate model of the North Atlantic. J. Phys. Oceanogr. 22, 1486–1505. Bleck, R., 2002. An oceanic general circulation model framed in hybrid isopycnic-Cartesian coordinates. Ocean Modell. 4, 55–88. Buijsman, M.C., Kanarska, Y., McWilliams, J.C., 2010... continental margin. Cont. Shelf Res. 24 (6), 693–720. Nakayama, K. and Imberger, J., 2010. Residual circulation due to internal waves shoaling on a slope

  3. An improved lake model for climate simulations: Model structure, evaluation, and sensitivity analyses in CESM1

    Directory of Open Access Journals (Sweden)

    Zachary Subin


    Full Text Available Lakes can influence regional climate, yet most general circulation models have, at best, simple and largely untested representations of lakes. We developed the Lake, Ice, Snow, and Sediment Simulator (LISSS) for inclusion in the land-surface component (CLM4) of an earth system model (CESM1). The existing CLM4 lake model performed poorly at all sites tested; for temperate lakes, summer surface water temperature predictions were 10–25°C lower than observations. CLM4-LISSS modifies the existing model by including (1) a treatment of snow; (2) freezing, melting, and ice physics; (3) a sediment thermal submodel; (4) spatially variable prescribed lake depth; (5) improved parameterizations of lake surface properties; (6) increased mixing under ice and in deep lakes; and (7) correction of previous errors. We evaluated the lake model predictions of water temperature and surface fluxes at three small temperate and boreal lakes where extensive observational data were available. We also evaluated the predicted water temperature and/or ice and snow thicknesses for ten other lakes where less comprehensive forcing observations were available. CLM4-LISSS performed very well compared to observations for shallow- to medium-depth small lakes. For large, deep lakes, the under-prediction of mixing was improved by increasing the lake eddy diffusivity by a factor of 10, consistent with previously published analyses. Surface temperature and surface flux predictions were improved when the aerodynamic roughness lengths were calculated as a function of friction velocity, rather than using a constant value of 1 mm or greater. We evaluated the sensitivity of surface energy fluxes to modeled lake processes and parameters. Large changes in monthly averaged surface fluxes (up to 30 W m⁻²) were found when excluding snow insulation or phase-change physics and when varying the opacity, depth, albedo of melting lake ice, and mixing strength across ranges commonly found in real lakes. Typical

  4. Improving wave forecasting by integrating ensemble modelling and machine learning (United States)

    O'Donncha, F.; Zhang, Y.; James, S. C.


    Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions thereby impeding integration with the distribution grid. In this study, we integrate physics-based models with statistical learning aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating Waves Nearshore physics-based model is used to compute wind- and currents-augmented waves in the Monterey Bay area. Ensembles are developed based on multiple simulations perturbing input data (wave characteristics supplied at the model boundaries and winds) to the model. A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms an individual ensemble member with regard to forecasting wave conditions.
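The learning-aggregation step described above — using past forecasts and observations to weight ensemble members — has many variants; one simple instance is inverse-mean-squared-error weighting, sketched below. This is an illustrative scheme, not necessarily the exact aggregation rule used in the study.

```python
import numpy as np

def aggregate(forecasts, past_forecasts, past_obs):
    """Combine ensemble members into one forecast via inverse-MSE weights.

    forecasts: (n_members,) current predictions, one per ensemble member.
    past_forecasts: (n_members, n_times) historical predictions.
    past_obs: (n_times,) matching observations.
    Members with lower historical error receive higher weight.
    """
    mse = ((past_forecasts - past_obs) ** 2).mean(axis=1)
    w = 1.0 / (mse + 1e-12)   # epsilon guards a historically perfect member
    w /= w.sum()              # normalize weights to sum to 1
    return float(w @ forecasts)
```

Recomputing the weights as new observations arrive lets the aggregate track which perturbation of boundary waves and winds is currently performing best, which is how the weighted ensemble can outperform any single member.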

  5. An improved steam generator model for the SASSYS code

    International Nuclear Information System (INIS)

    Pizzica, P.A.


    A new steam generator model has been developed for the SASSYS computer code, which analyzes accident conditions in a liquid-metal-cooled fast reactor. It has been incorporated into the new SASSYS balance-of-plant model, but it can also function as a stand-alone model. The model provides a full solution of the steady-state condition before the transient calculation begins for given sodium and water flow rates, inlet and outlet sodium temperatures, and inlet enthalpy and region lengths on the water side

  6. Local indices for similarity analysis (LISA)-a 3D-QSAR formalism based on local molecular similarity. (United States)

    Verma, Jitender; Malde, Alpeshkumar; Khedkar, Santosh; Iyer, Radhakrishnan; Coutinho, Evans


    A simple quantitative structure-activity relationship (QSAR) approach termed local indices for similarity analysis (LISA) has been developed. In this technique, the global molecular similarity is broken up into local similarity at each grid point surrounding the molecules, which is used as a QSAR descriptor. In this way, a view of the molecular sites permitting favorable and rational changes to enhance activity is obtained. The local similarity index, calculated on the basis of Petke's formula, segregates the regions into "equivalent", "favored similar", and "disfavored similar" (alternatively "favored dissimilar") potentials with respect to a reference molecule in the data set. The method has been tested on three large and diverse data sets: thrombin, glycogen phosphorylase b, and thermolysin inhibitors. The QSAR models derived using genetic-algorithm-incorporated partial least squares analysis are found to be comparable to those obtained by standard three-dimensional (3D) QSAR methods, such as comparative molecular field analysis and comparative molecular similarity indices analysis. The graphical interpretation of the LISA models is straightforward, and the outcome of the models corroborates well with literature data. The LISA models give insight into the binding mechanisms of the ligand with the enzyme and allow fine-tuning of the molecules at the local level to improve their activity.
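A per-grid-point similarity of the kind LISA uses can be sketched as below. We take Petke's index to be H = (a·b) / max(a², b²), which yields 1 where two field values match exactly, values in (0, 1) where they agree in sign but differ in magnitude ("favored similar"), and negative values where they disagree in sign; this reading of the formula, and the toy field arrays, are our assumptions rather than details from the paper.

```python
import numpy as np

def petke_local_similarity(field_a, field_b, eps=1e-12):
    """Per-grid-point Petke-style similarity between two molecular fields.

    field_a, field_b: arrays of field values (e.g. electrostatic potential)
    sampled on the same grid. Returns values in [-1, 1]; eps avoids a
    zero denominator where both fields vanish.
    """
    denom = np.maximum(field_a ** 2, field_b ** 2) + eps
    return field_a * field_b / denom
```

The resulting per-point indices are exactly the kind of local descriptors that can then be fed into a PLS regression against activity, as the abstract describes.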

  7. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    Directory of Open Access Journals (Sweden)

    A. Endalamaw


    Full Text Available Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than in the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub

  8. Delamination Modeling of Composites for Improved Crash Analysis (United States)

    Fleming, David C.


    Finite element crash modeling of composite structures is limited by the inability of current commercial crash codes to accurately model delamination growth. Efforts are made to implement and assess delamination modeling techniques using a current finite element crash code, MSC/DYTRAN. Three methods are evaluated: a straightforward method based on monitoring forces in elements or constraints representing an interface; a cohesive fracture model proposed in the literature; and the virtual crack closure technique commonly used in fracture mechanics. Results are compared with dynamic double cantilever beam test data from the literature.