WorldWideScience

Sample records for parallel evolution strategy

  1. Contemporary evolution strategies

    CERN Document Server

    Bäck, Thomas; Krause, Peter

    2013-01-01

    Evolution strategies have more than 50 years of history in the field of evolutionary computation. Since the early 1990s, many algorithmic variations of evolution strategies have been developed, characterized by the fact that they use the so-called derandomization concept for strategy parameter adaptation. Most importantly, the covariance matrix adaptation strategy (CMA-ES) and its successors are the key representatives of this group of contemporary evolution strategies. This book provides an overview of the key algorithm developments between 1990 and 2012, including brief descriptions of the a

  2. New Parallel Algorithms for Landscape Evolution Model

    Science.gov (United States)

    Jin, Y.; Zhang, H.; Shi, Y.

    2017-12-01

    Most landscape evolution models (LEMs) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize because of the computation of the drainage area for each node, which requires a large amount of communication when run in parallel. To overcome this difficulty, we developed two parallel algorithms for an LEM with a stream net. One algorithm partitions the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on reducing communication between processes and take advantage of massively parallel computing, and numerical experiments show that both are adequate for large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
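
    The global-reduction idea described above can be illustrated with a short sketch. The snippet below is not the paper's deal.II/C++ implementation; it assumes mpi4py and NumPy are available, and the node count and per-cell contributions are invented placeholders.

```python
# Minimal sketch of the global-reduction idea, NOT the paper's deal.II code.
# Assumes mpi4py/NumPy and that all ranks index stream-net nodes by the same
# global ids (a simplification of the real partitioning scheme).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n_stream_nodes = 1000                    # hypothetical global stream-net size
local_area = np.zeros(n_stream_nodes)    # per-rank drainage contributions

# Each rank accumulates the drained area of the cells it owns into the
# stream-net nodes they flow to (flow-routing details omitted here).
owned_nodes = np.arange(rank, n_stream_nodes, comm.Get_size())
local_area[owned_nodes] = 1.0            # placeholder contribution per cell

# One collective reduction gives every rank the complete drainage areas,
# replacing the pairwise communication that makes naive parallel LEMs slow.
global_area = np.empty_like(local_area)
comm.Allreduce(local_area, global_area, op=MPI.SUM)

if rank == 0:
    print(global_area[:5])
```

    A script like this would be launched with, for example, `mpiexec -n 4 python drainage_reduce.py`.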

  3. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  4. Design strategies for irregularly adapting parallel applications

    International Nuclear Information System (INIS)

    Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Singh, Jaswinder Pal

    2000-01-01

    Achieving scalable performance for dynamic irregular applications is eminently challenging. Traditional message-passing approaches have been making steady progress towards this goal; however, they suffer from complex implementation requirements. The use of a global address space greatly simplifies the programming task, but can degrade the performance of dynamically adapting computations. In this work, we examine two major classes of adaptive applications, under five competing programming methodologies and four leading parallel architectures. Results indicate that it is possible to achieve message-passing performance using shared-memory programming techniques by carefully following the same high level strategies. Adaptive applications have computational work loads and communication patterns which change unpredictably at runtime, requiring dynamic load balancing to achieve scalable performance on parallel machines. Efficient parallel implementations of such adaptive applications are therefore a challenging task. This work examines the implementation of two typical adaptive applications, Dynamic Remeshing and N-Body, across various programming paradigms and architectural platforms. We compare several critical factors of the parallel code development, including performance, programmability, scalability, algorithmic development, and portability

  5. Parallel strategy for optimal learning in perceptrons

    International Nuclear Information System (INIS)

    Neirotti, J P

    2010-01-01

    We developed a parallel strategy for the optimal learning of specific realizable rules by perceptrons in an online learning scenario. Our result is a generalization of the Caticha-Kinouchi (CK) algorithm, developed for learning a perceptron with a synaptic vector drawn from a uniform distribution over the N-dimensional sphere, the so-called typical case. Our method outperforms the CK algorithm in almost all possible situations, failing only in a denumerable set of cases. The algorithm is optimal in the sense that it saturates Bayesian bounds when it succeeds.

  6. Evolution of a minimal parallel programming model

    International Nuclear Information System (INIS)

    Lusk, Ewing; Butler, Ralph; Pieper, Steven C.

    2017-01-01

    Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
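
    The self-scheduled task model described above can be caricatured in a few lines. The toy below is not ADLB (an MPI library); it only illustrates the work-pool idea with Python's multiprocessing, and the task function and its cost are invented.

```python
# Toy illustration of the self-scheduled task-pool model (not ADLB itself):
# workers repeatedly pull the next available task, so load balancing happens
# dynamically rather than being decided in advance.
from multiprocessing import Pool

def task(seed):
    # Stand-in for an irregular unit of work (e.g., one Monte Carlo walker).
    total, x = 0, seed
    for _ in range(10000):
        x = (1103515245 * x + 12345) % (2 ** 31)
        total += x % 7
    return total

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # chunksize=1 means each worker asks for a new task as soon as it is
        # free -- the essence of self-scheduling for uneven task costs.
        results = pool.map(task, range(64), chunksize=1)
    print(sum(results))
```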

  7. Kinetic-Monte-Carlo-Based Parallel Evolution Simulation Algorithm of Dust Particles

    Directory of Open Access Journals (Sweden)

    Xiaomei Hu

    2014-01-01

    Full Text Available The evolution simulation of dust particles provides an important way to analyze the impact of dust on the environment. A kinetic Monte Carlo (KMC) based parallel algorithm is proposed to simulate the evolution of dust particles. In this parallel evolution simulation algorithm, a data distribution scheme and a communication optimization strategy are introduced to balance the load of every process and reduce the communication overhead among processes. The experimental results show that the simulation of diffusion, sedimentation, and resuspension of dust particles in a virtual campus is realized and that the simulation time is shortened by the parallel algorithm, which overcomes the limitations of serial computing and makes the simulation of large-scale virtual environments possible.
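
    For orientation, a single serial KMC (Gillespie-type) step looks like the sketch below. The event names and rates are invented, and the paper's actual contribution, the data distribution and communication optimization across processes, is not shown.

```python
# Minimal serial kinetic Monte Carlo step; illustrative only, with invented
# event names and rates. The parallel data-distribution and communication
# strategies from the paper are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
rates = {"diffuse": 5.0, "settle": 1.0, "resuspend": 0.2}   # hypothetical rates

def kmc_step(t):
    names = list(rates)
    r = np.array([rates[n] for n in names])
    total = r.sum()
    dt = rng.exponential(1.0 / total)        # time advances exponentially
    event = rng.choice(names, p=r / total)   # event chosen proportionally to rate
    return t + dt, event

t = 0.0
for _ in range(5):
    t, event = kmc_step(t)
    print(f"t = {t:.3f}, event = {event}")
```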

  8. Machine learning for evolution strategies

    CERN Document Server

    Kramer, Oliver

    2016-01-01

    This book introduces numerous algorithmic hybridizations between both worlds that show how machine learning can improve and support evolution strategies. The set of methods comprises covariance matrix estimation, meta-modeling of fitness and constraint functions, dimensionality reduction for search and visualization of high-dimensional optimization processes, and clustering-based niching. After giving an introduction to evolution strategies and machine learning, the book builds the bridge between both worlds with an algorithmic and experimental perspective. Experiments mostly employ a (1+1)-ES and are implemented in Python using the machine learning library scikit-learn. The examples are conducted on typical benchmark problems illustrating algorithmic concepts and their experimental behavior. The book closes with a discussion of related lines of research.
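
    Since the book's experiments mostly employ a (1+1)-ES in Python, a generic (1+1)-ES with 1/5th-success-rule step-size control on the sphere function is sketched below. This is in the spirit of those examples, not the book's actual code, and the constants are conventional defaults.

```python
# Generic (1+1)-ES with the 1/5th success rule on the sphere function;
# a sketch only, not the book's code. Constants are conventional choices.
import numpy as np

def sphere(x):
    return float(np.dot(x, x))

rng = np.random.default_rng(42)
x = rng.normal(size=10)          # parent
sigma = 1.0                      # global step size
successes = 0

for generation in range(1, 1001):
    y = x + sigma * rng.normal(size=x.size)   # one mutated offspring
    if sphere(y) <= sphere(x):                # plus-selection: keep the better
        x, successes = y, successes + 1
    if generation % 20 == 0:                  # 1/5th success rule
        rate = successes / 20
        sigma *= 1.22 if rate > 0.2 else 0.82
        successes = 0

print(sphere(x))
```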

  9. Efficient Parallel Strategy Improvement for Parity Games

    OpenAIRE

    Fearnley, John

    2017-01-01

    We study strategy improvement algorithms for solving parity games. While these algorithms are known to solve parity games using a very small number of iterations, experimental studies have found that a high step complexity causes them to perform poorly in practice. In this paper we seek to address this situation. Every iteration of the algorithm must compute a best response, and while the standard way of doing this uses the Bellman-Ford algorithm, we give experimental results that show that o...
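
    The abstract cites Bellman-Ford as the standard best-response subroutine; the textbook form of that algorithm is shown below purely for reference. The paper's optimized and parallel alternative is not reproduced here, and the example graph is invented.

```python
# Textbook Bellman-Ford single-source shortest paths, shown only because it is
# the standard best-response subroutine mentioned above; the paper's optimized
# parallel variant is not reproduced here.
def bellman_ford(n, edges, source):
    """n vertices 0..n-1, edges as (u, v, weight) triples."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):                 # at most n-1 rounds of relaxation
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:                    # early exit once distances are stable
            break
    return dist

print(bellman_ford(4, [(0, 1, 2), (1, 2, -1), (0, 2, 4), (2, 3, 1)], 0))
```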

  10. Parallel Evolution of Sperm Hyper-Activation Ca2+ Channels.

    Science.gov (United States)

    Cooper, Jacob C; Phadnis, Nitin

    2017-07-01

    Sperm hyper-activation is a dramatic change in sperm behavior where mature sperm burst into a final sprint in the race to the egg. The mechanism of sperm hyper-activation in many metazoans, including humans, consists of a jolt of Ca2+ into the sperm flagellum via CatSper ion channels. Surprisingly, all nine CatSper genes have been independently lost in several animal lineages. In Drosophila, sperm hyper-activation is performed through the cooption of the polycystic kidney disease 2 (pkd2) Ca2+ channel. The parallels between CatSpers in primates and pkd2 in Drosophila provide a unique opportunity to examine the molecular evolution of the sperm hyper-activation machinery in two independent, nonhomologous calcium channels separated by > 500 million years of divergence. Here, we use a comprehensive phylogenomic approach to investigate the selective pressures on these sperm hyper-activation channels. First, we find that the entire CatSper complex evolves rapidly under recurrent positive selection in primates. Second, we find that pkd2 has parallel patterns of adaptive evolution in Drosophila. Third, we show that this adaptive evolution of pkd2 is driven by its role in sperm hyper-activation. These patterns of selection suggest that the evolution of the sperm hyper-activation machinery is driven by sexual conflict with antagonistic ligands that modulate channel activity. Together, our results add sperm hyper-activation channels to the class of fast evolving reproductive proteins and provide insights into the mechanisms used by the sexes to manipulate sperm behavior. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  11. Effective Strategies for Teaching Evolution: The Primary Evolution Project

    Science.gov (United States)

    Hatcher, Chris

    2015-01-01

    When Chris Hatcher joined the Primary Evolution Project team at the University of Reading, his goal was to find effective strategies to teach evolution in a way that keeps children engaged and enthused. Hatcher has collaborated with colleagues at the University's Institute of Education to break the evolution unit down into distinct topics and…

  12. From evolution theory to parallel and distributed genetic

    CERN Multimedia

    CERN. Geneva

    2007-01-01

    Lecture #1: From Evolution Theory to Evolutionary Computation. Evolutionary computation is a subfield of artificial intelligence (more particularly computational intelligence) involving combinatorial optimization problems, which are based to some degree on the evolution of biological life in the natural world. In this tutorial we will review the source of inspiration for this metaheuristic and its capability for solving problems. We will show the main flavours within the field, and different problems that have been successfully solved employing this kind of techniques. Lecture #2: Parallel and Distributed Genetic Programming. The successful application of Genetic Programming (GP, one of the available Evolutionary Algorithms) to optimization problems has encouraged an increasing number of researchers to apply these techniques to a large set of problems. Given the difficulty of some problems, much effort has been applied to improving the efficiency of GP during the last few years. Among the available proposals,...

  13. Strategy intervention for the evolution of fairness.

    Directory of Open Access Journals (Sweden)

    Yanling Zhang

    Full Text Available The 'irrational' preference for fairness has attracted increasing attention. Although previous studies have focused on the effects of spitefulness on the evolution of fairness, they did not consider non-monotonic rejections shown in behavioral experiments. In this paper, we introduce a non-monotonic rejection in an evolutionary model of the Ultimatum Game. We propose strategy intervention to study the evolution of fairness in general structured populations. By sequentially adding five strategies into the competition between a fair strategy and a selfish strategy, we arrive at the following conclusions. First, the evolution of fairness is inhibited by altruism, but it is promoted by spitefulness. Second, the non-monotonic rejection helps fairness overcome selfishness. Particularly for group-structured populations, we analytically investigate how fairness, selfishness, altruism, and spitefulness are affected by population size, mutation, and migration in the competition among seven strategies. Our results may provide important insights into understanding the evolutionary origin of fairness.

  14. Rapid parallel evolution overcomes global honey bee parasite.

    Science.gov (United States)

    Oddie, Melissa; Büchler, Ralph; Dahle, Bjørn; Kovacic, Marin; Le Conte, Yves; Locke, Barbara; de Miranda, Joachim R; Mondet, Fanny; Neumann, Peter

    2018-05-16

    In eusocial insect colonies, nestmates cooperate to combat parasites, a trait called social immunity. However, social immunity failed for Western honey bees (Apis mellifera) when the ectoparasitic mite Varroa destructor switched hosts from Eastern honey bees (Apis cerana). This mite has since become the most severe threat to A. mellifera worldwide. Despite this, some isolated A. mellifera populations are known to survive infestations by means of natural selection, largely by suppressing mite reproduction, but the underlying mechanisms of this are poorly understood. Here, we show that a cost-effective social immunity mechanism has evolved rapidly and independently in four naturally V. destructor-surviving A. mellifera populations. Worker bees of all four 'surviving' populations uncapped/recapped worker brood cells more frequently and targeted mite-infested cells more effectively than workers in local susceptible colonies. Direct experiments confirmed the ability of uncapping/recapping to reduce mite reproductive success without sacrificing nestmates. Our results provide striking evidence that honey bees can overcome exotic parasites with simple qualitative and quantitative adaptive shifts in behaviour. Due to rapid, parallel evolution in four host populations, this appears to be a key mechanism explaining the survival of mite-infested colonies.

  15. Identification of Novel Betaherpesviruses in Iberian Bats Reveals Parallel Evolution.

    Directory of Open Access Journals (Sweden)

    Francisco Pozo

    Full Text Available A thorough search for bat herpesviruses was carried out in oropharyngeal samples taken from most of the bat species present in the Iberian Peninsula from the Vespertilionidae, Miniopteridae, Molossidae and Rhinolophidae families, in addition to a colony of captive fruit bats from the Pteropodidae family. By using two degenerate consensus PCR methods targeting two conserved genes, distinct and previously unrecognized bat-hosted herpesviruses were identified for most of the tested species. Altogether, a total of 42 potentially novel bat herpesviruses were partially characterized. Thirty-two of them were tentatively assigned to the Betaherpesvirinae subfamily, while the remaining 10 were allocated to the Gammaherpesvirinae subfamily. Significant diversity was observed among the novel sequences when compared with the type herpesvirus species of the ICTV-approved genera. The inferred phylogenetic relationships showed that most of the betaherpesvirus sequences fell into a well-supported unique monophyletic clade and support the recognition of a new betaherpesvirus genus. This clade is subdivided into three major clades, corresponding to the families of bats studied. This supports the hypothesis of a species-specific parallel evolution process between the potentially new betaherpesviruses and their bat hosts. Interestingly, two of the betaherpesvirus sequences detected in rhinolophid bats clustered together apart from the rest, closely related to viruses that belong to the Roseolovirus genus. This suggests a putative third roseolo lineage. On the contrary, no phylogenetic structure was detected among several potentially novel bat-hosted gammaherpesviruses found in the study. Remarkably, all of the possible novel bat herpesviruses described in this study are linked to a unique bat species.

  16. Identification of Novel Betaherpesviruses in Iberian Bats Reveals Parallel Evolution.

    Science.gov (United States)

    Pozo, Francisco; Juste, Javier; Vázquez-Morón, Sonia; Aznar-López, Carolina; Ibáñez, Carlos; Garin, Inazio; Aihartza, Joxerra; Casas, Inmaculada; Tenorio, Antonio; Echevarría, Juan Emilio

    2016-01-01

    A thorough search for bat herpesviruses was carried out in oropharyngeal samples taken from most of the bat species present in the Iberian Peninsula from the Vespertilionidae, Miniopteridae, Molossidae and Rhinolophidae families, in addition to a colony of captive fruit bats from the Pteropodidae family. By using two degenerate consensus PCR methods targeting two conserved genes, distinct and previously unrecognized bat-hosted herpesviruses were identified for most of the tested species. Altogether, a total of 42 potentially novel bat herpesviruses were partially characterized. Thirty-two of them were tentatively assigned to the Betaherpesvirinae subfamily, while the remaining 10 were allocated to the Gammaherpesvirinae subfamily. Significant diversity was observed among the novel sequences when compared with the type herpesvirus species of the ICTV-approved genera. The inferred phylogenetic relationships showed that most of the betaherpesvirus sequences fell into a well-supported unique monophyletic clade and support the recognition of a new betaherpesvirus genus. This clade is subdivided into three major clades, corresponding to the families of bats studied. This supports the hypothesis of a species-specific parallel evolution process between the potentially new betaherpesviruses and their bat hosts. Interestingly, two of the betaherpesvirus sequences detected in rhinolophid bats clustered together apart from the rest, closely related to viruses that belong to the Roseolovirus genus. This suggests a putative third roseolo lineage. On the contrary, no phylogenetic structure was detected among several potentially novel bat-hosted gammaherpesviruses found in the study. Remarkably, all of the possible novel bat herpesviruses described in this study are linked to a unique bat species.

  17. Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study

    Directory of Open Access Journals (Sweden)

    Hari Radhakrishnan

    2015-01-01

    Full Text Available This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO) and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP) facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the bottleneck in the performance was due to our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure; the Intel compiler provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
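
    The key performance fix above, replacing a sequential summation with a binary-tree reduction, is sketched below. The actual code uses Fortran 2008 coarrays; this Python toy only mimics the pairwise combining pattern, and real image-to-image communication is not modelled.

```python
# Sketch of the pairwise (binary-tree) reduction idea that replaced the
# sequential summation; the real implementation uses Fortran coarrays, and the
# "parallel" pairing here is only mimicked by list indexing.
def tree_sum(values):
    """Reduce a list in O(log n) combining rounds instead of one long chain."""
    vals = list(values)
    while len(vals) > 1:
        paired = []
        for i in range(0, len(vals) - 1, 2):
            paired.append(vals[i] + vals[i + 1])   # neighbours combine per round
        if len(vals) % 2:                          # odd element carried forward
            paired.append(vals[-1])
        vals = paired
    return vals[0]

print(tree_sum(range(1, 33)))   # 528, the same result as sequential summation
```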

  18. Molecular bases for parallel evolution of translucent bracts in an alpine "glasshouse" plant Rheum alexandrae (Polygonaceae)

    Czech Academy of Sciences Publication Activity Database

    Liu, B. B.; Opgenoorth, L.; Miehe, G.; Zhang, D.-Y.; Wan, D.-S.; Zhao, C.-M.; Jia, Dong-Rui; Liu, J.-Q.

    2013-01-01

    Vol. 51, No. 2 (2013), pp. 134-141. ISSN 1674-4918. Institutional support: RVO:67985939. Keywords: cDNA-AFLPs; parallel evolution; adaptations, mutations, diversity. Subject RIV: EF - Botanics. Impact factor: 1.648, year: 2013

  19. Mixed integer evolution strategies for parameter optimization.

    Science.gov (United States)

    Li, Rui; Emmerich, Michael T M; Eggermont, Jeroen; Bäck, Thomas; Schütz, M; Dijkstra, J; Reiber, J H C

    2013-01-01

    Evolution strategies (ESs) are powerful probabilistic search and optimization algorithms gleaned from biological evolution theory. They have been successfully applied to a wide range of real world applications. The modern ESs are mainly designed for solving continuous parameter optimization problems. Their ability to adapt the parameters of the multivariate normal distribution used for mutation during the optimization run makes them well suited for this domain. In this article we describe and study mixed integer evolution strategies (MIES), which are natural extensions of ES for mixed integer optimization problems. MIES can deal with parameter vectors consisting not only of continuous variables but also with nominal discrete and integer variables. Following the design principles of the canonical evolution strategies, they use specialized mutation operators tailored for the aforementioned mixed parameter classes. For each type of variable, the choice of mutation operators is governed by a natural metric for this variable type, maximal entropy, and symmetry considerations. All distributions used for mutation can be controlled in their shape by means of scaling parameters, allowing self-adaptation to be implemented. After introducing and motivating the conceptual design of the MIES, we study the optimality of the self-adaptation of step sizes and mutation rates on a generalized (weighted) sphere model. Moreover, we prove global convergence of the MIES on a very general class of problems. The remainder of the article is devoted to performance studies on artificial landscapes (barrier functions and mixed integer NK landscapes), and a case study in the optimization of medical image analysis systems. In addition, we show that with proper constraint handling techniques, MIES can also be applied to classical mixed integer nonlinear programming problems.
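
    The three mutation classes described above can be sketched as follows: continuous variables with log-normal self-adaptive step sizes, integer variables mutated by a difference of geometrically distributed steps, and nominal variables resampled uniformly with some mutation probability. The constants, domains, and variable names below are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch of the three MIES mutation classes: continuous (log-normal
# self-adaptation + Gaussian step), integer (difference of two geometric
# variables), and nominal (uniform resampling with probability pm).
# All constants and domains are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

def mutate_continuous(x, sigma, tau=0.3):
    sigma_new = sigma * np.exp(tau * rng.normal())      # self-adapt step size first
    return x + sigma_new * rng.normal(), sigma_new

def mutate_integer(z, p=0.3):
    # Difference of two geometric variables gives a symmetric integer step.
    return z + rng.geometric(p) - rng.geometric(p)

def mutate_nominal(d, domain, pm=0.2):
    return rng.choice(domain) if rng.random() < pm else d

x, sigma = mutate_continuous(1.5, 0.5)
z = mutate_integer(7)
d = mutate_nominal("catheter", ["catheter", "vessel", "background"])
print(x, sigma, z, d)
```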

  20. Inertia in strategy switching transforms the strategy evolution.

    Science.gov (United States)

    Zhang, Yanling; Fu, Feng; Wu, Te; Xie, Guangming; Wang, Long

    2011-12-01

    A recent experimental study [Traulsen et al., Proc. Natl. Acad. Sci. 107, 2962 (2010)] shows that human strategy updating involves both direct payoff comparison and the cost of switching strategy, which is equivalent to inertia. However, it remains largely unclear how such a predisposed inertia affects 2 × 2 games in a well-mixed population of finite size. To address this issue, the "inertia bonus" (strategy switching cost) is added to the learner's payoff in the Fermi process. We find how inertia quantitatively shapes the stationary distribution and that stochastic stability under inertia exhibits three regimes, with each covering seven regions in the plane spanned by two inertia parameters. We also obtain the extended "1/3" rule with inertia and the speed criterion with inertia; these two findings hold for populations larger than two. We illustrate the above results in the framework of the Prisoner's Dilemma game. As inertia varies, two intriguing stationary distributions emerge: the probability of the coexistence state is maximized, or those of the two full states are simultaneously peaked. Our results may provide useful insights into how the inertia of changing the status quo acts on strategy evolution and, in particular, the evolution of cooperation.
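
    The update rule described above, a Fermi imitation probability with an inertia bonus added to the learner's own payoff, can be written in a few lines. The selection intensity beta and the bonus value below are placeholders, not values from the paper.

```python
# Sketch of the Fermi imitation rule with an "inertia bonus" added to the
# learner's own payoff; beta and the bonus value are placeholders.
import math, random

def switch_probability(pi_self, pi_role_model, bonus, beta=1.0):
    # Inertia raises the effective payoff of keeping the current strategy,
    # so switching requires a larger payoff advantage to become likely.
    return 1.0 / (1.0 + math.exp(-beta * (pi_role_model - (pi_self + bonus))))

random.seed(3)
p = switch_probability(pi_self=2.0, pi_role_model=2.5, bonus=0.8)
print(p, "switch" if random.random() < p else "stay")
```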

  1. Blackboxing: social learning strategies and cultural evolution.

    Science.gov (United States)

    Heyes, Cecilia

    2016-05-05

    Social learning strategies (SLSs) enable humans, non-human animals, and artificial agents to make adaptive decisions about when they should copy other agents, and who they should copy. Behavioural ecologists and economists have discovered an impressive range of SLSs, and explored their likely impact on behavioural efficiency and reproductive fitness while using the 'phenotypic gambit'; ignoring, or remaining deliberately agnostic about, the nature and origins of the cognitive processes that implement SLSs. Here I argue that this 'blackboxing' of SLSs is no longer a viable scientific strategy. It has contributed, through the 'social learning strategies tournament', to the premature conclusion that social learning is generally better than asocial learning, and to a deep puzzle about the relationship between SLSs and cultural evolution. The puzzle can be solved by recognizing that whereas most SLSs are 'planetary'--they depend on domain-general cognitive processes--some SLSs, found only in humans, are 'cook-like'--they depend on explicit, metacognitive rules, such as 'copy digital natives'. These metacognitive SLSs contribute to cultural evolution by fostering the development of processes that enhance the exclusivity, specificity, and accuracy of social learning. © 2016 The Author(s).

  2. Blackboxing: social learning strategies and cultural evolution

    Science.gov (United States)

    Heyes, Cecilia

    2016-01-01

    Social learning strategies (SLSs) enable humans, non-human animals, and artificial agents to make adaptive decisions about when they should copy other agents, and who they should copy. Behavioural ecologists and economists have discovered an impressive range of SLSs, and explored their likely impact on behavioural efficiency and reproductive fitness while using the ‘phenotypic gambit’; ignoring, or remaining deliberately agnostic about, the nature and origins of the cognitive processes that implement SLSs. Here I argue that this ‘blackboxing' of SLSs is no longer a viable scientific strategy. It has contributed, through the ‘social learning strategies tournament', to the premature conclusion that social learning is generally better than asocial learning, and to a deep puzzle about the relationship between SLSs and cultural evolution. The puzzle can be solved by recognizing that whereas most SLSs are ‘planetary'—they depend on domain-general cognitive processes—some SLSs, found only in humans, are ‘cook-like'—they depend on explicit, metacognitive rules, such as copy digital natives. These metacognitive SLSs contribute to cultural evolution by fostering the development of processes that enhance the exclusivity, specificity, and accuracy of social learning. PMID:27069046

  3. Parallel vs. Convergent Evolution in Domestication and Diversification of Crops in the Americas

    Directory of Open Access Journals (Sweden)

    Barbara Pickersgill

    2018-05-01

    Full Text Available Domestication involves changes in various traits of the phenotype in response to human selection. Diversification may accompany or follow domestication, and results in variants within the crop adapted to different uses by humans or different agronomic conditions. Similar domestication and diversification traits may be shared by closely related species (parallel evolution or by distantly related species (convergent evolution. Many of these traits are produced by complex genetic networks or long biosynthetic pathways that are extensively conserved even in distantly related species. Similar phenotypic changes in different species may be controlled by homologous genes (parallel evolution at the genetic level or non-homologous genes (convergent evolution at the genetic level. It has been suggested that parallel evolution may be more frequent among closely related species, or among diversification rather than domestication traits, or among traits produced by simple metabolic pathways. Crops domesticated in the Americas span a spectrum of genetic relatedness, have been domesticated for diverse purposes, and have responded to human selection by changes in many different traits, so provide examples of both parallel and convergent evolution at various levels. However, despite the current explosion in relevant information, data are still insufficient to provide quantitative or conclusive assessments of the relative roles of these two processes in domestication and diversification

  4. Comparison of some parallelization strategies of thermalhydraulic codes on GPUs

    International Nuclear Information System (INIS)

    Jendoubi, T.; Bergeaud, V.; Geay, A.

    2013-01-01

    Modern supercomputers architecture is now often based on hybrid concepts combining parallelism to distributed memory, parallelism to shared memory and also to GPUs (Graphic Process Units). In this work, we propose a new approach to take advantage of these graphic cards in thermohydraulics algorithms. (authors)

  5. Parallel Evolution of Genes and Languages in the Caucasus Region

    Science.gov (United States)

    Balanovsky, Oleg; Dibirova, Khadizhat; Dybo, Anna; Mudrak, Oleg; Frolova, Svetlana; Pocheshkhova, Elvira; Haber, Marc; Platt, Daniel; Schurr, Theodore; Haak, Wolfgang; Kuznetsova, Marina; Radzhabov, Magomed; Balaganskaya, Olga; Romanov, Alexey; Zakharova, Tatiana; Soria Hernanz, David F.; Zalloua, Pierre; Koshel, Sergey; Ruhlen, Merritt; Renfrew, Colin; Wells, R. Spencer; Tyler-Smith, Chris; Balanovska, Elena

    2012-01-01

    We analyzed 40 SNP and 19 STR Y-chromosomal markers in a large sample of 1,525 indigenous individuals from 14 populations in the Caucasus and 254 additional individuals representing potential source populations. We also employed a lexicostatistical approach to reconstruct the history of the languages of the North Caucasian family spoken by the Caucasus populations. We found a different major haplogroup to be prevalent in each of four sets of populations that occupy distinct geographic regions and belong to different linguistic branches. The haplogroup frequencies correlated with geography and, even more strongly, with language. Within haplogroups, a number of haplotype clusters were shown to be specific to individual populations and languages. The data suggested a direct origin of Caucasus male lineages from the Near East, followed by high levels of isolation, differentiation and genetic drift in situ. Comparison of genetic and linguistic reconstructions covering the last few millennia showed striking correspondences between the topology and dates of the respective gene and language trees, and with documented historical events. Overall, in the Caucasus region, unmatched levels of gene-language co-evolution occurred within geographically isolated populations, probably due to its mountainous terrain. PMID:21571925

  6. Mixed-time parallel evolution in multiple quantum NMR experiments: sensitivity and resolution enhancement in heteronuclear NMR

    International Nuclear Information System (INIS)

    Ying Jinfa; Chill, Jordan H.; Louis, John M.; Bax, Ad

    2007-01-01

    A new strategy is demonstrated that simultaneously enhances sensitivity and resolution in three- or higher-dimensional heteronuclear multiple quantum NMR experiments. The approach, referred to as mixed-time parallel evolution (MT-PARE), utilizes evolution of chemical shifts of the spins participating in the multiple quantum coherence in parallel, thereby reducing signal losses relative to sequential evolution. The signal in a given PARE dimension, t1, is of a non-decaying constant-time nature for a duration that depends on the length of t2, and vice versa, prior to the onset of conventional exponential decay. Line shape simulations for the 1H-15N PARE indicate that this strategy significantly enhances both sensitivity and resolution in the indirect 1H dimension, and that the unusual signal decay profile results in acceptable line shapes. Incorporation of the MT-PARE approach into a 3D HMQC-NOESY experiment for measurement of HN-HN NOEs in KcsA in SDS micelles at 50 °C was found to increase the experimental sensitivity by a factor of 1.7±0.3 with a concomitant resolution increase in the indirectly detected 1H dimension. The method is also demonstrated for a situation in which homonuclear 13C-13C decoupling is required while measuring weak H3'-2'OH NOEs in an RNA oligomer.

  7. Bacteria vs. bacteriophages: parallel evolution of immune arsenals

    Directory of Open Access Journals (Sweden)

    Muhammad Abu Bakr Shabbir

    2016-08-01

    Full Text Available Bacteriophages are the most common entities on earth and represent a constant challenge to bacterial populations. To fend off bacteriophage infection, bacteria evolved immune systems to avert phage adsorption and block invader DNA entry. They developed restriction-modification systems and mechanisms to abort infection and interfere with virion assembly, as well as newly recognized clustered regularly interspaced short palindromic repeats (CRISPR. In response to bacterial immune systems, bacteriophages synchronously evolved resistance mechanisms, such as the anti-CRISPR systems to counterattack bacterial CRISPR-cas systems, in a continuing evolutionary arms race between virus and host. In turn, it is fundamental to the survival of the bacterial cell to evolve a system to combat bacteriophage immune strategies.

  8. Evolution Strategies in the Multipoint Connections Routing

    Directory of Open Access Journals (Sweden)

    L. Krulikovska

    2010-09-01

    Full Text Available The routing of multipoint connections plays an important role in the final cost and quality of the found connection, and new algorithms with better results are still being sought. In this paper, the possibility of using evolution strategies (ES) for routing is presented. The quality of a found connection is evaluated in terms of its final cost and the time spent on the search procedure. First, a parametric analysis of the ES results is discussed and compared with Prim's algorithm, chosen as a representative of deterministic routing algorithms. Second, ways of improving the ES are suggested and implemented, and the obtained results are reviewed. The main improvements are specified and discussed in the conclusion.
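
    For reference, the deterministic baseline named above, Prim's algorithm, is sketched below on a small invented graph. The evolution-strategy router itself is not shown here.

```python
# Textbook Prim's algorithm, included only as the deterministic baseline
# mentioned above; the ES-based router is not reproduced. Graph is invented.
import heapq

def prim_mst_cost(adj):
    """adj: {node: [(weight, neighbour), ...]} for a connected undirected graph."""
    start = next(iter(adj))
    visited, heap, cost = {start}, list(adj[start]), 0
    heapq.heapify(heap)
    while heap and len(visited) < len(adj):
        w, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        cost += w
        for edge in adj[v]:
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return cost

graph = {"A": [(2, "B"), (3, "C")], "B": [(2, "A"), (1, "C"), (4, "D")],
         "C": [(3, "A"), (1, "B"), (5, "D")], "D": [(4, "B"), (5, "C")]}
print(prim_mst_cost(graph))   # 2 + 1 + 4 = 7
```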

  9. Efficient receiver tuning using differential evolution strategies

    Science.gov (United States)

    Wheeler, Caleb H.; Toland, Trevor G.

    2016-08-01

    Differential evolution (DE) is a powerful and computationally inexpensive optimization strategy that can be used to search an entire parameter space or to converge quickly on a solution. The Kilopixel Array Pathfinder Project (KAPPa) is a heterodyne receiver system delivering 5 GHz of instantaneous bandwidth in the tuning range of 645-695 GHz. The fully automated KAPPa receiver test system finds optimal receiver tuning using performance feedback and DE. We present an adaptation of DE for use in rapid receiver characterization. The KAPPa DE algorithm is written in Python 2.7 and is fully integrated with the KAPPa instrument control, data processing, and visualization code. KAPPa develops the technologies needed to realize heterodyne focal plane arrays containing 1000 pixels. Finding optimal receiver tuning by investigating large parameter spaces is one of many challenges facing the characterization phase of KAPPa, and it is a difficult task to perform by hand. Characterizing or tuning in an automated fashion, without the need for human intervention, is desirable for future large-scale arrays. While many optimization strategies exist, DE suits these time and performance constraints because it can be set to converge to a solution rapidly with minimal computational overhead. We discuss how DE is utilized in the KAPPa system, evaluate its performance, and look toward the future of 1000-pixel array receivers, considering how the KAPPa DE system might be applied.
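
    A minimal DE/rand/1/bin loop is sketched below on a toy two-parameter objective standing in for measured receiver performance. It is not the KAPPa instrument-control code; population size, F, CR, bounds, and the objective are all invented.

```python
# Minimal DE/rand/1/bin on a toy 2-D objective -- a sketch of the optimization
# strategy only, not KAPPa's code. All constants and the objective are invented.
import numpy as np

rng = np.random.default_rng(7)

def objective(v):                       # stand-in for measured receiver noise
    bias, lo_power = v
    return (bias - 2.1) ** 2 + (lo_power - 0.7) ** 2

bounds = np.array([[0.0, 5.0], [0.0, 2.0]])
NP, F, CR = 20, 0.8, 0.9
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(NP, 2))
fit = np.array([objective(p) for p in pop])

for _ in range(100):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = np.clip(a + F * (b - c), bounds[:, 0], bounds[:, 1])
        cross = rng.random(2) < CR
        cross[rng.integers(2)] = True           # guarantee at least one mutated gene
        trial = np.where(cross, mutant, pop[i])
        f = objective(trial)
        if f <= fit[i]:                         # greedy selection
            pop[i], fit[i] = trial, f

print(pop[fit.argmin()], fit.min())
```

    When the objective is an ordinary Python function rather than live hardware feedback, SciPy's ready-made scipy.optimize.differential_evolution offers the same strategy without hand-rolling the loop.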

  10. Academic training: From Evolution Theory to Parallel and Distributed Genetic Programming

    CERN Multimedia

    2007-01-01

    2006-2007 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 15, 16 March From 11:00 to 12:00 - Main Auditorium, bldg. 500 From Evolution Theory to Parallel and Distributed Genetic Programming F. FERNANDEZ DE VEGA / Univ. of Extremadura, SP Lecture No. 1: From Evolution Theory to Evolutionary Computation Evolutionary computation is a subfield of artificial intelligence (more particularly computational intelligence) involving combinatorial optimization problems, which are based to some degree on the evolution of biological life in the natural world. In this tutorial we will review the source of inspiration for this metaheuristic and its capability for solving problems. We will show the main flavours within the field, and different problems that have been successfully solved employing this kind of techniques. Lecture No. 2: Parallel and Distributed Genetic Programming The successful application of Genetic Programming (GP, one of the available Evolutionary Algorithms) to optimization problems has encouraged an ...

  11. Parallel electric fields in a simulation of magnetotail reconnection and plasmoid evolution

    International Nuclear Information System (INIS)

    Hesse, M.; Birn, J.

    1990-01-01

    Properties of the electric field component parallel to the magnetic field are investigated in a 3D MHD simulation of plasmoid formation and evolution in the magnetotail, in the presence of a net dawn-dusk magnetic field component. The spatial localization of E-parallel, the concept of a diffusion zone, and the role of E-parallel in accelerating electrons are discussed. A localization of the region of enhanced E-parallel in all space directions is found, with a strong concentration in the z direction. This region is identified as the diffusion zone, which plays a crucial role in reconnection theory through the local break-down of magnetic flux conservation. 12 refs

  12. A path-level exact parallelization strategy for sequential simulation

    Science.gov (United States)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
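
    The two-stage idea above, re-arranging the random path and then simulating non-conflicting nodes concurrently, is caricatured below, with "non-conflicting" approximated as nodes separated by more than the search radius. This toy is not the SISIM/SGSIM code and does not reproduce the paper's exactness guarantee; node coordinates and the radius are invented.

```python
# Toy sketch of the path-level idea: build a random path, then greedily group
# consecutive nodes into batches whose members are mutually farther apart than
# the search radius, so they could be simulated concurrently. NOT GSLIB code,
# and the exactness bookkeeping of the real method is omitted.
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))     # invented node locations
path = rng.permutation(len(coords))             # stage 1: random simulation path
radius = 15.0                                   # invented search radius

batches, current = [], []
for node in path:
    conflict = any(np.linalg.norm(coords[node] - coords[m]) < radius
                   for m in current)
    if conflict:                # node would see a neighbour in this batch
        batches.append(current)
        current = [node]
    else:                       # independent of everything already in the batch
        current.append(node)
batches.append(current)

print(len(batches), "batches; largest holds", max(map(len, batches)), "nodes")
```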

  13. Molecular pathways to parallel evolution: I. Gene nexuses and their morphological correlates.

    Science.gov (United States)

    Zuckerkandl, E

    1994-12-01

    Aspects of the regulatory interactions among genes are probably as old as most genes are themselves. Correspondingly, similar predispositions to changes in such interactions must have existed for long evolutionary periods. Features of the structure and the evolution of the system of gene regulation furnish the background necessary for a molecular understanding of parallel evolution. Patently "unrelated" organs, such as the fat body of a fly and the liver of a mammal, can exhibit fractional homology, a fraction expected to become subject to quantitation. This also seems to hold for different organs in the same organism, such as wings and legs of a fly. In informational macromolecules, on the other hand, homology is indeed all or none. In the quite different case of organs, analogy is expected usually to represent attenuated homology. Many instances of putative convergence are likely to turn out to be predominantly parallel evolution, presumably including the case of the vertebrate and cephalopod eyes. Homology in morphological features reflects a similarity in networks of active genes. Similar nexuses of active genes can be established in cells of different embryological origins. Thus, parallel development can be considered a counterpart to parallel evolution. Specific macromolecular interactions leading to the regulation of the c-fos gene are given as an example of a "controller node" defined as a regulatory unit. Quantitative changes in gene control are distinguished from relational changes, and frequent parallelism in quantitative changes is noted in Drosophila enzymes. Evolutionary reversions in quantitative gene expression are also expected. The evolution of relational patterns is attributed to several distinct mechanisms, notably the shuffling of protein domains. The growth of such patterns may in part be brought about by a particular process of compensation for "controller gene diseases," a process that would spontaneously tend to lead to increased regulatory

  14. Parallel Note-Taking: A Strategy for Effective Use of Webnotes

    Science.gov (United States)

    Pardini, Eleanor A.; Domizi, Denise P.; Forbes, Daniel A.; Pettis, Gretchen V.

    2005-01-01

    Many instructors supply online lecture notes but little attention has been given to how students can make the best use of this resource. Based on observations of student difficulties with these notes, a strategy called parallel note-taking was developed for using online notes. The strategy is a hybrid of research-proven strategies for effective…

  15. Stochastic resonance and the evolution of Daphnia foraging strategy

    International Nuclear Information System (INIS)

    Dees, Nathan D; Bahar, Sonya; Moss, Frank

    2008-01-01

    Search strategies are currently of great interest, with reports on foraging ranging from albatrosses and spider monkeys to microzooplankton. Here, we investigate the role of noise in optimizing search strategies. We focus on the zooplankton Daphnia, which move in successive sequences consisting of a hop, a pause and a turn through an angle. Recent experiments have shown that their turning angle distributions (TADs) and underlying noise intensities are similar across species and age groups, suggesting an evolutionary origin of this internal noise. We explore this hypothesis further with a digital simulation (EVO) based solely on the three central Darwinian themes: inheritability, variability and survivability. Separate simulations utilizing stochastic resonance (SR) indicate that foraging success, and hence fitness, is maximized at an optimum TAD noise intensity, which is represented by the distribution's characteristic width, σ. In both the EVO and SR simulations, foraging success is the criterion, and the results are the predicted characteristic widths of the TADs that maximize success. Our results are twofold: (1) the evolving characteristic widths achieve stasis after many generations; (2) as a hop length parameter is changed, variations in the evolved widths generated by EVO parallel those predicted by SR. These findings provide support for the hypotheses that (1) σ is an evolved quantity and that (2) SR plays a role in evolution. (communication)

  16. Development of a parallelization strategy for the VARIANT code

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Khalil, H.S.; Palmiotti, G.; Tatsumi, M.

    1996-01-01

    The VARIANT code solves the multigroup steady-state neutron diffusion and transport equation in three-dimensional Cartesian and hexagonal geometries using the variational nodal method. VARIANT consists of four major parts that must be executed sequentially: input handling, calculation of response matrices, solution algorithm (i.e. inner-outer iteration), and output of results. The objective of the parallelization effort was to reduce the overall computing time by distributing the work of the two computationally intensive (sequential) tasks, the coupling coefficient calculation and the iterative solver, equally among a group of processors. This report describes the code's calculations and gives performance results on one of the benchmark problems used to test the code. The performance analysis in the IBM SPx system shows good efficiency for well-load-balanced programs. Even for relatively small problem sizes, respectable efficiencies are seen for the SPx. An extension to achieve a higher degree of parallelism will be addressed in future work. 7 refs., 1 tab

  17. Pursuing Darwin's curious parallel: Prospects for a science of cultural evolution.

    Science.gov (United States)

    Mesoudi, Alex

    2017-07-24

    In the past few decades, scholars from several disciplines have pursued the curious parallel noted by Darwin between the genetic evolution of species and the cultural evolution of beliefs, skills, knowledge, languages, institutions, and other forms of socially transmitted information. Here, I review current progress in the pursuit of an evolutionary science of culture that is grounded in both biological and evolutionary theory, but also treats culture as more than a proximate mechanism that is directly controlled by genes. Both genetic and cultural evolution can be described as systems of inherited variation that change over time in response to processes such as selection, migration, and drift. Appropriate differences between genetic and cultural change are taken seriously, such as the possibility in the latter of nonrandomly guided variation or transformation, blending inheritance, and one-to-many transmission. The foundation of cultural evolution was laid in the late 20th century with population-genetic style models of cultural microevolution, and the use of phylogenetic methods to reconstruct cultural macroevolution. Since then, there have been major efforts to understand the sociocognitive mechanisms underlying cumulative cultural evolution, the consequences of demography on cultural evolution, the empirical validity of assumed social learning biases, the relative role of transformative and selective processes, and the use of quantitative phylogenetic and multilevel selection models to understand past and present dynamics of society-level change. I conclude by highlighting the interdisciplinary challenges of studying cultural evolution, including its relation to the traditional social sciences and humanities.

  18. Pursuing Darwin’s curious parallel: Prospects for a science of cultural evolution

    Science.gov (United States)

    2017-01-01

    In the past few decades, scholars from several disciplines have pursued the curious parallel noted by Darwin between the genetic evolution of species and the cultural evolution of beliefs, skills, knowledge, languages, institutions, and other forms of socially transmitted information. Here, I review current progress in the pursuit of an evolutionary science of culture that is grounded in both biological and evolutionary theory, but also treats culture as more than a proximate mechanism that is directly controlled by genes. Both genetic and cultural evolution can be described as systems of inherited variation that change over time in response to processes such as selection, migration, and drift. Appropriate differences between genetic and cultural change are taken seriously, such as the possibility in the latter of nonrandomly guided variation or transformation, blending inheritance, and one-to-many transmission. The foundation of cultural evolution was laid in the late 20th century with population-genetic style models of cultural microevolution, and the use of phylogenetic methods to reconstruct cultural macroevolution. Since then, there have been major efforts to understand the sociocognitive mechanisms underlying cumulative cultural evolution, the consequences of demography on cultural evolution, the empirical validity of assumed social learning biases, the relative role of transformative and selective processes, and the use of quantitative phylogenetic and multilevel selection models to understand past and present dynamics of society-level change. I conclude by highlighting the interdisciplinary challenges of studying cultural evolution, including its relation to the traditional social sciences and humanities. PMID:28739929

  19. The role of Bh4 in parallel evolution of hull colour in domesticated and weedy rice.

    Science.gov (United States)

    Vigueira, C C; Li, W; Olsen, K M

    2013-08-01

    The two independent domestication events in the genus Oryza that led to African and Asian rice offer an extremely useful system for studying the genetic basis of parallel evolution. This system is also characterized by parallel de-domestication events, with two genetically distinct weedy rice biotypes in the US derived from the Asian domesticate. One important trait that has been altered by rice domestication and de-domestication is hull colour. The wild progenitors of the two cultivated rice species have predominantly black-coloured hulls, as does one of the two U.S. weed biotypes; both cultivated species and one of the US weedy biotypes are characterized by straw-coloured hulls. Using Black hull 4 (Bh4) as a hull colour candidate gene, we examined DNA sequence variation at this locus to study the parallel evolution of hull colour variation in the domesticated and weedy rice system. We find that independent Bh4-coding mutations have arisen in African and Asian rice that are correlated with the straw hull phenotype, suggesting that the same gene is responsible for parallel trait evolution. For the U.S. weeds, Bh4 haplotype sequences support current hypotheses on the phylogenetic relationship between the two biotypes and domesticated Asian rice; straw hull weeds are most similar to indica crops, and black hull weeds are most similar to aus crops. Tests for selection indicate that Asian crops and straw hull weeds deviate from neutrality at this gene, suggesting possible selection on Bh4 during both rice domestication and de-domestication. © 2013 The Authors. Journal of Evolutionary Biology © 2013 European Society For Evolutionary Biology.

  20. Evolution of strategies for modern rechargeable batteries.

    Science.gov (United States)

    Goodenough, John B

    2013-05-21

    This Account provides perspective on the evolution of the rechargeable battery and summarizes innovations in the development of these devices. Initially, I describe the components of a conventional rechargeable battery along with the engineering parameters that define the figures of merit for a single cell. In 1967, researchers discovered fast Na(+) conduction at 300 K in Na β,β''-alumina. Since then battery technology has evolved from a strongly acidic or alkaline aqueous electrolyte with protons as the working ion to an organic liquid-carbonate electrolyte with Li(+) as the working ion in a Li-ion battery. The invention of the sodium-sulfur and Zebra batteries stimulated consideration of framework structures as crystalline hosts for mobile guest alkali ions, and the jump in oil prices in the early 1970s prompted researchers to consider alternative room-temperature batteries with aprotic liquid electrolytes. With the existence of Li primary cells and ongoing research on the chemistry of reversible Li intercalation into layered chalcogenides, industry invested in the production of a Li/TiS2 rechargeable cell. However, on repeated recharge, dendrites grew across the electrolyte from the anode to the cathode, leading to dangerous short-circuits in the cell in the presence of the flammable organic liquid electrolyte. Because lowering the voltage of the anode would prevent cells with layered-chalcogenide cathodes from competing with cells that had an aqueous electrolyte, researchers quickly abandoned this effort. However, once it was realized that an oxide cathode could offer a larger voltage versus lithium, researchers considered the extraction of Li from the layered LiMO2 oxides with M = Co or Ni. These oxide cathodes were fabricated in a discharged state, and battery manufacturers could not conceive of assembling a cell with a discharged cathode. Meanwhile, exploration of Li intercalation into graphite showed that reversible Li insertion into carbon occurred

  1. Parallel processes: using motivational interviewing as an implementation coaching strategy.

    Science.gov (United States)

    Hettema, Jennifer E; Ernst, Denise; Williams, Jessica Roberts; Miller, Kristin J

    2014-07-01

    In addition to its clinical efficacy as a communication style for strengthening motivation and commitment to change, motivational interviewing (MI) has been hypothesized to be a potential tool for facilitating evidence-based practice adoption decisions. This paper reports on the rationale and content of MI-based implementation coaching Webinars that, as part of a larger active dissemination strategy, were found to be more effective than passive dissemination strategies at promoting adoption decisions among behavioral health and health providers and administrators. The Motivational Interviewing Treatment Integrity scale (MITI 3.1.1) was used to rate coaching Webinars from 17 community behavioral health organizations and 17 community health centers. The MITI coding system was found to be applicable to the coaching Webinars, and raters achieved high levels of agreement on global and behavior count measurements of fidelity to MI. Results revealed that implementation coaches maintained fidelity to the MI model, exceeding competency benchmarks for almost all measures. Findings suggest that it is feasible to implement MI as a coaching tool.

  2. Research on Control Strategy of Complex Systems through VSC-HVDC Grid Parallel Device

    Directory of Open Access Journals (Sweden)

    Xue Mei-Juan

    2014-07-01

    Full Text Available After grid paralleling is completed, the device can be reconfigured as a UPFC, STATCOM, or SSSC; the corresponding conversion circuits and transformation methods realized by switching operations are studied. The device thus accomplishes grid paralleling, comprehensive control of the tie-line, and stable operation and control of the grid after paralleling. A function-selection switch matrix and the branch variables of the grid-parallel system are defined, forming a switch matrix that realizes the corresponding function of the composite system. A selection criterion is then formulated to choose the control strategy according to the switch matrix and accomplish the corresponding function. Combining grid paralleling, STATCOM, SSSC, and UPFC into a single system improves the stable operation and flexible control of the power system.

  3. The strategy of parallel approaches in projects with unforeseeable uncertainty: the Manhattan case in retrospect

    OpenAIRE

    Sylvain Lenfle

    2011-01-01

    International audience; This paper discusses the literature on the management of projects with unforeseeable uncertainty. Recent work demonstrates that, when confronted with unforeseeable uncertainties, managers can adopt either a learning, trial-and-error-based strategy, or a parallel approach. In the latter, different solutions are developed in parallel and the best one is chosen when enough information becomes available. Studying the case of the Manhattan Project, which historically exempl...

  4. A Parallel Strategy for Convolutional Neural Network Based on Heterogeneous Cluster for Mobile Information System

    Directory of Open Access Journals (Sweden)

    Jilin Zhang

    2017-01-01

Full Text Available With the development of mobile systems, we gain many benefits and much convenience from mobile devices; at the same time, the information gathered by smartphones, such as location and environment, is also valuable for businesses seeking to provide more intelligent services to customers. More and more machine learning methods, especially convolutional neural networks, have been used in the field of mobile information systems to study user behavior and classify usage patterns. As model parameters and data scale increase, traditional single-machine training can no longer meet the time requirements of practical application scenarios. Current training frameworks often use simple data-parallel or model-parallel methods to speed up training, so heterogeneous computing resources are not fully utilized. To solve these problems, this paper proposes a delay-synchronization parallel strategy for convolutional neural networks that leverages heterogeneous systems. The strategy combines synchronous and asynchronous parallel approaches; the training process reduces its dependence on the heterogeneous architecture while ensuring model convergence, so the convolutional neural network framework adapts more readily to different heterogeneous system environments. The experimental results show that the proposed delay-synchronization strategy achieves at least three times the speedup of traditional data parallelism.
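
    The record above does not include implementation details; the sketch below is a minimal, hedged illustration of a stale-synchronous ("delay synchronization") update loop in which workers apply gradients asynchronously but are never allowed to drift more than a fixed number of steps apart. The gradient function, the staleness bound MAX_DELAY and all numerical values are illustrative assumptions, not the authors' framework.

```python
import threading
import numpy as np

# Minimal sketch of a stale-synchronous ("delay synchronization") update loop:
# workers apply gradients asynchronously, but no worker may run more than
# MAX_DELAY steps ahead of the slowest one.  `local_gradient` is a toy
# stand-in for a per-worker CNN back-propagation step.
MAX_DELAY, N_WORKERS, STEPS = 4, 4, 100

params = np.zeros(10)                    # shared model parameters
clock = [0] * N_WORKERS                  # per-worker iteration counters
progress = threading.Condition()

def local_gradient(w, rng):
    return 0.01 * (w - rng.normal(size=w.shape))     # toy gradient

def worker(rank):
    rng = np.random.default_rng(rank)
    for step in range(STEPS):
        with progress:
            # block while this worker is too far ahead of the slowest one
            progress.wait_for(lambda: step - min(clock) <= MAX_DELAY)
            params[:] -= 0.1 * local_gradient(params, rng)   # async update
            clock[rank] = step + 1
            progress.notify_all()

threads = [threading.Thread(target=worker, args=(r,)) for r in range(N_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("final parameter norm:", np.linalg.norm(params))
```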

  5. Darwin's concepts in a test tube: parallels between organismal and in vitro evolution.

    Science.gov (United States)

    Díaz Arenas, Carolina; Lehman, Niles

    2009-02-01

    The evolutionary process as imagined by Darwin 150 years ago is evident not only in nature but also in the manner in which naked nucleic acids and proteins experience the "survival of the fittest" in the test tube during in vitro evolution. This review highlights some of the most apparent evolutionary patterns, such as directional selection, purifying selection, disruptive selection, and iterative evolution (recurrence), and draws parallels between what happens in the wild with whole organisms and what happens in the lab with molecules. Advances in molecular selection techniques, particularly with catalytic RNAs and DNAs, have accelerated in the last 20 years to the point where soon any sort of complex differential hereditary event that one can ascribe to natural populations will be observable in molecular populations, and exploitation of these events can even lead to practical applications in some cases.

  6. Parallel evolution of mound-building and grass-feeding in Australian nasute termites.

    Science.gov (United States)

    Arab, Daej A; Namyatova, Anna; Evans, Theodore A; Cameron, Stephen L; Yeates, David K; Ho, Simon Y W; Lo, Nathan

    2017-02-01

    Termite mounds built by representatives of the family Termitidae are among the most spectacular constructions in the animal kingdom, reaching 6-8 m in height and housing millions of individuals. Although functional aspects of these structures are well studied, their evolutionary origins remain poorly understood. Australian representatives of the termitid subfamily Nasutitermitinae display a wide variety of nesting habits, making them an ideal group for investigating the evolution of mound building. Because they feed on a variety of substrates, they also provide an opportunity to illuminate the evolution of termite diets. Here, we investigate the evolution of termitid mound building and diet, through a comprehensive molecular phylogenetic analysis of Australian Nasutitermitinae. Molecular dating analysis indicates that the subfamily has colonized Australia on three occasions over the past approximately 20 Myr. Ancestral-state reconstruction showed that mound building arose on multiple occasions and from diverse ancestral nesting habits, including arboreal and wood or soil nesting. Grass feeding appears to have evolved from wood feeding via ancestors that fed on both wood and leaf litter. Our results underscore the adaptability of termites to ancient environmental change, and provide novel examples of parallel evolution of extended phenotypes. © 2017 The Author(s).

  7. Iran's Sea Power Strategy: Goals and Evolution

    National Research Council Canada - National Science Library

    Walker, John

    1997-01-01

    This thesis examines the intent of Iran's sea power strategy using a multipart analysis including a historical review of the transition of Iran's naval power through the Iranian Revolution, Iran-Iraq...

  8. Optimal control applied to the control strategy of a parallel hybrid vehicle; Commande optimale appliquee a la strategie de commande d'un vehicule hybride parallele

    Energy Technology Data Exchange (ETDEWEB)

    Delprat, S.; Guerra, T.M. [Universite de Valenciennes et du Hainaut-Cambresis, LAMIH UMR CNRS 8530, 59 - Valenciennes (France); Rimaux, J. [PSA Peugeot Citroen, DRIA/SARA/EEES, 78 - Velizy Villacoublay (France); Paganelli, G. [Center for Automotive Research, Ohio (United States)

    2002-07-01

Control strategies are algorithms that calculate the power split between the engine and the motor of a hybrid vehicle in order to minimize the fuel consumption and/or emissions. Some algorithms are devoted to real-time application whereas others are designed for global optimization in simulation. The latter provide solutions which can be used to evaluate the performance of a given hybrid vehicle or a given real-time control strategy. The control strategy problem is first written in the form of a constrained optimization problem. A solution based on optimal control is proposed. Results are given for the European Normalized Cycle and a parallel single-shaft hybrid vehicle built at the LAMIH (France). (authors)

  9. Input-Parallel Output-Parallel Three-Level DC/DC Converters With Interleaving Control Strategy for Minimizing and Balancing Capacitor Ripple Currents

    DEFF Research Database (Denmark)

    Liu, Dong; Deng, Fujin; Gong, Zheng

    2017-01-01

    In this paper, the input-parallel output-parallel (IPOP) three-level (TL) DC/DC converters associated with the interleaving control strategy are proposed for minimizing and balancing the capacitor ripple currents. The proposed converters consist of two four-switch half-bridge three-level (HBTL) DC...

  10. An effective approach to reducing strategy space for maintenance optimisation of multistate series–parallel systems

    International Nuclear Information System (INIS)

    Zhou, Yifan; Lin, Tian Ran; Sun, Yong; Bian, Yangqing; Ma, Lin

    2015-01-01

    Maintenance optimisation of series–parallel systems is a research topic of practical significance. Nevertheless, a cost-effective maintenance strategy is difficult to obtain due to the large strategy space for maintenance optimisation of such systems. The heuristic algorithm is often employed to deal with this problem. However, the solution obtained by the heuristic algorithm is not always the global optimum and the algorithm itself can be very time consuming. An alternative method based on linear programming is thus developed in this paper to overcome such difficulties by reducing strategy space of maintenance optimisation. A theoretical proof is provided in the paper to verify that the proposed method is at least as effective as the existing methods for strategy space reduction. Numerical examples for maintenance optimisation of series–parallel systems having multistate components and considering both economic dependence among components and multiple-level imperfect maintenance are also presented. The simulation results confirm that the proposed method is more effective than the existing methods in removing inappropriate maintenance strategies of multistate series–parallel systems. - Highlights: • A new method using linear programming is developed to reduce the strategy space. • The effectiveness of the new method for strategy reduction is theoretically proved. • Imperfect maintenance and economic dependence are considered during optimisation
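
    The record does not reproduce its linear program here; as a hedged illustration of how linear programming can shrink a maintenance strategy space, the sketch below flags a candidate strategy as removable when a convex combination of the remaining strategies is no more expensive yet at least as reliable. The cost/reliability figures and the dominance test itself are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Candidate maintenance strategies: (expected cost, system reliability).
# Purely illustrative numbers.
strategies = np.array([
    [10.0, 0.90],
    [14.0, 0.95],
    [20.0, 0.97],
    [19.0, 0.93],   # candidate for removal
])

def dominated(j, strat):
    """True if a convex mix of the other strategies costs no more than
    strategy j while achieving at least its reliability."""
    others = np.delete(strat, j, axis=0)
    n = len(others)
    # variables: weights lambda_i >= 0 with sum lambda_i = 1
    res = linprog(
        c=others[:, 0],                         # minimize mixed cost
        A_ub=-others[:, 1].reshape(1, -1),      # -sum(l_i * R_i) <= -R_j
        b_ub=[-strat[j, 1]],
        A_eq=np.ones((1, n)), b_eq=[1.0],
        bounds=[(0, 1)] * n,
    )
    return res.success and res.fun <= strat[j, 0] + 1e-9

keep = [i for i in range(len(strategies)) if not dominated(i, strategies)]
print("strategies kept after LP-based reduction:", keep)
```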

  11. Parallel Evolution of Copy-Number Variation across Continents in Drosophila melanogaster

    Science.gov (United States)

    Schrider, Daniel R.; Hahn, Matthew W.; Begun, David J.

    2016-01-01

    Genetic differentiation across populations that is maintained in the presence of gene flow is a hallmark of spatially varying selection. In Drosophila melanogaster, the latitudinal clines across the eastern coasts of Australia and North America appear to be examples of this type of selection, with recent studies showing that a substantial portion of the D. melanogaster genome exhibits allele frequency differentiation with respect to latitude on both continents. As of yet there has been no genome-wide examination of differentiated copy-number variants (CNVs) in these geographic regions, despite their potential importance for phenotypic variation in Drosophila and other taxa. Here, we present an analysis of geographic variation in CNVs in D. melanogaster. We also present the first genomic analysis of geographic variation for copy-number variation in the sister species, D. simulans, in order to investigate patterns of parallel evolution in these close relatives. In D. melanogaster we find hundreds of CNVs, many of which show parallel patterns of geographic variation on both continents, lending support to the idea that they are influenced by spatially varying selection. These findings support the idea that polymorphic CNVs contribute to local adaptation in D. melanogaster. In contrast, we find very few CNVs in D. simulans that are geographically differentiated in parallel on both continents, consistent with earlier work suggesting that clinal patterns are weaker in this species. PMID:26809315

  12. The Evolution of Exploitation Strategies by Myrmecophiles

    DEFF Research Database (Denmark)

    Schär, Sämi

    than outside ant nests (M. rubra) as well. This fungus can kill ant associated lycaenid larvae, justifying the assumption that these benefit from the entomopathogen poor environment of ant nests. This could explain why natural selection may act in favour of this strategy. In the third chapter I......Myrmecophiles are animals which have evolved to live in the nests of ants. This life history strategy appears in animals as different as insects, spiders, snails, crustaceans and even snakes. Myrmecophiles are very speciose with estimates of up to 100'000 species, which raises the question why...... this strategy has evolved so frequently and is maintained by natural selection. The type of association between Myrmecophiles and ants ranges from mutualistic through to parasitic. These types of symbioses can also be found between and within species of ants. Ant associations can therefore be broadly...

  13. Parallel Evolution of Copy-Number Variation across Continents in Drosophila melanogaster.

    Science.gov (United States)

    Schrider, Daniel R; Hahn, Matthew W; Begun, David J

    2016-05-01

Genetic differentiation across populations that is maintained in the presence of gene flow is a hallmark of spatially varying selection. In Drosophila melanogaster, the latitudinal clines across the eastern coasts of Australia and North America appear to be examples of this type of selection, with recent studies showing that a substantial portion of the D. melanogaster genome exhibits allele frequency differentiation with respect to latitude on both continents. As of yet there has been no genome-wide examination of differentiated copy-number variants (CNVs) in these geographic regions, despite their potential importance for phenotypic variation in Drosophila and other taxa. Here, we present an analysis of geographic variation in CNVs in D. melanogaster. We also present the first genomic analysis of geographic variation for copy-number variation in the sister species, D. simulans, in order to investigate patterns of parallel evolution in these close relatives. In D. melanogaster we find hundreds of CNVs, many of which show parallel patterns of geographic variation on both continents, lending support to the idea that they are influenced by spatially varying selection. These findings support the idea that polymorphic CNVs contribute to local adaptation in D. melanogaster. In contrast, we find very few CNVs in D. simulans that are geographically differentiated in parallel on both continents, consistent with earlier work suggesting that clinal patterns are weaker in this species. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. Reliability–redundancy allocation problem considering optimal redundancy strategy using parallel genetic algorithm

    International Nuclear Information System (INIS)

    Kim, Heungseob; Kim, Pansoo

    2017-01-01

    To maximize the reliability of a system, the traditional reliability–redundancy allocation problem (RRAP) determines the component reliability and level of redundancy for each subsystem. This paper proposes an advanced RRAP that also considers the optimal redundancy strategy, either active or cold standby. In addition, new examples are presented for it. Furthermore, the exact reliability function for a cold standby redundant subsystem with an imperfect detector/switch is suggested, and is expected to replace the previous approximating model that has been used in most related studies. A parallel genetic algorithm for solving the RRAP as a mixed-integer nonlinear programming model is presented, and its performance is compared with those of previous studies by using numerical examples on three benchmark problems. - Highlights: • Optimal strategy is proposed to solve reliability redundancy allocation problem. • The redundancy strategy uses parallel genetic algorithm. • Improved reliability function for a cold standby subsystem is suggested. • Proposed redundancy strategy enhances the system reliability.
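
    As a hedged illustration of the parallel genetic algorithm component, the sketch below evaluates the fitness of RRAP candidates in parallel worker processes. The chromosome here encodes only redundancy levels under active redundancy; the paper's cold-standby model with an imperfect detector/switch and k-Erlang failure times is not reproduced, and all numbers are illustrative.

```python
import numpy as np
from multiprocessing import Pool

# Toy RRAP chromosome: for each of 3 subsystems, a redundancy level 1..5.
# Component reliabilities and costs are illustrative placeholders.
R_COMP = np.array([0.80, 0.85, 0.90])
COST = np.array([3.0, 4.0, 5.0])
BUDGET = 40.0

def fitness(chrom):
    levels = np.asarray(chrom)
    # active redundancy: subsystem reliability = 1 - (1 - r)^n, series system
    r_sys = np.prod(1.0 - (1.0 - R_COMP) ** levels)
    cost = float(np.dot(COST, levels))
    return r_sys if cost <= BUDGET else r_sys - (cost - BUDGET)   # penalty

def evolve(pop_size=40, gens=50, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.integers(1, 6, size=(pop_size, 3))
    with Pool() as pool:                       # parallel fitness evaluation
        for _ in range(gens):
            fit = np.array(pool.map(fitness, [tuple(c) for c in pop]))
            parents = pop[np.argsort(fit)[::-1][: pop_size // 2]]
            children = np.clip(
                parents + rng.integers(-1, 2, size=parents.shape), 1, 5)
            pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(tuple(c)) for c in pop])]

if __name__ == "__main__":
    print("best redundancy levels:", evolve())
```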

  15. Torque Split Strategy for Parallel Hybrid Electric Vehicles with an Integrated Starter Generator

    OpenAIRE

    Fu, Zhumu; Gao, Aiyun; Wang, Xiaohong; Song, Xiaona

    2014-01-01

    This paper presents a torque split strategy for parallel hybrid electric vehicles with an integrated starter generator (ISG-PHEV) by using fuzzy logic control. By combining the efficiency map and the optimum torque curve of the internal combustion engine (ICE) with the state of charge (SOC) of the batteries, the torque split strategy is designed, which manages the ICE within its peak efficiency region. Taking the quantified ICE torque, the quantified SOC of the batteries, and the quantified I...

  16. A new virtual-flux-vector based droop control strategy for parallel connected inverters in microgrids

    DEFF Research Database (Denmark)

    Hu, Jiefeng; Zhu, Jianguo; Qu, Yanqing

    2013-01-01

The voltage and frequency droop method is commonly used in microgrids to achieve proper autonomous power sharing without relying on intercommunication systems. This paper proposes a new control strategy for parallel connected inverters in microgrid applications by drooping the flux instead of the invert...

  17. Silencing, positive selection and parallel evolution: busy history of primate cytochromes C.

    Science.gov (United States)

    Pierron, Denis; Opazo, Juan C; Heiske, Margit; Papper, Zack; Uddin, Monica; Chand, Gopi; Wildman, Derek E; Romero, Roberto; Goodman, Morris; Grossman, Lawrence I

    2011-01-01

    Cytochrome c (cyt c) participates in two crucial cellular processes, energy production and apoptosis, and unsurprisingly is a highly conserved protein. However, previous studies have reported for the primate lineage (i) loss of the paralogous testis isoform, (ii) an acceleration and then a deceleration of the amino acid replacement rate of the cyt c somatic isoform, and (iii) atypical biochemical behavior of human cyt c. To gain insight into the cause of these major evolutionary events, we have retraced the history of cyt c loci among primates. For testis cyt c, all primate sequences examined carry the same nonsense mutation, which suggests that silencing occurred before the primates diversified. For somatic cyt c, maximum parsimony, maximum likelihood, and Bayesian phylogenetic analyses yielded the same tree topology. The evolutionary analyses show that a fast accumulation of non-synonymous mutations (suggesting positive selection) occurred specifically on the anthropoid lineage root and then continued in parallel on the early catarrhini and platyrrhini stems. Analysis of evolutionary changes using the 3D structure suggests they are focused on the respiratory chain rather than on apoptosis or other cyt c functions. In agreement with previous biochemical studies, our results suggest that silencing of the cyt c testis isoform could be linked with the decrease of primate reproduction rate. Finally, the evolution of cyt c in the two sister anthropoid groups leads us to propose that somatic cyt c evolution may be related both to COX evolution and to the convergent brain and body mass enlargement in these two anthropoid clades.

  18. NEW CO-EVOLUTION STRATEGIES OF THIRD MILLENNIUM; METHODOLOGICAL ASPECT

    Directory of Open Access Journals (Sweden)

    E. K. Bulygo

    2006-01-01

Full Text Available The paper is devoted to an application of the co-evolution methodology to the social space. Principles of instability and non-linearity that are typical for contemporary natural science are used as a theoretical background of a new social methodology. The authors try to prove that the co-evolution strategy has a long pre-history in ancient oriental philosophy and manifests itself in forms of modern culture

  19. New adaptive differencing strategy in the PENTRAN 3-d parallel Sn code

    International Nuclear Information System (INIS)

    Sjoden, G.E.; Haghighat, A.

    1996-01-01

It is known that three-dimensional (3-D) discrete ordinates (Sn) transport problems require an immense amount of storage and computational effort to solve. For this reason, parallel codes that offer a capability to completely decompose the angular, energy, and spatial domains among a distributed network of processors are required. One such code recently developed is PENTRAN, which iteratively solves 3-D multi-group, anisotropic Sn problems on distributed-memory platforms, such as the IBM-SP2. Because large problems typically contain several different material zones with various properties, available differencing schemes should automatically adapt to the transport physics in each material zone. To minimize the memory and message-passing overhead required for massively parallel Sn applications, available differencing schemes in an adaptive strategy should also offer reasonable accuracy and positivity, yet require only the zeroth spatial moment of the transport equation; differencing schemes based on higher spatial moments, in spite of their greater accuracy, require at least twice the amount of storage and communication cost for implementation in a massively parallel transport code. This paper discusses a new adaptive differencing strategy that uses increasingly accurate schemes with low parallel memory and communication overhead. This strategy, implemented in PENTRAN, includes a new scheme, exponential directional averaged (EDA) differencing

  20. Entrepreneurship ecosystem evolution strategy of Saudi Arabia

    Directory of Open Access Journals (Sweden)

    Muhammad Rahatullah Khan

    2016-10-01

Full Text Available In most cases, when a potential start-up founder strikes upon a brilliant business idea, he or she has little knowledge of how to move on from there. Founders lack information on the stakeholders of the entrepreneurship ecosystem who can help and assist start-ups in numerous ways and help them materialize their concepts. Availability of this information also helps the ecosystem stakeholders avoid replication and duplication of effort. Similarly, knowledge of the status quo helps identify opportunities and supports plan development so that a start-up can pursue the right strategy. This paper critically reviews Saudi Arabia's existing initiatives for entrepreneurship growth and identifies the existing stakeholders of entrepreneurship in the country. Their work, and the potential for practicable interventions to further entrepreneurship in line with the country's economic development process, is also examined. The paper benefits from a cross-sectional study of Saudi Arabia that utilized primary and secondary sources to discover the initiatives, understand entrepreneurship growth and then map the national entrepreneurship ecosystem. A number of interviews with CEOs, general managers and other senior executives were carried out to establish the role of the different organizations in entrepreneurship growth, coupled with detailed secondary research from existing resources. The study finds that the ecosystem is expanding swiftly but is still under development and in an infancy stage in which institutions are prospering. The research is based on a country-level analysis. The paper also shows that the Saudi Arabian government has taken a proactive stance in developing the entrepreneurship ecosystem and start-up landscape, and it highlights the transformation of the ecosystem strategy.

  1. Research on Taxi Driver Strategy Game Evolution with Carpooling Detour

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2018-01-01

Full Text Available To address the problem of taxi carpooling detours, this paper studies drivers' strategy choices when carpooling involves a detour. A model of taxi driver strategy evolution with carpooling detours is built based on prospect theory and evolutionary game theory. Stable driver strategies are analyzed under the conditions of a complaint mechanism and of no such mechanism, respectively. The results show that a passenger complaint mechanism can effectively reduce the phenomenon of drivers refusing passengers whose carpooling requires a detour. When the probability of a passenger complaint reaches a certain level, the driver's stable strategy is to take carpooling-detour passengers. Meanwhile, limiting detour distance and easing traffic congestion can decrease the likelihood of refusing passengers. These conclusions provide guidance for formulating taxi policy.

  2. ESA Earth Observation Ground Segment Evolution Strategy

    Science.gov (United States)

    Benveniste, J.; Albani, M.; Laur, H.

    2016-12-01

One of the key elements driving the evolution of EO Ground Segments, in particular in Europe, has been to enable the creation of added value from EO data and products. This requires the ability to constantly adapt and improve the service to a user base expanding far beyond the 'traditional' EO user community of remote sensing specialists. Citizen scientists, the general public, media and educational actors form another user group that is expected to grow. Technological advances, Open Data policies, including those implemented by ESA and the EU, as well as an increasing number of satellites in operations (e.g. Copernicus Sentinels) have led to an enormous increase in available data volumes. At the same time, even with modern network and data handling services, fewer users can afford to bulk-download and consider all potentially relevant data and associated knowledge. The "EO Innovation Europe" concept is being implemented in Europe in coordination between the European Commission, ESA and other European Space Agencies, and industry. This concept is encapsulated in the main ideas of "Bringing the User to the Data" and "Connecting the Users" to complement the traditional one-to-one "data delivery" approach of the past. Both ideas are aiming to better "empower the users" and to create a "sustainable system of interconnected EO Exploitation Platforms", with the objective to enable large scale exploitation of European EO data assets for stimulating innovation and to maximize their impact. These interoperable/interconnected platforms are virtual environments in which the users - individually or collaboratively - have access to the required data sources and processing tools, as opposed to downloading and handling the data 'at home'. EO-Innovation Europe has been structured around three elements: an enabling element (acting as a back office), a stimulating element and an outreach element (acting as a front office). Within the enabling element, a "mutualisation" of efforts

  3. A novel harmonic current sharing control strategy for parallel-connected inverters

    DEFF Research Database (Denmark)

    Guan, Yajuan; Guerrero, Josep M.; Savaghebi, Mehdi

    2017-01-01

A novel control strategy which enables proportional sharing of linear and nonlinear loads among paralleled inverters, together with voltage harmonic suppression, is proposed in this paper. The proposed method is based on the autonomous currents sharing controller (ACSC) instead of conventional power droop control... to provide fast transient response, decoupling control and a large stability margin. The current components at different sequences and orders are decomposed by a multi-second-order generalized integrator-based frequency-locked loop (MSOGI-FLL). A harmonic-orthogonal-virtual-resistances controller (HOVR...) is used to proportionally share current components at different sequences and orders independently among the paralleled inverters. Proportional resonance controllers tuned at selected frequencies are used to suppress voltage harmonics. Simulations based on two 2.2 kW paralleled three-phase inverters...

  4. Parallel evolution of TCP and B-class genes in Commelinaceae flower bilateral symmetry

    Directory of Open Access Journals (Sweden)

    Preston Jill C

    2012-03-01

Full Text Available Background: Flower bilateral symmetry (zygomorphy) has evolved multiple times independently across angiosperms and is correlated with increased pollinator specialization and speciation rates. Functional and expression analyses in distantly related core eudicots and monocots implicate independent recruitment of class II TCP genes in the evolution of flower bilateral symmetry. Furthermore, available evidence suggests that monocot flower bilateral symmetry might also have evolved through changes in B-class homeotic MADS-box gene function. Methods: In order to test the non-exclusive hypotheses that changes in TCP and B-class gene developmental function underlie flower symmetry evolution in the monocot family Commelinaceae, we compared expression patterns of teosinte branched1 (TB1)-like, DEFICIENS (DEF)-like, and GLOBOSA (GLO)-like genes in morphologically distinct bilaterally symmetrical flowers of Commelina communis and Commelina dianthifolia, and radially symmetrical flowers of Tradescantia pallida. Results: Expression data demonstrate that TB1-like genes are asymmetrically expressed in tepals of bilaterally symmetrical Commelina, but not radially symmetrical Tradescantia, flowers. Furthermore, DEF-like genes are expressed in showy inner tepals, staminodes and stamens of all three species, but not in the distinct outer tepal-like ventral inner tepals of C. communis. Conclusions: Together with other studies, these data suggest parallel recruitment of TB1-like genes in the independent evolution of flower bilateral symmetry at early stages of Commelina flower development, and the later-stage homeotic transformation of C. communis inner tepals into outer tepals through the loss of DEF-like gene expression.

  5. Silencing, positive selection and parallel evolution: busy history of primate cytochromes C.

    Directory of Open Access Journals (Sweden)

    Denis Pierron

Full Text Available Cytochrome c (cyt c) participates in two crucial cellular processes, energy production and apoptosis, and unsurprisingly is a highly conserved protein. However, previous studies have reported for the primate lineage (i) loss of the paralogous testis isoform, (ii) an acceleration and then a deceleration of the amino acid replacement rate of the cyt c somatic isoform, and (iii) atypical biochemical behavior of human cyt c. To gain insight into the cause of these major evolutionary events, we have retraced the history of cyt c loci among primates. For testis cyt c, all primate sequences examined carry the same nonsense mutation, which suggests that silencing occurred before the primates diversified. For somatic cyt c, maximum parsimony, maximum likelihood, and Bayesian phylogenetic analyses yielded the same tree topology. The evolutionary analyses show that a fast accumulation of non-synonymous mutations (suggesting positive selection) occurred specifically on the anthropoid lineage root and then continued in parallel on the early catarrhini and platyrrhini stems. Analysis of evolutionary changes using the 3D structure suggests they are focused on the respiratory chain rather than on apoptosis or other cyt c functions. In agreement with previous biochemical studies, our results suggest that silencing of the cyt c testis isoform could be linked with the decrease of primate reproduction rate. Finally, the evolution of cyt c in the two sister anthropoid groups leads us to propose that somatic cyt c evolution may be related both to COX evolution and to the convergent brain and body mass enlargement in these two anthropoid clades.

  6. Development and application of efficient strategies for parallel magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Breuer, F.

    2006-07-01

Virtually all existing MRI applications require both a high spatial and high temporal resolution for optimum detection and classification of the state of disease. The main strategy to meet the increasing demands of advanced diagnostic imaging applications has been the steady improvement of gradient systems, which provide increased gradient strengths and faster switching times. Rapid imaging techniques and the advances in gradient performance have significantly reduced acquisition times from about an hour to several minutes or seconds. In order to further increase imaging speed, much higher gradient strengths and much faster switching times are required which are technically challenging to provide. In addition to significant hardware costs, peripheral neuro-stimulations and the surpassing of admissible acoustic noise levels may occur. Today's whole body gradient systems already operate just below the allowed safety levels. For these reasons, alternative strategies are needed to bypass these limitations. The greatest progress in further increasing imaging speed has been the development of multi-coil arrays and the advent of partially parallel acquisition (PPA) techniques in the late 1990's. Within the last years, parallel imaging methods have become commercially available, and are therefore ready for broad clinical use. The basic feature of parallel imaging is a scan time reduction, applicable to nearly any available MRI method, while maintaining the contrast behavior without requiring higher gradient system performance. PPA operates by allowing an array of receiver surface coils, positioned around the object under investigation, to partially replace time-consuming spatial encoding which normally is performed by switching magnetic field gradients. Using this strategy, spatial resolution can be improved given a specific imaging time, or scan times can be reduced at a given spatial resolution. Furthermore, in some cases, PPA can even be used to reduce image

  7. Development and application of efficient strategies for parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Breuer, F.

    2006-01-01

Virtually all existing MRI applications require both a high spatial and high temporal resolution for optimum detection and classification of the state of disease. The main strategy to meet the increasing demands of advanced diagnostic imaging applications has been the steady improvement of gradient systems, which provide increased gradient strengths and faster switching times. Rapid imaging techniques and the advances in gradient performance have significantly reduced acquisition times from about an hour to several minutes or seconds. In order to further increase imaging speed, much higher gradient strengths and much faster switching times are required which are technically challenging to provide. In addition to significant hardware costs, peripheral neuro-stimulations and the surpassing of admissible acoustic noise levels may occur. Today's whole body gradient systems already operate just below the allowed safety levels. For these reasons, alternative strategies are needed to bypass these limitations. The greatest progress in further increasing imaging speed has been the development of multi-coil arrays and the advent of partially parallel acquisition (PPA) techniques in the late 1990's. Within the last years, parallel imaging methods have become commercially available, and are therefore ready for broad clinical use. The basic feature of parallel imaging is a scan time reduction, applicable to nearly any available MRI method, while maintaining the contrast behavior without requiring higher gradient system performance. PPA operates by allowing an array of receiver surface coils, positioned around the object under investigation, to partially replace time-consuming spatial encoding which normally is performed by switching magnetic field gradients. Using this strategy, spatial resolution can be improved given a specific imaging time, or scan times can be reduced at a given spatial resolution. Furthermore, in some cases, PPA can even be used to reduce image artifacts

  8. Reliability optimization of series–parallel systems with mixed redundancy strategy in subsystems

    International Nuclear Information System (INIS)

    Abouei Ardakan, Mostafa; Zeinal Hamadani, Ali

    2014-01-01

Traditionally, in the redundancy allocation problem (RAP), it is assumed that the redundant components are used based on predefined active or standby strategies. Recently, some studies have considered the situation in which both active and standby strategies can be used in a specific system. However, these studies assume that the redundancy strategy for each subsystem can be either active or standby and determine the best strategy for these subsystems by using a proper mathematical model. As an extension to this assumption, a novel strategy, which is a combination of the traditional active and standby strategies, is introduced. The new strategy is called the mixed strategy and uses both active and cold-standby strategies in one subsystem simultaneously. Therefore, the problem is to determine the component type, redundancy level, and number of active and cold-standby units for each subsystem in order to maximize the system reliability. To have a more practical model, the problem is formulated with imperfect switching of cold-standby redundant components and a k-Erlang time-to-failure (TTF) distribution. As the optimization of the RAP belongs to the NP-hard class of problems, a genetic algorithm (GA) is developed. The new strategy and the proposed GA are implemented on a well-known test problem in the literature, which leads to interesting results. - Highlights: • In this paper the redundancy allocation problem (RAP) for a series–parallel system is considered. • Traditionally there are two main strategies for redundant components, namely active and standby. • In this paper a new redundancy strategy, called the "Mixed" redundancy strategy, is introduced. • Computational experiments demonstrate that implementing the new strategy leads to interesting results

  9. Automatic Clustering Using FSDE-Forced Strategy Differential Evolution

    Science.gov (United States)

    Yasid, A.

    2018-01-01

Clustering analysis is important in data mining for unsupervised data because no adequate prior knowledge is available. One of the important tasks is defining the number of clusters without user involvement, which is known as automatic clustering. This study aims to determine the cluster number automatically utilizing forced strategy differential evolution (AC-FSDE). Two mutation parameters, namely a constant parameter and a variable parameter, are employed to boost differential evolution performance. Four well-known benchmark datasets were used to evaluate the algorithm. Moreover, the result is compared with other state-of-the-art automatic clustering methods. The experiment results evidence that AC-FSDE is better than or competitive with other existing automatic clustering algorithms.
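
    As a hedged sketch of the two mutation-parameter schemes mentioned above, the following differential evolution loop can run with either a constant scale factor F or a per-generation variable one. The sphere objective stands in for a clustering validity index (for instance Davies-Bouldin); the cluster-activation encoding of AC-FSDE is not reproduced, and all settings are illustrative.

```python
import numpy as np

def objective(x):
    # stand-in for a clustering validity index to be minimised
    return float(np.sum(x ** 2))

def variable_F(gen, max_gen, f_min=0.3, f_max=0.9):
    return f_max - (f_max - f_min) * gen / max_gen   # decays over generations

def de(pop_size=20, dim=8, gens=100, cr=0.9, seed=1, use_variable_F=True):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    for gen in range(gens):
        F = variable_F(gen, gens) if use_variable_F else 0.5
        for i in range(pop_size):
            a, b, c = pop[rng.choice(
                [j for j in range(pop_size) if j != i], size=3, replace=False)]
            mutant = a + F * (b - c)                 # DE/rand/1 mutation
            cross = rng.uniform(size=dim) < cr
            cross[rng.integers(dim)] = True          # keep at least one gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = objective(trial)
            if f_trial <= fit[i]:                    # greedy selection
                pop[i], fit[i] = trial, f_trial
    return fit.min()

print("constant F :", de(use_variable_F=False))
print("variable F :", de(use_variable_F=True))
```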

  10. Evolution of Parallel Spindles Like genes in plants and highlight of unique domain architecture#

    Directory of Open Access Journals (Sweden)

    Consiglio Federica M

    2011-03-01

Full Text Available Background: Polyploidy has long been recognized as playing an important role in plant evolution. In flowering plants, the major route of polyploidization is suggested to be sexual, through gametes with the somatic chromosome number (2n). The Parallel Spindle1 gene in Arabidopsis thaliana (AtPS1) was recently demonstrated to control spindle orientation in the 2nd division of meiosis and, when mutated, to induce 2n pollen. Interestingly, AtPS1 encodes a protein with an FHA domain and a PINc domain putatively involved in RNA decay (i.e. Nonsense Mediated mRNA Decay). In potato, 2n pollen depending on parallel spindles was described a long time ago but the responsible gene has never been isolated. The knowledge derived from AtPS1 as well as the availability of genome sequences makes it possible to isolate potato PSLike (PSL) genes and to highlight the evolution of the PSL family in plants. Results: Our work, leading to the first characterization of PSLs in potato, showed a greater PSL complexity in this species with respect to Arabidopsis thaliana. Indeed, a genomic PSL locus and seven cDNAs affected by alternative splicing have been cloned. In addition, the occurrence of at least two other PSL loci in potato was suggested by the sequence comparison of alternatively spliced transcripts. Phylogenetic analysis on 20 Viridaeplantae showed the wide distribution of PSLs throughout the species and the occurrence of multiple copies only in potato and soybean. The analysis of PSLFHA and PSLPINc domains evidenced that, in terms of secondary structure, a greater degree of variability occurred in the PINc domain with respect to FHA. In terms of specific active sites, both domains showed diversification among plant species that could be related to a functional diversification among PSL genes. In addition, some specific active sites were strongly conserved among plants, as supported by sequence alignment and by evidence of negative selection evaluated as the difference between non-synonymous and

  11. PARALLEL EVOLUTION OF QUASI-SEPARATRIX LAYERS AND ACTIVE REGION UPFLOWS

    Energy Technology Data Exchange (ETDEWEB)

    Mandrini, C. H.; Cristiani, G. D.; Nuevo, F. A.; Vásquez, A. M. [Instituto de Astronomía y Física del Espacio (IAFE), UBA-CONICET, CC. 67, Suc. 28 Buenos Aires, 1428 (Argentina); Baker, D.; Driel-Gesztelyi, L. van [UCL-Mullard Space Science Laboratory, Holmbury St. Mary, Dorking, Surrey, RH5 6NT (United Kingdom); Démoulin, P.; Pick, M. [Observatoire de Paris, LESIA, UMR 8109 (CNRS), F-92195 Meudon Principal Cedex (France); Vargas Domínguez, S. [Observatorio Astronómico Nacional, Universidad Nacional de Colombia, Bogotá (Colombia)

    2015-08-10

Persistent plasma upflows were observed with Hinode’s EUV Imaging Spectrometer (EIS) at the edges of active region (AR) 10978 as it crossed the solar disk. We analyze the evolution of the photospheric magnetic and velocity fields of the AR, model its coronal magnetic field, and compute the location of magnetic null-points and quasi-separatrix layers (QSLs), searching for the origin of the EIS upflows. Magnetic reconnection at the computed null points cannot explain all of the observed EIS upflow regions. However, EIS upflows and QSLs are found to evolve in parallel, both temporally and spatially. Sections of two sets of QSLs, called outer and inner, are found to be associated with EIS upflow streams having different characteristics. The reconnection process in the outer QSLs is forced by a large-scale photospheric flow pattern, which is present in the AR for several days. We propose a scenario in which upflows are observed, provided that a large enough asymmetry in plasma pressure exists between the pre-reconnection loops and lasts as long as a photospheric forcing is at work. A similar mechanism operates in the inner QSLs; in this case, it is forced by the emergence and evolution of the bipoles between the two main AR polarities. Our findings provide strong support for the results from previous individual case studies investigating the role of magnetic reconnection at QSLs as the origin of the upflowing plasma. Furthermore, we propose that persistent reconnection along QSLs does not only drive the EIS upflows, but is also responsible for the continuous metric radio noise-storm observed in AR 10978 along its disk transit by the Nançay Radio Heliograph.

  12. Parallel or convergent evolution in human population genomic data revealed by genotype networks.

    Science.gov (United States)

    R Vahdati, Ali; Wagner, Andreas

    2016-08-02

    Genotype networks are representations of genetic variation data that are complementary to phylogenetic trees. A genotype network is a graph whose nodes are genotypes (DNA sequences) with the same broadly defined phenotype. Two nodes are connected if they differ in some minimal way, e.g., in a single nucleotide. We analyze human genome variation data from the 1,000 genomes project, and construct haploid genotype (haplotype) networks for 12,235 protein coding genes. The structure of these networks varies widely among genes, indicating different patterns of variation despite a shared evolutionary history. We focus on those genes whose genotype networks show many cycles, which can indicate homoplasy, i.e., parallel or convergent evolution, on the sequence level. For 42 genes, the observed number of cycles is so large that it cannot be explained by either chance homoplasy or recombination. When analyzing possible explanations, we discovered evidence for positive selection in 21 of these genes and, in addition, a potential role for constrained variation and purifying selection. Balancing selection plays at most a small role. The 42 genes with excess cycles are enriched in functions related to immunity and response to pathogens. Genotype networks are representations of genetic variation data that can help understand unusual patterns of genomic variation.

  13. A software for parameter optimization with Differential Evolution Entirely Parallel method

    Directory of Open Access Journals (Sweden)

    Konstantin Kozlov

    2016-08-01

Full Text Available Summary. The Differential Evolution Entirely Parallel (DEEP) package is software for finding unknown real and integer parameters in dynamical models of biological processes by minimizing one or even several objective functions that measure the deviation of the model solution from data. Numerical solutions provided by the most efficient global optimization methods are often problem-specific and cannot be easily adapted to other tasks. In contrast, DEEP allows a user to describe both the mathematical model and the objective function in any programming language, such as R, Octave or Python, among others. Being implemented in C, DEEP demonstrates performance as good as the top three methods from the CEC-2014 (Competition on Evolutionary Computation) benchmark and was successfully applied to several biological problems. Availability. The DEEP method is open-source, free software distributed under the terms of the GPL licence, version 3. The sources are available at http://deepmethod.sourceforge.net/ and binary packages for Fedora GNU/Linux are provided for the RPM package manager at https://build.opensuse.org/project/repositories/home:mackoel:compbio.
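
    In the spirit of the language-agnostic objective described above, the hedged sketch below scores one DE generation in parallel by handing each candidate parameter vector to an external script; the script name "model.R", the Rscript invocation and the population sizes are illustrative assumptions, not part of DEEP itself.

```python
import subprocess
import concurrent.futures
import numpy as np

# Each candidate parameter vector is passed on the command line to an external
# script (here a hypothetical "model.R" run with Rscript), and the score it
# prints is read back.  Candidates of one generation are scored concurrently.
def external_objective(params):
    cmd = ["Rscript", "model.R"] + [str(float(p)) for p in params]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

def score_generation(population, max_workers=8):
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as ex:
        return list(ex.map(external_objective, population))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    population = rng.uniform(0.0, 1.0, size=(16, 5))   # 16 candidates, 5 params
    # scores = score_generation(population)   # requires Rscript and model.R
    # ...DE mutation, crossover and selection would then use these scores
```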

  14. Engine-start Control Strategy of P2 Parallel Hybrid Electric Vehicle

    Science.gov (United States)

    Xiangyang, Xu; Siqi, Zhao; Peng, Dong

    2017-12-01

A smooth and fast engine-start process is important to parallel hybrid electric vehicles with an electric motor mounted in front of the transmission. However, there are some challenges during engine-start control. Firstly, the electric motor must simultaneously provide a stable driving torque to ensure drivability and a compensating torque to drag the engine before ignition. Secondly, engine-start time is a trade-off control objective because both fast start and smooth start have to be considered. To solve these problems, this paper first analyzed the resistance of the engine-start process and established a physical model in MATLAB/Simulink. Then a model-based coordinated control strategy among the engine, motor and clutch was developed. Two basic control strategies, for the fast-start and smooth-start processes, were studied. Simulation results showed that the control objectives were realized by applying the given control strategies, which can meet different requirements from the driver.

  15. Kinematic Identification of Parallel Mechanisms by a Divide and Conquer Strategy

    DEFF Research Database (Denmark)

    Durango, Sebastian; Restrepo, David; Ruiz, Oscar

    2010-01-01

    using the inverse calibration method. The identification poses are selected optimizing the observability of the kinematic parameters from a Jacobian identification matrix. With respect to traditional identification methods the main advantages of the proposed Divide and Conquer kinematic identification...... strategy are: (i) reduction of the kinematic identification computational costs, (ii) improvement of the numerical efficiency of the kinematic identification algorithm and, (iii) improvement of the kinematic identification results. The contributions of the paper are: (i) The formalization of the inverse...... calibration method as the Divide and Conquer strategy for the kinematic identification of parallel symmetrical mechanisms and, (ii) a new kinematic identification protocol based on the Divide and Conquer strategy. As an application of the proposed kinematic identification protocol the identification...

  16. A Novel Reconfiguration Strategy of a Delta-Type Parallel Manipulator

    Directory of Open Access Journals (Sweden)

    Albert Lester Balmaceda-Santamaría

    2016-02-01

    Full Text Available This work introduces a novel reconfiguration strategy for a Delta-type parallel robot. The robot at hand, whose patent is pending, is equipped with an intermediate mechanism that allows for modifying the operational Cartesian workspace. Furthermore, singularities of the robot may be ameliorated owing to the inherent kinematic redundancy introduced by four actuable kinematic joints. The velocity and acceleration analyses of the parallel manipulator are carried out by resorting to reciprocal-screw theory. Finally, the manipulability of the new robot is investigated based on the computation of the condition number associated with the active Jacobian matrix, a well-known procedure. The results obtained show improved performance of the robot introduced when compared with results generated for another Delta-type robot.

  17. Proxy-equation paradigm: A strategy for massively parallel asynchronous computations

    Science.gov (United States)

    Mittal, Ankita; Girimaji, Sharath

    2017-09-01

Massively parallel simulations of transport equation systems call for a paradigm change in algorithm development to achieve efficient scalability. Traditional approaches require time synchronization of processing elements (PEs), which severely restricts scalability. Relaxing the synchronization requirement introduces error and slows down convergence. In this paper, we propose and develop a novel "proxy equation" concept for a general transport equation that (i) tolerates asynchrony with minimal added error, (ii) preserves convergence order, and thus (iii) is expected to scale efficiently on massively parallel machines. The central idea is to modify a priori the transport equation at the PE boundaries to offset asynchrony errors. Proof-of-concept computations are performed using a one-dimensional advection (convection) diffusion equation. The results demonstrate the promise and advantages of the present strategy.
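
    For context, the sketch below sets up the kind of 1-D advection-diffusion test problem mentioned above, split into two sub-domains whose interface values are allowed to lag by a few time steps to mimic relaxed synchronization between processing elements. The a priori proxy-equation correction itself is not implemented; the grid sizes, the lag and the boundary treatment are illustrative assumptions.

```python
import numpy as np

# 1-D advection-diffusion u_t + c u_x = nu u_xx, explicit central scheme,
# solved on two sub-domains.  The interface (halo) values may be stale by
# `delay` steps, which is where asynchrony error enters the solution.
c, nu = 1.0, 0.05
nx, nt, dx, dt = 200, 400, 1.0 / 200, 0.5e-4

x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.sin(2 * np.pi * x)                       # periodic initial condition

def step(u_old, left_ghost, right_ghost):
    u_ext = np.concatenate(([left_ghost], u_old, [right_ghost]))
    adv = -c * (u_ext[2:] - u_ext[:-2]) / (2 * dx)
    dif = nu * (u_ext[2:] - 2 * u_ext[1:-1] + u_ext[:-2]) / dx**2
    return u_old + dt * (adv + dif)

half = nx // 2
delay = 3                                       # interface values lag 3 steps
history = [u.copy()]
for n in range(nt):
    lagged = history[max(0, len(history) - 1 - delay)]
    left = step(u[:half], u[-1], lagged[half])          # stale interface value
    right = step(u[half:], lagged[half - 1], u[0])
    u = np.concatenate([left, right])
    history.append(u.copy())

print("solution norm after", nt, "steps:", np.linalg.norm(u))
```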

  18. Evolution of Strategies for "Prisoner's Dilemma" using Genetic Algorithm

    OpenAIRE

    Heinz, Jan

    2010-01-01

    The subject of this thesis is the software application "Prisoner's Dilemma". The program creates a population of players of "Prisoner's Dilemma", has them play against each other, and - based on their results - performs an evolution of their strategies by means of a genetic algorithm (selection, mutation, and crossover). The program was written in Microsoft Visual Studio, in the C++ programming language, and its interface makes use of the .NET Framework. The thesis includes examples of strate...

  19. Map-Based Power-Split Strategy Design with Predictive Performance Optimization for Parallel Hybrid Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Jixiang Fan

    2015-09-01

    Full Text Available In this paper, a map-based optimal energy management strategy is proposed to improve the consumption economy of a plug-in parallel hybrid electric vehicle. In the design of the maps, which provide both the torque split between engine and motor and the gear shift, not only the current vehicle speed and power demand, but also the optimality based on the predicted trajectory of vehicle dynamics are considered. To seek the optimality, the equivalent consumption, which trades off the fuel and electricity usages, is chosen as the cost function. Moreover, in order to decrease the model errors in the process of optimization conducted in the discrete time domain, the variational integrator is employed to calculate the evolution of the vehicle dynamics. To evaluate the proposed energy management strategy, the simulation results performed on a professional GT-Suit simulator are demonstrated and the comparison to a real-time optimization method is also given to show the advantage of the proposed off-line optimization approach.
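
    The record above describes an offline-optimized map that is interpolated online; the sketch below is a hedged illustration of such a lookup, with a made-up grid of engine torque shares over (vehicle speed, power demand). The grid values, units and the torque_split helper are assumptions, not the maps computed in the paper.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Offline optimisation would fill this table; online, the controller only
# interpolates in it.  Values below are illustrative placeholders.
speed_grid = np.array([0., 20., 40., 60., 80., 100., 120.])      # km/h
power_grid = np.array([0., 10., 20., 30., 40., 50.])             # kW demand
# engine share of the demanded torque (0 = pure electric, 1 = pure engine)
engine_share = np.clip(
    0.2 + 0.006 * speed_grid[:, None] + 0.01 * power_grid[None, :], 0.0, 1.0)

split_map = RegularGridInterpolator((speed_grid, power_grid), engine_share,
                                    bounds_error=False, fill_value=None)

def torque_split(speed_kmh, power_kw, demand_torque_nm):
    share = split_map([[speed_kmh, power_kw]]).item()
    t_engine = share * demand_torque_nm
    return t_engine, demand_torque_nm - t_engine          # (engine, motor)

print(torque_split(speed_kmh=55.0, power_kw=25.0, demand_torque_nm=180.0))
```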

  20. Battery parameterisation based on differential evolution via a boundary evolution strategy

    DEFF Research Database (Denmark)

    Yang, Guangya

    2013-01-01

    the advances of evolutionary algorithms (EAs). Differential evolution (DE) is selected and modified to parameterise an equivalent circuit model of lithium-ion batteries. A boundary evolution strategy (BES) is developed and incorporated into the DE to update the parameter boundaries during the parameterisation......, as the equivalent circuit model is an abstract map of the battery electric characteristics, the determination of the possible ranges of parameters can be a challenging task. In this paper, an efficient yet easy to implement method is proposed to parameterise the equivalent circuit model of batteries utilising...
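
    As a hedged illustration of the boundary evolution idea described above, the sketch below fits a first-order RC equivalent circuit to a synthetic voltage response with differential evolution, re-centring and shrinking the parameter bounds around the best candidate after each generation. The circuit values, the shrink factor and the DE settings are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

TRUE = np.array([0.01, 0.02, 2000.0])          # R0 [ohm], R1 [ohm], C1 [F]
I_LOAD, DT, N = 10.0, 1.0, 120                 # constant discharge current

def simulate(theta):
    r0, r1, c1 = theta
    v1 = np.zeros(N)
    for k in range(1, N):                       # explicit update of RC branch
        v1[k] = v1[k-1] + DT * (I_LOAD / c1 - v1[k-1] / (r1 * c1))
    return I_LOAD * r0 + v1                     # total over-potential drop

DATA = simulate(TRUE) + np.random.default_rng(0).normal(0, 1e-4, N)

def cost(theta):
    return float(np.mean((simulate(theta) - DATA) ** 2))

rng = np.random.default_rng(1)
lo = np.array([1e-3, 1e-3, 100.0])
hi = np.array([0.1, 0.1, 10000.0])
pop = rng.uniform(lo, hi, size=(30, 3))
fit = np.array([cost(p) for p in pop])
for gen in range(150):
    for i in range(len(pop)):
        a, b, c = pop[rng.choice([j for j in range(len(pop)) if j != i],
                                 size=3, replace=False)]
        trial = np.clip(a + 0.7 * (b - c), lo, hi)
        trial = np.where(rng.uniform(size=3) < 0.9, trial, pop[i])
        ct = cost(trial)
        if ct < fit[i]:
            pop[i], fit[i] = trial, ct
    best = pop[np.argmin(fit)]
    half_width = 0.98 * (hi - lo) / 2           # boundary evolution: shrink and
    lo = np.maximum(best - half_width, 1e-6)    # re-centre bounds on the best
    hi = best + half_width

print("estimated [R0, R1, C1]:", pop[np.argmin(fit)])
```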

  1. Decomposition and parallelization strategies for solving large-scale MDO problems

    Energy Technology Data Exchange (ETDEWEB)

    Grauer, M.; Eschenauer, H.A. [Research Center for Multidisciplinary Analyses and Applied Structural Optimization, FOMAAS, Univ. of Siegen (Germany)

    2007-07-01

During previous years, structural optimization has been recognized as a useful tool within the disciplines of engineering and economics. However, the optimization of large-scale systems or structures is impeded by an immense solution effort. This was the reason to start a joint research and development (R and D) project between the Institute of Mechanics and Control Engineering and the Information and Decision Sciences Institute within the Research Center for Multidisciplinary Analyses and Applied Structural Optimization (FOMAAS) on cluster computing for parallel and distributed solution of multidisciplinary optimization (MDO) problems based on the OpTiX-Workbench. Here the focus of attention will be put on coarse-grained parallelization and its implementation on clusters of workstations. A further point of emphasis was laid on the development of a parallel decomposition strategy called PARDEC, for the solution of very complex optimization problems which cannot be solved efficiently by sequential integrated optimization. The use of the OpTiX-Workbench together with the FEM ground water simulation system FEFLOW is shown for a special water management problem. (orig.)

  2. Risk evaluation mitigation strategies: the evolution of risk management policy.

    Science.gov (United States)

    Hollingsworth, Kristen; Toscani, Michael

    2013-04-01

    The United States Food and Drug Administration (FDA) has the primary regulatory responsibility to ensure that medications are safe and effective both prior to drug approval and while the medication is being actively marketed by manufacturers. The responsibility for safe medications prior to marketing was signed into law in 1938 under the Federal Food, Drug, and Cosmetic Act; however, a significant risk management evolution has taken place since 1938. Additional federal rules, entitled the Food and Drug Administration Amendments Act, were established in 2007 and extended the government's oversight through the addition of a Risk Evaluation and Mitigation Strategy (REMS) for certain drugs. REMS is a mandated strategy to manage a known or potentially serious risk associated with a medication or biological product. Reasons for this extension of oversight were driven primarily by the FDA's movement to ensure that patients and providers are better informed of drug therapies and their specific benefits and risks prior to initiation. This article provides an historical perspective of the evolution of medication risk management policy and includes a review of REMS programs, an assessment of the positive and negative aspects of REMS, and provides suggestions for planning and measuring outcomes. In particular, this publication presents an overview of the evolution of the REMS program and its implications.

  3. Evolution strategy based optimal chiller loading for saving energy

    International Nuclear Information System (INIS)

    Chang, Y.-C.; Lee, C.-Y.; Chen, C.-R.; Chou, C.-J.; Chen, W.-H.; Chen, W.-H.

    2009-01-01

This study employs an evolution strategy (ES) to solve the optimal chiller loading (OCL) problem. ES overcomes the drawback that the Lagrangian method is not suitable for solving OCL when the power consumption models, i.e. the kW-PLR (partial load ratio) curves, include both convex and concave functions. The complicated evolutionary process of the genetic algorithm (GA) method for solving OCL can also be simplified by the ES method. This study uses the PLR of each chiller as the variable to be solved for a decoupled air-conditioning system. Analysis and comparison of the case study show that this method not only avoids the problems of the Lagrangian and GA methods, but also produces highly accurate results within a short time. It can readily be applied to the operation of air-conditioning systems.
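
    As a hedged sketch of how an evolution strategy can handle OCL, the following (mu + lambda) ES searches over each chiller's part-load ratio so that delivered cooling matches demand at minimum total power. The capacities, the quadratic kW-PLR curves and the penalty weight are illustrative assumptions, and on/off scheduling of chillers is not modelled.

```python
import numpy as np

CAP = np.array([500.0, 500.0, 800.0])                  # chiller capacities [RT]
KW = [lambda p: 100 + 150*p + 60*p**2,                 # kW as function of PLR
      lambda p: 120 + 130*p + 80*p**2,
      lambda p:  90 + 260*p - 40*p**2]                 # note the concave term
DEMAND = 1200.0                                        # cooling load [RT]

def power(plr):
    return sum(f(p) for f, p in zip(KW, plr))

def fitness(plr):
    # penalise mismatch between delivered cooling and the demanded load
    return power(plr) + 50.0 * abs(float(np.dot(CAP, plr)) - DEMAND)

rng = np.random.default_rng(0)
mu, lam, sigma = 10, 40, 0.15
parents = rng.uniform(0.3, 1.0, size=(mu, 3))
for gen in range(200):
    idx = rng.integers(mu, size=lam)                   # pick random parents
    offspring = np.clip(parents[idx] + rng.normal(0, sigma, (lam, 3)), 0.3, 1.0)
    pool = np.vstack([parents, offspring])
    parents = pool[np.argsort([fitness(p) for p in pool])[:mu]]
    sigma *= 0.99                                      # simple step-size decay

best = parents[0]
print("PLRs:", np.round(best, 3), "power [kW]:", round(power(best), 1))
```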

  4. A general parallelization strategy for random path based geostatistical simulation methods

    Science.gov (United States)

    Mariethoz, Grégoire

    2010-07-01

The size of simulation grids used for numerical models has increased by many orders of magnitude in the past years, and this trend is likely to continue. Efficient pixel-based geostatistical simulation algorithms have been developed, but for very large grids and complex spatial models, the computational burden remains heavy. As cluster computers become widely available, using parallel strategies is a natural step for increasing the usable grid size and the complexity of the models. These strategies must profit from the possibilities offered by machines with a large number of processors. On such machines, the bottleneck is often the communication time between processors. We present a strategy distributing grid nodes among all available processors while minimizing communication and latency times. It consists in centralizing the simulation on a master processor that calls other slave processors as if they were functions simulating one node every time. The key is to decouple the sending and the receiving operations to avoid synchronization. Centralization allows having a conflict management system ensuring that nodes being simulated simultaneously do not interfere in terms of neighborhood. The strategy is computationally efficient and is versatile enough to be applicable to all random path based simulation methods.
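
    The following hedged sketch mirrors the master/worker layout described above: a master walks a random path and hands one node at a time to a worker, while a simple 1-D distance check keeps nodes with overlapping neighbourhoods from being simulated at the same time. The placeholder simulate_node function, the neighbourhood radius and the 1-D grid are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor, FIRST_COMPLETED, wait

NEIGH = 5                                     # neighbourhood radius (grid cells)

def simulate_node(node):
    # placeholder for drawing a value conditional on neighbouring data
    rng = np.random.default_rng(node)
    return node, float(rng.normal())

def run(n_nodes=200, n_workers=4, seed=0):
    rng = np.random.default_rng(seed)
    path = list(rng.permutation(n_nodes))     # random simulation path
    values, in_flight = {}, {}                # node -> value, future -> node
    with ProcessPoolExecutor(max_workers=n_workers) as ex:
        while path or in_flight:
            # submit every path node that does not conflict with running ones
            for node in list(path):
                if all(abs(node - busy) > NEIGH for busy in in_flight.values()):
                    in_flight[ex.submit(simulate_node, node)] = node
                    path.remove(node)
            done, _ = wait(in_flight, return_when=FIRST_COMPLETED)
            for fut in done:
                node, val = fut.result()
                values[node] = val
                del in_flight[fut]
    return values

if __name__ == "__main__":
    print("simulated", len(run()), "nodes")
```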

  5. Multilevel parallel strategy on Monte Carlo particle transport for the large-scale full-core pin-by-pin simulations

    International Nuclear Information System (INIS)

    Zhang, B.; Li, G.; Wang, W.; Shangguan, D.; Deng, L.

    2015-01-01

    This paper introduces the multilevel hybrid parallelism strategy of the JCOGIN infrastructure for Monte Carlo particle transport in large-scale full-core pin-by-pin simulations. Particle parallelism, domain decomposition parallelism and MPI/OpenMP parallelism are designed and implemented. In testing, JMCT demonstrates the parallel scalability of JCOGIN, reaching a parallel efficiency of 80% on 120,000 cores for the pin-by-pin computation of the BEAVRS benchmark. (author)
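
    The abstract does not give implementation details, but the particle-parallel level of such a hybrid scheme can be sketched with mpi4py as below: each rank tracks an equal share of histories and the tallies are combined by a reduction. The "physics" is a stand-in, and the domain-decomposition and OpenMP thread levels of the JCOGIN strategy are not shown.

    # run e.g. with: mpiexec -n 4 python particle_parallel.py
    from mpi4py import MPI
    import random

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    TOTAL_HISTORIES = 1_000_000
    local_histories = TOTAL_HISTORIES // size

    random.seed(rank)                        # crude per-rank stream; real codes use proper RNG splitting
    local_absorbed = 0
    for _ in range(local_histories):
        if random.random() < 0.3:            # stand-in physics: absorption with probability 0.3
            local_absorbed += 1

    absorbed = comm.reduce(local_absorbed, op=MPI.SUM, root=0)
    if rank == 0:
        print("absorption fraction:", absorbed / (local_histories * size))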

  6. The evolution of intellectual property strategy in innovation ecosystems

    DEFF Research Database (Denmark)

    Holgersson, Marcus; Granstrand, Ove; Bogers, Marcel

    2017-01-01

    In this article, we attempt to extend and nuance the debate on intellectual property (IP) strategy, appropriation, and open innovation in dynamic and systemic innovation contexts. We present the case of four generations of mobile telecommunications systems (covering the period 1980-2015), and describe and analyze the co-evolution of strategic IP management and innovation ecosystems. Throughout this development, technologies and technological relationships were governed with different and shifting degrees of formality. Simultaneously, firms differentiated technology accessibility across actors...

  7. A distributed parallel genetic algorithm of placement strategy for virtual machines deployment on cloud platform.

    Science.gov (United States)

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

    The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost saving for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) placement strategy for virtual machine deployment on the cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed manner on several selected physical hosts. Then it continues to execute the genetic algorithm of the second stage with solutions obtained from the first stage as the initial population. The solution calculated by the genetic algorithm of the second stage is the final solution of the proposed approach. The experimental results show that the proposed placement strategy of VM deployment can ensure QoS for users and it is more effective and more energy efficient than other placement strategies on the cloud platform.
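
    A much simplified sketch of the two-stage idea is given below: several GA "islands" run in parallel (stage one) and their best placements seed a final GA run (stage two). The VM demands, host capacities and the active-host/overload fitness model are assumptions made for illustration; they are not the QoS and energy cost model of the paper.

    import random
    from multiprocessing import Pool

    _rnd = random.Random(1)
    VM_DEMAND = [_rnd.randint(1, 8) for _ in range(30)]    # CPU units per VM, assumed
    HOST_CAP, N_HOSTS = 16, 15                              # identical hosts, assumed

    def fitness(placement):
        load = [0] * N_HOSTS
        for vm, host in enumerate(placement):
            load[host] += VM_DEMAND[vm]
        active = sum(1 for l in load if l > 0)
        overload = sum(max(0, l - HOST_CAP) for l in load)
        return active + 100 * overload                      # fewer active hosts, no overload

    def ga(seed, generations=200, pop_size=40, init=None):
        rnd = random.Random(seed)
        pop = init or [[rnd.randrange(N_HOSTS) for _ in VM_DEMAND] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            elite = pop[:pop_size // 2]
            children = []
            while len(children) < pop_size - len(elite):
                a, b = rnd.sample(elite, 2)
                cut = rnd.randrange(len(VM_DEMAND))
                child = a[:cut] + b[cut:]                    # one-point crossover
                if rnd.random() < 0.2:                       # mutation: move one VM
                    child[rnd.randrange(len(child))] = rnd.randrange(N_HOSTS)
                children.append(child)
            pop = elite + children
        return min(pop, key=fitness)

    if __name__ == "__main__":
        with Pool(4) as pool:                                # stage one: independent islands
            seeds = pool.map(ga, range(4))
        stage2_init = [list(s) for s in seeds for _ in range(10)]   # 40 seeded individuals
        best = ga(99, init=stage2_init)                      # stage two: seeded GA
        print("best fitness:", fitness(best))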

  8. A Distributed Parallel Genetic Algorithm of Placement Strategy for Virtual Machines Deployment on Cloud Platform

    Directory of Open Access Journals (Sweden)

    Yu-Shuang Dong

    2014-01-01

    Full Text Available The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost saving for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) placement strategy for virtual machine deployment on the cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed manner on several selected physical hosts. Then it continues to execute the genetic algorithm of the second stage with solutions obtained from the first stage as the initial population. The solution calculated by the genetic algorithm of the second stage is the final solution of the proposed approach. The experimental results show that the proposed placement strategy of VM deployment can ensure QoS for users and it is more effective and more energy efficient than other placement strategies on the cloud platform.

  9. Torque Split Strategy for Parallel Hybrid Electric Vehicles with an Integrated Starter Generator

    Directory of Open Access Journals (Sweden)

    Zhumu Fu

    2014-01-01

    Full Text Available This paper presents a torque split strategy for parallel hybrid electric vehicles with an integrated starter generator (ISG-PHEV) by using fuzzy logic control. By combining the efficiency map and the optimum torque curve of the internal combustion engine (ICE) with the state of charge (SOC) of the batteries, the torque split strategy is designed, which manages the ICE within its peak efficiency region. Taking the quantified ICE torque, the quantified SOC of the batteries, and the quantified ICE speed as inputs, and regarding the output torque demanded on the ICE as an output, a fuzzy logic controller (FLC) with relevant fuzzy rules has been developed to determine the optimal torque distribution among the ICE, the ISG, and the electric motor/generator (EMG) effectively. The simulation results reveal that, compared with the conventional torque control strategy which uses a rule-based controller (RBC) in different driving cycles, the proposed FLC improves the fuel economy of the ISG-PHEV, increases the efficiency of the ICE, and maintains the battery SOC within its operating range more effectively.
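
    The flavour of such a fuzzy torque-split controller can be sketched as below: SOC and normalised torque demand are fuzzified with triangular sets and a small rule base yields the ICE share of the demand by weighted-average defuzzification. The membership breakpoints, rule base and maximum torque are illustrative assumptions, not the calibrated FLC of the paper.

    def tri(x, a, b, c):
        """Triangular membership with peak at b and feet at a and c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def torque_split(soc, torque_demand, t_max=250.0):
        soc_low = tri(soc, -0.2, 0.2, 0.5)        # battery state-of-charge sets (assumed)
        soc_mid = tri(soc, 0.3, 0.55, 0.8)
        soc_high = tri(soc, 0.6, 0.9, 1.2)
        d = torque_demand / t_max                  # normalised torque demand
        dem_low, dem_mid, dem_high = tri(d, -0.4, 0.0, 0.5), tri(d, 0.2, 0.5, 0.8), tri(d, 0.5, 1.0, 1.4)

        # rule base: (firing strength, ICE share of the demanded torque)
        rules = [
            (min(soc_high, dem_low), 0.0),         # battery full, light load: electric only
            (min(soc_high, dem_mid), 0.5),
            (min(soc_high, dem_high), 0.8),
            (min(soc_mid, dem_low), 0.3),
            (min(soc_mid, dem_mid), 0.7),
            (min(soc_mid, dem_high), 0.9),
            (min(soc_low, dem_low), 0.9),          # battery low: ICE also recharges via the ISG
            (min(soc_low, dem_mid), 1.0),
            (min(soc_low, dem_high), 1.0),
        ]
        num = sum(w * share for w, share in rules)
        den = sum(w for w, _ in rules) or 1.0
        ice_torque = torque_demand * num / den     # weighted-average defuzzification
        return ice_torque, torque_demand - ice_torque

    ice, electric = torque_split(soc=0.35, torque_demand=180.0)
    print(round(ice, 1), "Nm from ICE,", round(electric, 1), "Nm from ISG/EMG")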

  10. A Parallel Energy-Sharing Control Strategy for Fuel Cell Hybrid Vehicle

    Directory of Open Access Journals (Sweden)

    Nik Rumzi Nik Idris

    2011-08-01

    Full Text Available This paper presents a parallel energy-sharing control strategy for fuel cell hybrid vehicles (FCHVs). The hybrid source discussed consists of a fuel cell (FC) generator and energy storage units (ESUs) composed of battery and ultracapacitor (UC) modules. A direct current (DC) bus is used to interface between the energy sources and the electric vehicle (EV) propulsion system (the loads). The energy sources are connected to the DC bus through power electronic converters. A total of six control loops are designed in the supervisory system in order to regulate the DC bus voltage, control the current flow, and monitor the state of charge (SOC) of each energy storage device at the same time. Proportional plus integral (PI) controllers are employed to regulate the output of each control loop with respect to its reference signal. The proposed energy control system is simulated in the MATLAB/Simulink environment. Results indicate that the proposed parallel energy-sharing control system provides a practical hybrid vehicle that responds to the vehicle traction demand while preventing the FC and battery from being overstressed.
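
    One of those loops can be sketched as a discrete PI controller regulating the DC bus voltage by setting the current reference drawn from the fuel cell converter, as below. The gains, set point, current limit and the first-order capacitor bus model are illustrative assumptions rather than the tuned MATLAB/Simulink design of the paper.

    def pi_controller(kp, ki, dt):
        integral = 0.0
        def step(error):
            nonlocal integral
            integral += error * dt
            return kp * error + ki * integral
        return step

    V_REF, DT, C_BUS = 400.0, 1e-3, 0.05            # volts, seconds, farads (assumed)
    vdc, load_current = 360.0, 20.0                  # initial bus voltage and load draw
    voltage_loop = pi_controller(kp=2.0, ki=40.0, dt=DT)

    for _ in range(200):                             # simulate 0.2 s
        i_fc_ref = voltage_loop(V_REF - vdc)         # outer loop sets the FC current reference
        i_fc = max(0.0, min(i_fc_ref, 100.0))        # converter current limit
        vdc += (i_fc - load_current) * DT / C_BUS    # simple capacitor model of the DC bus
    print("bus voltage after 0.2 s:", round(vdc, 1), "V")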

  11. A massively parallel strategy for STR marker development, capture, and genotyping.

    Science.gov (United States)

    Kistler, Logan; Johnson, Stephen M; Irwin, Mitchell T; Louis, Edward E; Ratan, Aakrosh; Perry, George H

    2017-09-06

    Short tandem repeat (STR) variants are highly polymorphic markers that facilitate powerful population genetic analyses. STRs are especially valuable in conservation and ecological genetic research, yielding detailed information on population structure and short-term demographic fluctuations. Massively parallel sequencing has not previously been leveraged for scalable, efficient STR recovery. Here, we present a pipeline for developing STR markers directly from high-throughput shotgun sequencing data without a reference genome, and an approach for highly parallel target STR recovery. We employed our approach to capture a panel of 5000 STRs from a test group of diademed sifakas (Propithecus diadema, n = 3), endangered Malagasy rainforest lemurs, and we report extremely efficient recovery of targeted loci: 97.3-99.6% of STRs were characterized with ≥10x non-redundant sequence coverage. We then tested our STR capture strategy on P. diadema fecal DNA, and report robust initial results and suggestions for future implementations. In addition to STR targets, this approach also generates large, genome-wide single nucleotide polymorphism (SNP) panels from flanking regions. Our method provides a cost-effective and scalable solution for rapid recovery of large STR and SNP datasets in any species without needing a reference genome, and can be used even with suboptimal DNA more easily acquired in conservation and ecological studies. Published by Oxford University Press on behalf of Nucleic Acids Research 2017.

  12. Parallel Conjugate Gradient: Effects of Ordering Strategies, Programming Paradigms, and Architectural Platforms

    Science.gov (United States)

    Oliker, Leonid; Heber, Gerd; Biswas, Rupak

    2000-01-01

    The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique to solve sparse linear systems that are symmetric and positive definite. A sparse matrix-vector multiply (SPMV) usually accounts for most of the floating-point operations within a CG iteration. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and SPMV using different programming paradigms and architectures. Results show that for this class of applications, ordering significantly improves overall performance, that cache reuse may be more important than reducing communication, and that it is possible to achieve message passing performance using shared memory constructs through careful data ordering and distribution. However, a multi-threaded implementation of CG on the Tera MTA does not require special ordering or partitioning to obtain high efficiency and scalability.
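
    For readers unfamiliar with the kernel being discussed, a textbook CG loop with its SPMV highlighted is sketched below in Python; it is a serial reference, not the ordered/partitioned parallel implementations evaluated in the paper. The tridiagonal test matrix is an arbitrary SPD example.

    import numpy as np
    from scipy.sparse import diags

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x                 # initial residual
        p = r.copy()
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p                # the SPMV: dominant cost per iteration
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    n = 1000                          # SPD test matrix: 1-D Laplacian
    A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)
    x = conjugate_gradient(A, b)
    print("residual norm:", np.linalg.norm(b - A @ x))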

  13. Convergent, Parallel and Correlated Evolution of Trophic Morphologies in the Subfamily Schizothoracinae from the Qinghai-Tibetan Plateau

    Science.gov (United States)

    Qi, Delin; Chao, Yan; Guo, Songchang; Zhao, Lanying; Li, Taiping; Wei, Fulei; Zhao, Xinquan

    2012-01-01

    Schizothoracine fishes distributed in the water system of the Qinghai-Tibetan plateau (QTP) and adjacent areas are characterized by being highly adaptive to the cold and hypoxic environment of the plateau, as well as by a high degree of diversity in trophic morphology due to resource polymorphisms. Although convergent and parallel evolution are prevalent in the organisms of the QTP, it remains unknown whether similar evolutionary patterns have occurred in the schizothoracine fishes. Here, we constructed for the first time a tentative molecular phylogeny of the schizothoracine fishes based on the complete sequences of the cytochrome b gene. We employed this molecular phylogenetic framework to examine the evolution of trophic morphologies. We used Pagel's maximum likelihood method to estimate the evolutionary associations of trophic morphologies and food resource use. Our results showed that the molecular and published morphological phylogenies of Schizothoracinae are partially incongruent with respect to some intergeneric relationships. The phylogenetic results revealed that four character states of five trophic morphologies and of food resource use evolved at least twice during the diversification of the subfamily. State transitions are the result of evolutionary patterns including either convergence or parallelism or both. Furthermore, our analyses indicate that some characters of trophic morphologies in the Schizothoracinae have undergone correlated evolution, which are somewhat correlated with different food resource uses. Collectively, our results reveal new examples of convergent and parallel evolution in the organisms of the QTP. The adaptation to different trophic niches through the modification of trophic morphologies and feeding behaviour as found in the schizothoracine fishes may account for the formation and maintenance of the high degree of diversity and radiations in fish communities endemic to QTP. PMID:22470515

  14. New strategy for eliminating zero-sequence circulating current between parallel operating three-level NPC voltage source inverters

    DEFF Research Database (Denmark)

    Li, Kai; Dong, Zhenhua; Wang, Xiaodong

    2018-01-01

    A novel strategy based on a zero common mode voltage pulse-width modulation (ZCMV-PWM) technique and zero-sequence circulating current (ZSCC) feedback control is proposed to eliminate ZSCCs between three-level neutral point clamped (NPC) voltage source inverters with common AC and DC buses that are operating in parallel. First, an equivalent model of ZSCC in a three-phase three-level NPC inverter paralleled system is developed. Second, on the basis of the analysis of the excitation source of ZSCCs, i.e., the difference in common mode voltages (CMVs) between paralleled inverters, the ZCMV-PWM method is presented to reduce CMVs, and a simple electric circuit is adopted to control ZSCCs and neutral point potential. Finally, simulation and experiment are conducted to illustrate the effectiveness of the proposed strategy. Results show that ZSCCs between paralleled inverters can be eliminated effectively under steady and dynamic states. Moreover, the proposed strategy exhibits the advantage of not requiring carrier synchronization. It can be utilized in inverters with different types of filters.

  15. Selectivity of Nanocrystalline IrO2-Based Catalysts in Parallel Chlorine and Oxygen Evolution

    Czech Academy of Sciences Publication Activity Database

    Kuznetsova, Elizaveta; Petrykin, Valery; Sunde, S.; Krtil, Petr

    2015-01-01

    Vol. 6, No. 2 (2015), pp. 198-210, ISSN 1868-2529. EU Projects: European Commission (XE) 214936. Institutional support: RVO:61388955. Keywords: iridium dioxide * oxygen evolution * chlorine evolution. Subject RIV: CG - Electrochemistry. Impact factor: 2.347, year: 2015

  16. The evolution of unconditional strategies via the 'multiplier effect'.

    Science.gov (United States)

    McNamara, John M; Dall, Sasha R X

    2011-03-01

    Ostensibly, it makes sense in a changeable world to condition behaviour and development on information when it is available. Nevertheless, unconditional behavioural and life history strategies are widespread. Here, we show how intergenerational effects can limit the evolutionary value of responding to reliable environmental cues, and thus favour the evolutionary persistence of otherwise paradoxical unconditional strategies. While cue-ignoring genotypes do poorly in the wrong environments, in the right environment they will leave many copies of themselves, which will themselves leave many copies, and so on, leading genotypes to accumulate in habitats in which they do well. We call this 'The Multiplier Effect'. We explore the consequences of the multiplier effect by focussing on the ecologically important phenomenon of natal philopatry. We model the environment as a large number of temporally varying breeding sites connected by natal dispersal between sites. Our aim is to identify which aspects of an environment promote the multiplier effect. We show that, if sites remain connected through some background level of 'accidental' dispersal, unconditional natal philopatry can evolve even when there is density dependence (with its accompanying kin competition effects) and cues are only mildly erroneous. Thus, the multiplier effect may underpin the evolution and maintenance of unconditional strategies such as natal philopatry in many biological systems. © 2011 Blackwell Publishing Ltd/CNRS.

  17. Many-Objective Particle Swarm Optimization Using Two-Stage Strategy and Parallel Cell Coordinate System.

    Science.gov (United States)

    Hu, Wang; Yen, Gary G; Luo, Guangchun

    2017-06-01

    It is a daunting challenge to balance the convergence and diversity of an approximate Pareto front in a many-objective optimization evolutionary algorithm. A novel algorithm, named many-objective particle swarm optimization with the two-stage strategy and parallel cell coordinate system (PCCS), is proposed in this paper to improve the comprehensive performance in terms of the convergence and diversity. In the proposed two-stage strategy, the convergence and diversity are separately emphasized at different stages by a single-objective optimizer and a many-objective optimizer, respectively. A PCCS is exploited to manage the diversity, such as maintaining a diverse archive, identifying the dominance resistant solutions, and selecting the diversified solutions. In addition, a leader group is used for selecting the global best solutions to balance the exploitation and exploration of a population. The experimental results illustrate that the proposed algorithm outperforms six chosen state-of-the-art designs in terms of the inverted generational distance and hypervolume over the DTLZ test suite.
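
    The cell-coordinate idea can be illustrated with the short sketch below, which maps each archive member's objectives to integer cell indices (assuming K equal divisions of each objective's observed range) so that crowded cells signal poor diversity. It is only a sketch of the mapping step, not the full PCCS bookkeeping or the two-stage optimizer of the paper.

    import numpy as np

    def cell_coordinates(objectives, K=10):
        objs = np.asarray(objectives, dtype=float)        # shape (n_solutions, n_objectives)
        f_min, f_max = objs.min(axis=0), objs.max(axis=0)
        span = np.where(f_max > f_min, f_max - f_min, 1.0)
        cells = np.ceil(K * (objs - f_min) / span).astype(int)
        return np.clip(cells, 1, K)                       # integer cell index per objective

    archive = np.random.rand(8, 3)                        # 8 solutions, 3 objectives (toy data)
    coords = cell_coordinates(archive)
    unique_cells = {tuple(row) for row in coords}         # shared coordinates indicate crowding
    print(coords, "\ndistinct cells:", len(unique_cells))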

  18. The evolution of Soviet forces, strategy, and command

    International Nuclear Information System (INIS)

    Ball, D.; Bethe, H.A.; Blair, B.G.; Bracken, P.; Carter, A.B.; Dickinson, H.; Garwin, R.L.; Holloway, D.; Kendall, H.W.

    1988-01-01

    This paper reports on the evolution of Soviet forces, strategy and command. Soviet leaders have repeatedly emphasized that it would be tantamount to suicide to start a nuclear war. Mutual deterrence, however, does not make nuclear war impossible. The danger remains that a large-scale nuclear war could start inadvertently in an intense crisis, or by escalation out of a conventional war, or as an unforeseen combination of these. For these reasons crisis management has become a central issue in the United States, but the standard Soviet response to this Western interest has been to say that what is needed is crisis avoidance, not recipes for brinkmanship masquerading under another name. There is much sense in this view. Nevertheless, this demeanor does not mean that the Soviet Union has given no thought to the danger that a crisis might lead to nuclear war, only that Soviet categories for thinking about such matters differ from those employed in the United States.

  19. Using 2-Opt based evolution strategy for travelling salesman problem

    Directory of Open Access Journals (Sweden)

    Kenan Karagul

    2016-03-01

    Full Text Available The harmony search algorithm, which matches the (µ+1) evolution strategy, is a heuristic method that simulates the process of music improvisation. In this paper, a harmony search algorithm is applied directly to the travelling salesman problem. Instead of conventional selection operators such as the roulette wheel, the real-valued harmony search solutions are transformed into an order-index (vertex) representation and improved using the 2-Opt local search algorithm. The resulting algorithm is then tested on two different parameter groups from TSPLIB. The proposed method is compared with a classical 2-Opt procedure restarted randomly at each step and with the best known solutions of the TSPLIB test instances. The results show that the proposed algorithm offers valuable solutions.
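
    A minimal 2-Opt improvement pass of the kind plugged into such a loop is sketched below; random planar points stand in for a TSPLIB instance, and the harmony-search layer itself is not reproduced.

    import random, math

    def tour_length(tour, pts):
        return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def two_opt(tour, pts):
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 2):
                for j in range(i + 1, len(tour) - 1):
                    # reverse the segment between the two cut points if it shortens the tour
                    candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                    if tour_length(candidate, pts) < tour_length(tour, pts):
                        tour, improved = candidate, True
        return tour

    random.seed(1)
    points = [(random.random(), random.random()) for _ in range(40)]
    start = list(range(40))
    best = two_opt(start, points)
    print(round(tour_length(start, points), 3), "->", round(tour_length(best, points), 3))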

  20. Defining the best parallelization strategy for a diphasic compressible fluid mechanics code

    International Nuclear Information System (INIS)

    Berthou, Jean-Yves; Fayolle, Eric; Faucher, Eric; Scliffet, Laurent

    2000-01-01

    parallelization strategy we recommend for codes comparable to ECOSS. (author)

  1. Defining the best parallelization strategy for a diphasic compressible fluid mechanics code

    Energy Technology Data Exchange (ETDEWEB)

    Berthou, Jean-Yves; Fayolle, Eric [Electricite de France, Research and Development division, Modeling and Information Technologies Department, CLAMART CEDEX (France); Faucher, Eric; Scliffet, Laurent [Electricite de France, Research and Development Division, Mechanics and Component Technology Branch Department, Moret sur Loing (France)

    2000-09-01

    parallelization strategy we recommend for codes comparable to ECOSS. (author)

  2. Parallel Genetic and Phenotypic Evolution of DNA Superhelicity in Experimental Populations of Escherichia coli

    DEFF Research Database (Denmark)

    Crozat, Estelle; Winkworth, Cynthia; Gaffé, Joël

    2010-01-01

    DNA supercoiling is the master function that interconnects chromosome structure and global gene transcription. This function has recently been shown to be under strong selection in Escherichia coli. During the evolution of 12 initially identical populations propagated in a defined environment ... indicate that changes in DNA superhelicity have been important in the evolution of these populations. Surprisingly, however, most of the evolved alleles we tested had either no detectable or slightly deleterious effects on fitness, despite these signatures of positive selection.

  3. Convergent Evolution of Hemoglobin Function in High-Altitude Andean Waterfowl Involves Limited Parallelism at the Molecular Sequence Level.

    Directory of Open Access Journals (Sweden)

    Chandrasekhar Natarajan

    2015-12-01

    Full Text Available A fundamental question in evolutionary genetics concerns the extent to which adaptive phenotypic convergence is attributable to convergent or parallel changes at the molecular sequence level. Here we report a comparative analysis of hemoglobin (Hb) function in eight phylogenetically replicated pairs of high- and low-altitude waterfowl taxa to test for convergence in the oxygenation properties of Hb, and to assess the extent to which convergence in biochemical phenotype is attributable to repeated amino acid replacements. Functional experiments on native Hb variants and protein engineering experiments based on site-directed mutagenesis revealed the phenotypic effects of specific amino acid replacements that were responsible for convergent increases in Hb-O2 affinity in multiple high-altitude taxa. In six of the eight taxon pairs, high-altitude taxa evolved derived increases in Hb-O2 affinity that were caused by a combination of unique replacements, parallel replacements (involving identical-by-state variants with independent mutational origins in different lineages), and collateral replacements (involving shared, identical-by-descent variants derived via introgressive hybridization). In genome scans of nucleotide differentiation involving high- and low-altitude populations of three separate species, function-altering amino acid polymorphisms in the globin genes emerged as highly significant outliers, providing independent evidence for adaptive divergence in Hb function. The experimental results demonstrate that convergent changes in protein function can occur through multiple historical paths, and can involve multiple possible mutations. Most cases of convergence in Hb function did not involve parallel substitutions and most parallel substitutions did not affect Hb-O2 affinity, indicating that the repeatability of phenotypic evolution does not require parallelism at the molecular level.

  4. Parallel Evolution of a Type IV Secretion System in Radiating Lineages of the Host-Restricted Bacterial Pathogen Bartonella

    Science.gov (United States)

    Engel, Philipp; Salzburger, Walter; Liesch, Marius; Chang, Chao-Chin; Maruyama, Soichi; Lanz, Christa; Calteau, Alexandra; Lajus, Aurélie; Médigue, Claudine; Schuster, Stephan C.; Dehio, Christoph

    2011-01-01

    Adaptive radiation is the rapid origination of multiple species from a single ancestor as the result of concurrent adaptation to disparate environments. This fundamental evolutionary process is considered to be responsible for the genesis of a great portion of the diversity of life. Bacteria have evolved enormous biological diversity by exploiting an exceptional range of environments, yet diversification of bacteria via adaptive radiation has been documented in a few cases only and the underlying molecular mechanisms are largely unknown. Here we show a compelling example of adaptive radiation in pathogenic bacteria and reveal their genetic basis. Our evolutionary genomic analyses of the α-proteobacterial genus Bartonella uncover two parallel adaptive radiations within these host-restricted mammalian pathogens. We identify a horizontally-acquired protein secretion system, which has evolved to target specific bacterial effector proteins into host cells as the evolutionary key innovation triggering these parallel adaptive radiations. We show that the functional versatility and adaptive potential of the VirB type IV secretion system (T4SS), and thereby translocated Bartonella effector proteins (Beps), evolved in parallel in the two lineages prior to their radiations. Independent chromosomal fixation of the virB operon and consecutive rounds of lineage-specific bep gene duplications followed by their functional diversification characterize these parallel evolutionary trajectories. Whereas most Beps maintained their ancestral domain constitution, strikingly, a novel type of effector protein emerged convergently in both lineages. This resulted in similar arrays of host cell-targeted effector proteins in the two lineages of Bartonella as the basis of their independent radiation. The parallel molecular evolution of the VirB/Bep system displays a striking example of a key innovation involved in independent adaptive processes and the emergence of bacterial pathogens

  5. Parallel evolution of a type IV secretion system in radiating lineages of the host-restricted bacterial pathogen Bartonella.

    Directory of Open Access Journals (Sweden)

    Philipp Engel

    2011-02-01

    Full Text Available Adaptive radiation is the rapid origination of multiple species from a single ancestor as the result of concurrent adaptation to disparate environments. This fundamental evolutionary process is considered to be responsible for the genesis of a great portion of the diversity of life. Bacteria have evolved enormous biological diversity by exploiting an exceptional range of environments, yet diversification of bacteria via adaptive radiation has been documented in a few cases only and the underlying molecular mechanisms are largely unknown. Here we show a compelling example of adaptive radiation in pathogenic bacteria and reveal their genetic basis. Our evolutionary genomic analyses of the α-proteobacterial genus Bartonella uncover two parallel adaptive radiations within these host-restricted mammalian pathogens. We identify a horizontally-acquired protein secretion system, which has evolved to target specific bacterial effector proteins into host cells as the evolutionary key innovation triggering these parallel adaptive radiations. We show that the functional versatility and adaptive potential of the VirB type IV secretion system (T4SS), and thereby translocated Bartonella effector proteins (Beps), evolved in parallel in the two lineages prior to their radiations. Independent chromosomal fixation of the virB operon and consecutive rounds of lineage-specific bep gene duplications followed by their functional diversification characterize these parallel evolutionary trajectories. Whereas most Beps maintained their ancestral domain constitution, strikingly, a novel type of effector protein emerged convergently in both lineages. This resulted in similar arrays of host cell-targeted effector proteins in the two lineages of Bartonella as the basis of their independent radiation. The parallel molecular evolution of the VirB/Bep system displays a striking example of a key innovation involved in independent adaptive processes and the emergence of bacterial pathogens.

  6. Parallel evolution of a type IV secretion system in radiating lineages of the host-restricted bacterial pathogen Bartonella.

    Science.gov (United States)

    Engel, Philipp; Salzburger, Walter; Liesch, Marius; Chang, Chao-Chin; Maruyama, Soichi; Lanz, Christa; Calteau, Alexandra; Lajus, Aurélie; Médigue, Claudine; Schuster, Stephan C; Dehio, Christoph

    2011-02-10

    Adaptive radiation is the rapid origination of multiple species from a single ancestor as the result of concurrent adaptation to disparate environments. This fundamental evolutionary process is considered to be responsible for the genesis of a great portion of the diversity of life. Bacteria have evolved enormous biological diversity by exploiting an exceptional range of environments, yet diversification of bacteria via adaptive radiation has been documented in a few cases only and the underlying molecular mechanisms are largely unknown. Here we show a compelling example of adaptive radiation in pathogenic bacteria and reveal their genetic basis. Our evolutionary genomic analyses of the α-proteobacterial genus Bartonella uncover two parallel adaptive radiations within these host-restricted mammalian pathogens. We identify a horizontally-acquired protein secretion system, which has evolved to target specific bacterial effector proteins into host cells as the evolutionary key innovation triggering these parallel adaptive radiations. We show that the functional versatility and adaptive potential of the VirB type IV secretion system (T4SS), and thereby translocated Bartonella effector proteins (Beps), evolved in parallel in the two lineages prior to their radiations. Independent chromosomal fixation of the virB operon and consecutive rounds of lineage-specific bep gene duplications followed by their functional diversification characterize these parallel evolutionary trajectories. Whereas most Beps maintained their ancestral domain constitution, strikingly, a novel type of effector protein emerged convergently in both lineages. This resulted in similar arrays of host cell-targeted effector proteins in the two lineages of Bartonella as the basis of their independent radiation. The parallel molecular evolution of the VirB/Bep system displays a striking example of a key innovation involved in independent adaptive processes and the emergence of bacterial pathogens

  7. Novel Differential Current Control Strategy Based on a Modified Three-Level SVPWM for Two Parallel-Connected Inverters

    DEFF Research Database (Denmark)

    Zorig, Abdelmalik; Barkat, Said; Belkheiri, Mohammed

    2017-01-01

    Recently, parallel inverters have been investigated to provide multilevel characteristics besides their advantage to increase the power system capacity, reliability, and efficiency. However, the issue of differential currents imbalance remains a challenge in parallel inverter operation. The distribution of switching vectors of the resulting multilevel topology has a certain degree of self-differential current balancing properties. Nevertheless, the method alone is not sufficient to maintain balanced differential current in practical applications. This paper proposes a closed-loop differential current control method by introducing a control variable adjusting the dwell time of the selected switching vectors and thus maintaining the differential currents balanced without affecting the overall system performance. The control strategy, including distribution of switching sequence, selection...

  8. Parallel evolution under chemotherapy pressure in 29 breast cancer cell lines results in dissimilar mechanisms of resistance.

    Directory of Open Access Journals (Sweden)

    Bálint Tegze

    Full Text Available BACKGROUND: Developing chemotherapy resistant cell lines can help to identify markers of resistance. Instead of using a panel of highly heterogeneous cell lines, we assumed that a truly robust and convergent pattern of resistance can be identified in multiple parallel engineered derivatives of only a few parental cell lines. METHODS: Parallel cell populations were initiated for two breast cancer cell lines (MDA-MB-231 and MCF-7) and these were treated independently for 18 months with doxorubicin or paclitaxel. IC50 values against 4 chemotherapy agents were determined to measure cross-resistance. Chromosomal instability and karyotypic changes were determined by cytogenetics. TaqMan RT-PCR measurements were performed for resistance-candidate genes. Pgp activity was measured by FACS. RESULTS: Altogether 16 doxorubicin- and 13 paclitaxel-treated cell lines were developed, showing 2-46 fold and 3-28 fold increases in resistance, respectively. The RT-PCR and FACS analyses confirmed changes in tubulin isoform composition, TOP2A and MVP expression and activity of transport pumps (ABCB1, ABCG2). Cytogenetics showed fewer chromosomes but more structural aberrations in the resistant cells. CONCLUSION: We surpassed previous studies by developing a large number of cell lines in parallel to investigate chemoresistance. While the heterogeneity caused evolution of multiple resistant clones with different resistance characteristics, the activation of only a few mechanisms was sufficient in one cell line to achieve resistance.

  9. The Voltage-Gated Potassium Channel Subfamily KQT Member 4 (KCNQ4) Displays Parallel Evolution in Echolocating Bats

    Science.gov (United States)

    Liu, Yang; Han, Naijian; Franchini, Lucía F.; Xu, Huihui; Pisciottano, Francisco; Elgoyhen, Ana Belén; Rajan, Koilmani Emmanuvel; Zhang, Shuyi

    2012-01-01

    Bats are the only mammals that use highly developed laryngeal echolocation, a sensory mechanism based on the ability to emit laryngeal sounds and interpret the returning echoes to identify objects. Although this capability allows bats to orientate and hunt in complete darkness, endowing them with great survival advantages, the genetic bases underlying the evolution of bat echolocation are still largely unknown. Echolocation requires high-frequency hearing that in mammals is largely dependent on somatic electromotility of outer hair cells. Then, understanding the molecular evolution of outer hair cell genes might help to unravel the evolutionary history of echolocation. In this work, we analyzed the molecular evolution of two key outer hair cell genes: the voltage-gated potassium channel gene KCNQ4 and CHRNA10, the gene encoding the α10 nicotinic acetylcholine receptor subunit. We reconstructed the phylogeny of bats based on KCNQ4 and CHRNA10 protein and nucleotide sequences. A phylogenetic tree built using KCNQ4 amino acid sequences showed that two paraphyletic clades of laryngeal echolocating bats grouped together, with eight shared substitutions among particular lineages. In addition, our analyses indicated that two of these parallel substitutions, M388I and P406S, were probably fixed under positive selection and could have had a strong functional impact on KCNQ4. Moreover, our results indicated that KCNQ4 evolved under positive selection in the ancestral lineage leading to mammals, suggesting that this gene might have been important for the evolution of mammalian hearing. On the other hand, we found that CHRNA10, a gene that evolved adaptively in the mammalian lineage, was under strong purifying selection in bats. Thus, the CHRNA10 amino acid tree did not show echolocating bat monophyly and reproduced the bat species tree. These results suggest that only a subset of hearing genes could underlie the evolution of echolocation. The present work continues to

  10. Parallel evolution of tetrodotoxin resistance in three voltage-gated sodium channel genes in the garter snake Thamnophis sirtalis.

    Science.gov (United States)

    McGlothlin, Joel W; Chuckalovcak, John P; Janes, Daniel E; Edwards, Scott V; Feldman, Chris R; Brodie, Edmund D; Pfrender, Michael E; Brodie, Edmund D

    2014-11-01

    Members of a gene family expressed in a single species often experience common selection pressures. Consequently, the molecular basis of complex adaptations may be expected to involve parallel evolutionary changes in multiple paralogs. Here, we use bacterial artificial chromosome library scans to investigate the evolution of the voltage-gated sodium channel (Nav) family in the garter snake Thamnophis sirtalis, a predator of highly toxic Taricha newts. Newts possess tetrodotoxin (TTX), which blocks Nav's, arresting action potentials in nerves and muscle. Some Thamnophis populations have evolved resistance to extremely high levels of TTX. Previous work has identified amino acid sites in the skeletal muscle sodium channel Nav1.4 that confer resistance to TTX and vary across populations. We identify parallel evolution of TTX resistance in two additional Nav paralogs, Nav1.6 and 1.7, which are known to be expressed in the peripheral nervous system and should thus be exposed to ingested TTX. Each paralog contains at least one TTX-resistant substitution identical to a substitution previously identified in Nav1.4. These sites are fixed across populations, suggesting that the resistant peripheral nerves antedate resistant muscle. In contrast, three sodium channels expressed solely in the central nervous system (Nav1.1-1.3) showed no evidence of TTX resistance, consistent with protection from toxins by the blood-brain barrier. We also report the exon-intron structure of six Nav paralogs, the first such analysis for snake genes. Our results demonstrate that the molecular basis of adaptation may be both repeatable across members of a gene family and predictable based on functional considerations. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  11. Battery parameterisation based on differential evolution via a boundary evolution strategy

    Science.gov (United States)

    Yang, Guangya

    2014-01-01

    Attention has been given to battery modelling in the electrical engineering field following the current development of renewable energy and the electrification of transportation. The establishment of the equivalent circuit model of the battery requires data preparation and parameterisation. Moreover, as the equivalent circuit model is an abstract map of the battery electric characteristics, the determination of the possible ranges of parameters can be a challenging task. In this paper, an efficient yet easy to implement method is proposed to parameterise the equivalent circuit model of batteries utilising the advances of evolutionary algorithms (EAs). Differential evolution (DE) is selected and modified to parameterise an equivalent circuit model of lithium-ion batteries. A boundary evolution strategy (BES) is developed and incorporated into the DE to update the parameter boundaries during the parameterisation. The method can parameterise the model without extensive data preparation. In addition, the approach can also estimate the initial SOC and the available capacity. The efficiency of the approach is verified through two battery packs, one an 8-cell battery module and the other a pack from an electric vehicle.
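
    The combination of DE with an evolving boundary can be sketched as below: a first-order RC (Thevenin) battery model is fitted to synthetic data, and any parameter bound that the current best solution presses against is relaxed. The model, the data and the specific boundary-update rule are illustrative assumptions, not the published BES.

    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_voltage(params, current, dt=1.0, ocv=3.7):
        r0, r1, c1 = params                              # first-order Thevenin model (assumed)
        v_rc, out = 0.0, []
        for i in current:
            v_rc += dt * (i / c1 - v_rc / (r1 * c1))
            out.append(ocv - i * r0 - v_rc)
        return np.array(out)

    true_params = np.array([0.01, 0.015, 2000.0])        # synthetic "measured" cell
    current = np.abs(rng.normal(5.0, 2.0, 300))
    measured = simulate_voltage(true_params, current) + rng.normal(0, 1e-3, 300)

    def cost(p):
        return np.mean((simulate_voltage(p, current) - measured) ** 2)

    def de_with_boundary_evolution(bounds, pop_size=15, gens=80, F=0.6, CR=0.9):
        lo, hi = bounds[:, 0].copy(), bounds[:, 1].copy()
        pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
        costs = np.array([cost(p) for p in pop])
        for _ in range(gens):
            for i in range(pop_size):
                a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)          # DE/rand/1 mutation
                cross = rng.random(len(lo)) < CR
                trial = np.where(cross, mutant, pop[i])
                if cost(trial) < costs[i]:
                    pop[i], costs[i] = trial, cost(trial)
            best = pop[np.argmin(costs)]
            # boundary evolution (simplified): widen any bound the best solution is touching
            width = hi - lo
            lo = np.where(best < lo + 0.05 * width, lo - 0.5 * width, lo)
            hi = np.where(best > hi - 0.05 * width, hi + 0.5 * width, hi)
            lo = np.maximum(lo, 1e-6)                              # keep parameters physical
        return pop[np.argmin(costs)]

    initial_bounds = np.array([[1e-3, 0.05], [1e-3, 0.05], [500.0, 1500.0]])
    print("fitted [R0, R1, C1]:", de_with_boundary_evolution(initial_bounds))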

  12. Molecular and morphological systematics of the Ellisellidae (Coelenterata: Octocorallia): Parallel evolution in a globally distributed family of octocorals

    KAUST Repository

    Bilewitch, Jaret P.

    2014-04-01

    The octocorals of the Ellisellidae constitute a diverse and widely distributed family with subdivisions into genera based on colonial growth forms. Branching patterns are repeated in several genera and congeners often display region-specific variations in a given growth form. We examined the systematic patterns of ellisellid genera and the evolution of branching form diversity using molecular phylogenetic and ancestral morphological reconstructions. Six of eight included genera were found to be polyphyletic due to biogeographical incompatibility with current taxonomic assignments and the creation of at least six new genera plus several reassignments among existing genera is necessary. Phylogenetic patterns of diversification of colony branching morphology displayed a similar transformation order in each of the two primary ellisellid clades, with a sea fan form estimated as the most-probable common ancestor with likely origins in the Indo-Pacific region. The observed parallelism in evolution indicates the existence of a constraint on the genetic elements determining ellisellid colonial morphology. However, the lack of correspondence between levels of genetic divergence and morphological diversity among genera suggests that future octocoral studies should focus on the role of changes in gene regulation in the evolution of branching patterns. © 2014 Elsevier Inc.

  13. Molecular and morphological systematics of the Ellisellidae (Coelenterata: Octocorallia): Parallel evolution in a globally distributed family of octocorals

    KAUST Repository

    Bilewitch, Jaret P.; Ekins, Merrick; Hooper, John; Degnan, Sandie M.

    2014-01-01

    The octocorals of the Ellisellidae constitute a diverse and widely distributed family with subdivisions into genera based on colonial growth forms. Branching patterns are repeated in several genera and congeners often display region-specific variations in a given growth form. We examined the systematic patterns of ellisellid genera and the evolution of branching form diversity using molecular phylogenetic and ancestral morphological reconstructions. Six of eight included genera were found to be polyphyletic due to biogeographical incompatibility with current taxonomic assignments and the creation of at least six new genera plus several reassignments among existing genera is necessary. Phylogenetic patterns of diversification of colony branching morphology displayed a similar transformation order in each of the two primary ellisellid clades, with a sea fan form estimated as the most-probable common ancestor with likely origins in the Indo-Pacific region. The observed parallelism in evolution indicates the existence of a constraint on the genetic elements determining ellisellid colonial morphology. However, the lack of correspondence between levels of genetic divergence and morphological diversity among genera suggests that future octocoral studies should focus on the role of changes in gene regulation in the evolution of branching patterns. © 2014 Elsevier Inc.

  14. Static and dynamic load-balancing strategies for parallel reservoir simulation

    International Nuclear Information System (INIS)

    Anguille, L.; Killough, J.E.; Li, T.M.C.; Toepfer, J.L.

    1995-01-01

    Accurate simulation of the complex phenomena that occur in flow in porous media can tax even the most powerful serial computers. Emergence of new parallel computer architectures as a future efficient tool in reservoir simulation may overcome this difficulty. Unfortunately, major problems remain to be solved before using parallel computers commercially: production serial programs must be rewritten to be efficient in parallel environments and load balancing methods must be explored to evenly distribute the workload on each processor during the simulation. This study implements both a static load-balancing algorithm and a receiver-initiated dynamic load-sharing algorithm to achieve high parallel efficiencies on both the IBM SP2 and Intel IPSC/860 parallel computers. Significant speedup improvement was recorded for both methods. Further optimization of these algorithms yielded a technique with efficiencies as high as 90% and 70% on 8 and 32 nodes, respectively. The increased performance was the result of the minimization of message-passing overhead
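
    A toy contrast between the two families of approaches is sketched below: a static, contiguous block partition of grid cells versus a dynamic scheme in which workers pull more work only when they become idle, which is the spirit of receiver-initiated load sharing. The per-cell cost is an artificial, deliberately unbalanced stand-in for the reservoir computation.

    import multiprocessing as mp
    import time

    def cell_work(cell):
        time.sleep(0.0005 * (1 + cell // 500))       # cost grows across the grid (artificial)
        return cell

    def run_chunk(chunk):
        for c in chunk:
            cell_work(c)

    def static_run(cells, n_workers=4):
        size = len(cells) // n_workers
        chunks = [cells[i * size:(i + 1) * size] for i in range(n_workers)]
        with mp.Pool(n_workers) as pool:
            t0 = time.perf_counter()
            pool.map(run_chunk, chunks)              # contiguous blocks fixed up front
            return time.perf_counter() - t0

    def dynamic_run(cells, n_workers=4):
        with mp.Pool(n_workers) as pool:
            t0 = time.perf_counter()
            # chunksize=1: an idle worker fetches the next cell on demand
            list(pool.imap_unordered(cell_work, cells, chunksize=1))
            return time.perf_counter() - t0

    if __name__ == "__main__":
        grid = list(range(2000))
        print("static partition :", round(static_run(grid), 2), "s")
        print("dynamic sharing  :", round(dynamic_run(grid), 2), "s")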

  15. Advanced parallel strategy for strongly coupled fast transient fluid-structure dynamics with dual management of kinematic constraints

    International Nuclear Information System (INIS)

    Faucher, Vincent

    2014-01-01

    Simulating fast transient phenomena involving fluids and structures in interaction for safety purposes requires both accurate and robust algorithms, and parallel computing to reduce the calculation time for industrial models. Managing kinematic constraints linking fluid and structural entities is thus a key issue, and this contribution promotes a dual approach over the classical penalty approach, which introduces arbitrary coefficients into the solution. This choice however severely increases the complexity of the problem, mainly due to non-permanent kinematic constraints. An innovative parallel strategy is therefore described, whose performance is demonstrated on significant examples exhibiting the full complexity of the target industrial simulations. (authors)

  16. Pteros 2.0: Evolution of the fast parallel molecular analysis library for C++ and python.

    Science.gov (United States)

    Yesylevskyy, Semen O

    2015-07-15

    Pteros is a high-performance open-source library for molecular modeling and analysis of molecular dynamics trajectories. Starting from version 2.0, Pteros is available for the C++ and Python programming languages with very similar interfaces. This makes it suitable for writing complex reusable programs in C++ and simple interactive scripts in Python alike. The new version improves the facilities for asynchronous trajectory reading and parallel execution of analysis tasks by introducing analysis plugins, which can be written in either C++ or Python in a completely uniform way. The high level of abstraction provided by analysis plugins greatly simplifies prototyping and implementation of complex analysis algorithms. Pteros is available for free under the Artistic License from http://sourceforge.net/projects/pteros/. © 2015 Wiley Periodicals, Inc.

  17. Parallel and convergent evolution of the dim-light vision gene RH1 in bats (Order: Chiroptera).

    Science.gov (United States)

    Shen, Yong-Yi; Liu, Jie; Irwin, David M; Zhang, Ya-Ping

    2010-01-21

    Rhodopsin, encoded by the gene Rhodopsin (RH1), is extremely sensitive to light, and is responsible for dim-light vision. Bats are nocturnal mammals that inhabit poor light environments. Megabats (Old-World fruit bats) generally have well-developed eyes, while microbats (insectivorous bats) have developed echolocation and their eyes are in general degraded; however, dramatic differences in the eyes, and in the reliance on vision, exist within this group. In this study, we examined the rod opsin gene (RH1), and compared its evolution to that of two cone opsin genes (SWS1 and M/LWS). While phylogenetic reconstruction with the cone opsin genes SWS1 and M/LWS generated a species tree in accord with expectations, the RH1 gene tree united Pteropodidae (Old-World fruit bats) and Yangochiroptera, with very high bootstrap values, suggesting the possibility of convergent evolution. The hypothesis of convergent evolution was further supported when nonsynonymous sites or amino acid sequences were used to construct phylogenies. Reconstructed RH1 sequences at internal nodes of the bat species phylogeny showed that: (1) Old-World fruit bats share an amino acid change (S270G) with the tomb bat; (2) Miniopterus share two amino acid changes (V104I, M183L) with Rhinolophoidea; (3) the amino acid replacement I123V occurred independently on four branches, and the replacements L99M, L266V and I286V each occurred on two branches. The multiple parallel amino acid replacements that occurred in the evolution of bat RH1 suggest the possibility of multiple convergences of their ecological specialization (i.e., various photic environments) during adaptation for the nocturnal lifestyle, and suggest that further attention is needed on the study of the ecology and behavior of bats.

  18. Evolution of symmetric reconnection layer in the presence of parallel shear flow

    Energy Technology Data Exchange (ETDEWEB)

    Lu Haoyu [Space Science Institute, School of Astronautics, Beihang University, Beijing 100191 (China); Sate Key Laboratory of Space Weather, Chinese Academy of Sciences, Beijing 100190 (China); Cao Jinbin [Space Science Institute, School of Astronautics, Beihang University, Beijing 100191 (China)

    2011-07-15

    The development of the structure of the symmetric reconnection layer in the presence of a shear flow parallel to the antiparallel magnetic field component is studied by using a set of one-dimensional (1D) magnetohydrodynamic (MHD) equations. The Riemann problem is simulated through a second-order conservative TVD (total variation diminishing) scheme, in conjunction with Roe's averages for the Riemann problem. The simulation results indicate that besides the MHD shocks and expansion waves, there exist some new small-scale structures in the reconnection layer. For the case of zero initial guide magnetic field (i.e., B_y0 = 0), a pair of intermediate shock and slow shock (SS) is formed in the presence of the parallel shear flow. The critical velocity of initial shear flow V_zc is just the Alfven velocity in the inflow region. As V_z∞ increases to a value larger than V_zc, a new slow expansion wave appears in the position of the SS of the case V_z∞ < V_zc, and one of the current densities drops to zero. As plasma β increases, the out-flow region is widened. For B_y0 ≠ 0, a pair of SSs and an additional pair of time-dependent intermediate shocks (TDISs) are found to be present. Similar to the case of B_y0 = 0, there exists a critical velocity of initial shear flow V_zc. The value of V_zc is, however, smaller than the Alfven velocity of the inflow region. As plasma β increases, the velocities of SS and TDIS increase, and the out-flow region is widened. However, the velocity of the downstream SS increases even faster, making the distance between SS and TDIS smaller. Consequently, the interaction between SS and TDIS in the case of high plasma β influences the property of direction rotation of the magnetic field across the TDIS. Thereby, a wedge in the hodogram of the tangential magnetic field comes into being. When β → ∞, TDISs disappear and the guide magnetic field becomes constant.

  19. Rapid and Parallel Adaptive Evolution of the Visual System of Neotropical Midas Cichlid Fishes.

    Science.gov (United States)

    Torres-Dowdall, Julián; Pierotti, Michele E R; Härer, Andreas; Karagic, Nidal; Woltering, Joost M; Henning, Frederico; Elmer, Kathryn R; Meyer, Axel

    2017-10-01

    Midas cichlid fish are a Central American species flock containing 13 described species that has been dated to only a few thousand years old, a historical timescale infrequently associated with speciation. Their radiation involved the colonization of several clear water crater lakes from two turbid great lakes. Therefore, Midas cichlids have been subjected to widely varying photic conditions during their radiation. Being a primary signal relay for information from the environment to the organism, the visual system is under continuing selective pressure and a prime organ system for accumulating adaptive changes during speciation, particularly in the case of dramatic shifts in photic conditions. Here, we characterize the full visual system of Midas cichlids at organismal and genetic levels, to determine what types of adaptive changes evolved within the short time span of their radiation. We show that Midas cichlids have a diverse visual system with unexpectedly high intra- and interspecific variation in color vision sensitivity and lens transmittance. Midas cichlid populations in the clear crater lakes have convergently evolved visual sensitivities shifted toward shorter wavelengths compared with the ancestral populations from the turbid great lakes. This divergence in sensitivity is driven by changes in chromophore usage, differential opsin expression, opsin coexpression, and to a lesser degree by opsin coding sequence variation. The visual system of Midas cichlids has the evolutionary capacity to rapidly integrate multiple adaptations to changing light environments. Our data may indicate that, in early stages of divergence, changes in opsin regulation could precede changes in opsin coding sequence evolution. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. Parallel altitudinal clines reveal trends in adaptive evolution of genome size in Zea mays

    Science.gov (United States)

    Berg, Jeremy J.; Birchler, James A.; Grote, Mark N.; Lorant, Anne; Quezada, Juvenal

    2018-01-01

    While the vast majority of genome size variation in plants is due to differences in repetitive sequence, we know little about how selection acts on repeat content in natural populations. Here we investigate parallel changes in intraspecific genome size and repeat content of domesticated maize (Zea mays) landraces and their wild relative teosinte across altitudinal gradients in Mesoamerica and South America. We combine genotyping, low coverage whole-genome sequence data, and flow cytometry to test for evidence of selection on genome size and individual repeat abundance. We find that population structure alone cannot explain the observed variation, implying that clinal patterns of genome size are maintained by natural selection. Our modeling additionally provides evidence of selection on individual heterochromatic knob repeats, likely due to their large individual contribution to genome size. To better understand the phenotypes driving selection on genome size, we conducted a growth chamber experiment using a population of highland teosinte exhibiting extensive variation in genome size. We find weak support for a positive correlation between genome size and cell size, but stronger support for a negative correlation between genome size and the rate of cell production. Reanalyzing published data of cell counts in maize shoot apical meristems, we then identify a negative correlation between cell production rate and flowering time. Together, our data suggest a model in which variation in genome size is driven by natural selection on flowering time across altitudinal clines, connecting intraspecific variation in repetitive sequence to important differences in adaptive phenotypes. PMID:29746459

  1. Rapid sequencing of the bamboo mitochondrial genome using Illumina technology and parallel episodic evolution of organelle genomes in grasses.

    Science.gov (United States)

    Ma, Peng-Fei; Guo, Zhen-Hua; Li, De-Zhu

    2012-01-01

    Compared to their counterparts in animals, the mitochondrial (mt) genomes of angiosperms exhibit a number of unique features. However, unravelling their evolution is hindered by the small number of completed genomes, which are essentially all Sanger sequenced. While next-generation sequencing technologies have revolutionized chloroplast genome sequencing, they are just beginning to be applied to angiosperm mt genomes. Chloroplast genomes of grasses (Poaceae) have undergone episodic evolution, and the evolutionary rate was suggested to be correlated between chloroplast and mt genomes in Poaceae. It is therefore interesting to investigate whether correlated rate change also occurred in grass mt genomes, as expected under lineage effects. A time-calibrated phylogenetic tree is needed to examine rate change. We determined a largely completed mt genome from a bamboo, Ferrocalamus rimosivaginus (Poaceae), through Illumina sequencing of total DNA. With a combination of de novo and reference-guided assembly, 39.5-fold coverage Illumina reads were finally assembled into scaffolds totalling 432,839 bp. The assembled genome contains nearly the same genes as the completed mt genomes in Poaceae. For examining evolutionary rate in grass mt genomes, we reconstructed a phylogenetic tree including 22 taxa based on 31 mt genes. The topology of the well-resolved tree was almost identical to that inferred from the chloroplast genome, with only minor differences. The inconsistency possibly derived from long branch attraction in the mtDNA tree. By calculating absolute substitution rates, we found a significant rate change (∼4-fold) in the mt genome before and after the diversification of Poaceae, in both synonymous and nonsynonymous terms. Furthermore, the rate change was correlated with that of chloroplast genomes in grasses. Our result demonstrates that Illumina sequencing is a rapid and efficient approach for obtaining angiosperm mt genome sequences. The parallel episodic evolution of mt and chloroplast

  2. New strategy for eliminating zero-sequence circulating current between parallel operating three-level NPC voltage source inverters

    DEFF Research Database (Denmark)

    Li, Kai; Dong, Zhenhua; Wang, Xiaodong

    2018-01-01

    A novel strategy based on a zero common-mode voltage pulse-width modulation (ZCMV-PWM) technique and zero-sequence circulating current (ZSCC) feedback control is proposed in this study to eliminate ZSCCs between three-level neutral-point-clamped (NPC) voltage source inverters, with common AC and DC......, the ZCMV-PWM method is presented to reduce CMVs, and a simple electric circuit is adopted to control ZSCCs and the neutral point potential. Finally, simulations and experiments are conducted to illustrate the effectiveness of the proposed strategy. Results show that ZSCCs between paralleled inverters can...

  3. Design and implementation of parallel video encoding strategies using divisible load analysis

    NARCIS (Netherlands)

    Li, Ping; Veeravalli, Bharadwaj; Kassim, A.A.

    2005-01-01

    The processing time needed for motion estimation usually accounts for a significant part of the overall processing time of the video encoder. To improve video encoding speed, reducing the execution time of the motion estimation process is essential. Parallel implementation of video encoding systems

  4. Evolution of quantum and classical strategies on networks by group interactions

    International Nuclear Information System (INIS)

    Li Qiang; Chen Minyou; Iqbal, Azhar; Abbott, Derek

    2012-01-01

    In this paper, quantum strategies are introduced within evolutionary games in order to investigate the evolution of quantum and classical strategies on networks in the public goods game. Comparing the results of evolution on a scale-free network and a square lattice, we find that a quantum strategy outperforms the classical strategies, regardless of the network. Moreover, a quantum strategy dominates the population earlier in group interactions than it does in pairwise interactions. In particular, if the hub node in a scale-free network is occupied by a cooperator initially, the strategy of cooperation will prevail in the population. However, in other situations, a quantum strategy can defeat the classical ones and finally becomes the dominant strategy in the population. (paper)
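
    The classical baseline of such models is easy to sketch. The toy simulation below runs a spatial public goods game on a ring lattice with Fermi-rule imitation of neighbours; the lattice, the payoff multiplier R, the noise K and the update rule are illustrative assumptions, and the quantum strategies of the record are not modelled.

```python
import math
import random

random.seed(1)

N, R, K, UPDATES = 100, 1.8, 0.1, 2000      # agents, pot multiplier, noise, update steps
strategies = [random.choice([0, 1]) for _ in range(N)]   # 1 = cooperate, 0 = defect


def group(centre):
    """Group centred on an agent: itself plus its two ring neighbours."""
    return [(centre - 1) % N, centre, (centre + 1) % N]


def payoff(i):
    """Payoff of agent i summed over the three groups it belongs to."""
    total = 0.0
    for centre in group(i):
        members = group(centre)
        pot = R * sum(strategies[j] for j in members)    # contributions, multiplied
        total += pot / len(members) - strategies[i]      # equal share minus own contribution
    return total


for _ in range(UPDATES):
    i = random.randrange(N)
    j = random.choice([(i - 1) % N, (i + 1) % N])
    # Fermi imitation: adopt the neighbour's strategy with higher probability
    # the larger the neighbour's payoff advantage.
    if random.random() < 1.0 / (1.0 + math.exp((payoff(i) - payoff(j)) / K)):
        strategies[i] = strategies[j]

print("final fraction of cooperators:", sum(strategies) / N)
```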

  5. Biodiversity Meets Neuroscience: From the Sequencing Ship (Ship-Seq) to Deciphering Parallel Evolution of Neural Systems in Omic's Era.

    Science.gov (United States)

    Moroz, Leonid L

    2015-12-01

    The origins of neural systems and centralized brains represent one of the major transitions in evolution. These events may have occurred more than once over the past 570-600 million years. The convergent evolution of neural circuits is evident from the diversity of unique adaptive strategies implemented by ctenophores, cnidarians, acoels, molluscs, and basal deuterostomes. However, further integration of biodiversity research and neuroscience is required to decipher the critical events leading to the development of complex integrative and cognitive functions. Here, we outline reference species and interdisciplinary approaches for reconstructing the evolution of nervous systems. In the "omic" era, it is now possible to establish fully functional genomics laboratories aboard oceanic ships and perform sequencing and real-time analyses of data at any oceanic location (named here Ship-Seq). In doing so, fragile, rare, cryptic, and planktonic organisms, or even entire marine ecosystems, become directly accessible to experimental and physiological analyses with modern analytical tools. Thus, we are now in a position to take full advantage of the countless "experiments" Nature has performed for us over 3.5 billion years of biological evolution. Together with progress in computational and comparative genomics, evolutionary neuroscience, and proteomic and developmental biology, a surprising new picture is emerging that reveals the many ways in which nervous systems evolved. As a result, this symposium provides a unique opportunity to revisit old questions about the origins of biological complexity. © The Author 2015. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.

  6. Parsing parallel evolution: ecological divergence and differential gene expression in the adaptive radiations of thick-lipped Midas cichlid fishes from Nicaragua.

    Science.gov (United States)

    Manousaki, Tereza; Hull, Pincelli M; Kusche, Henrik; Machado-Schiaffino, Gonzalo; Franchini, Paolo; Harrod, Chris; Elmer, Kathryn R; Meyer, Axel

    2013-02-01

    The study of parallel evolution facilitates the discovery of common rules of diversification. Here, we examine the repeated evolution of thick lips in Midas cichlid fishes (the Amphilophus citrinellus species complex)-from two Great Lakes and two crater lakes in Nicaragua-to assess whether similar changes in ecology, phenotypic trophic traits and gene expression accompany parallel trait evolution. Using next-generation sequencing technology, we characterize transcriptome-wide differential gene expression in the lips of wild-caught sympatric thick- and thin-lipped cichlids from all four instances of repeated thick-lip evolution. Six genes (apolipoprotein D, myelin-associated glycoprotein precursor, four-and-a-half LIM domain protein 2, calpain-9, GTPase IMAP family member 8-like and one hypothetical protein) are significantly underexpressed in the thick-lipped morph across all four lakes. However, other aspects of lips' gene expression in sympatric morphs differ in a lake-specific pattern, including the magnitude of differentially expressed genes (97-510). Generally, fewer genes are differentially expressed among morphs in the younger crater lakes than in those from the older Great Lakes. Body shape, lower pharyngeal jaw size and shape, and stable isotopes (δ(13)C and δ(15)N) differ between all sympatric morphs, with the greatest differentiation in the Great Lake Nicaragua. Some ecological traits evolve in parallel (those related to foraging ecology; e.g. lip size, body and head shape) but others, somewhat surprisingly, do not (those related to diet and food processing; e.g. jaw size and shape, stable isotopes). Taken together, this case of parallelism among thick- and thin-lipped cichlids shows a mosaic pattern of parallel and nonparallel evolution. © 2012 Blackwell Publishing Ltd.

  7. Improvement of remote monitoring on water quality in a subtropical reservoir by incorporating grammatical evolution with parallel genetic algorithms into satellite imagery.

    Science.gov (United States)

    Chen, Li; Tan, Chih-Hung; Kao, Shuh-Ji; Wang, Tai-Sheng

    2008-01-01

    A parallel GEGA was constructed by incorporating grammatical evolution (GE) into a parallel genetic algorithm (GA) to improve reservoir water quality monitoring based on remote sensing images. A cruise was conducted to ground-truth chlorophyll-a (Chl-a) concentrations longitudinally along the Feitsui Reservoir, the primary water supply for Taipei City in Taiwan. Empirical functions with multiple spectral parameters from Landsat 7 Enhanced Thematic Mapper (ETM+) data were constructed. GE, an evolutionary automatic-programming system, automatically discovers complex nonlinear mathematical relationships between observed Chl-a concentrations and remotely sensed imagery. A GA was then used with GE to optimize the appropriate function type, and several parallel subpopulations were processed to enhance search efficiency during the GA optimization. The parallel GEGA was found to perform better than a traditional linear multiple regression (LMR) model, yielding lower estimation errors.
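
    The 'various parallel subpopulations' component can be illustrated with a generic island-model GA. The sketch below evolves four subpopulations on a toy curve-fitting task and periodically migrates each island's best individual to its neighbour; the test problem, real-coded representation and migration interval are assumptions for illustration, not the GEGA system of the record.

```python
import random

random.seed(0)

# Toy target: fit coefficients (a, b) of y = a*x + b to sampled data.
DATA = [(x, 2.0 * x + 1.0) for x in range(10)]


def error(ind):
    a, b = ind
    return sum((a * x + b - y) ** 2 for x, y in DATA)


def new_ind():
    return [random.uniform(-5, 5), random.uniform(-5, 5)]


def evolve(pop):
    """One generation of binary tournament selection plus Gaussian mutation."""
    nxt = []
    for _ in range(len(pop)):
        p1, p2 = random.sample(pop, 2)
        parent = min(p1, p2, key=error)
        nxt.append([g + random.gauss(0, 0.1) for g in parent])
    return nxt


islands = [[new_ind() for _ in range(20)] for _ in range(4)]

for gen in range(1, 101):
    islands = [evolve(pop) for pop in islands]
    if gen % 10 == 0:                         # periodic ring migration of best individuals
        bests = [min(pop, key=error) for pop in islands]
        for k, pop in enumerate(islands):
            pop[random.randrange(len(pop))] = list(bests[(k + 1) % len(islands)])

best = min((ind for pop in islands for ind in pop), key=error)
print("best (a, b):", [round(g, 3) for g in best], "error:", round(error(best), 5))
```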

  8. The evolution of concepts of vestibular peripheral information processing: toward the dynamic, adaptive, parallel processing macular model

    Science.gov (United States)

    Ross, Muriel D.

    2003-01-01

    In a letter to Robert Hooke, written on 5 February, 1675, Isaac Newton wrote "If I have seen further than certain other men it is by standing upon the shoulders of giants." In his context, Newton was referring to the work of Galileo and Kepler, who preceded him. However, every field has its own giants, those men and women who went before us and, often with few tools at their disposal, uncovered the facts that enabled later researchers to advance knowledge in a particular area. This review traces the history of the evolution of views from early giants in the field of vestibular research to modern concepts of vestibular organ organization and function. Emphasis will be placed on the mammalian maculae as peripheral processors of linear accelerations acting on the head. This review shows that early, correct findings were sometimes unfortunately disregarded, impeding later investigations into the structure and function of the vestibular organs. The central themes are that the macular organs are highly complex, dynamic, adaptive, distributed parallel processors of information, and that historical references can help us to understand our own place in advancing knowledge about their complicated structure and functions.

  9. Modern spandrels: the roles of genetic drift, gene flow and natural selection in the evolution of parallel clines.

    Science.gov (United States)

    Santangelo, James S; Johnson, Marc T J; Ness, Rob W

    2018-05-16

    Urban environments offer the opportunity to study the role of adaptive and non-adaptive evolutionary processes on an unprecedented scale. While the presence of parallel clines in heritable phenotypic traits is often considered strong evidence for the role of natural selection, non-adaptive evolutionary processes can also generate clines, and this may be more likely when traits have a non-additive genetic basis due to epistasis. In this paper, we use spatially explicit simulations modelled according to the cyanogenesis (hydrogen cyanide, HCN) polymorphism in white clover ( Trifolium repens ) to examine the formation of phenotypic clines along urbanization gradients under varying levels of drift, gene flow and selection. HCN results from an epistatic interaction between two Mendelian-inherited loci. Our results demonstrate that the genetic architecture of this trait makes natural populations susceptible to decreases in HCN frequencies via drift. Gradients in the strength of drift across a landscape resulted in phenotypic clines with lower frequencies of HCN in strongly drifting populations, giving the misleading appearance of deterministic adaptive changes in the phenotype. Studies of heritable phenotypic change in urban populations should generate null models of phenotypic evolution based on the genetic architecture underlying focal traits prior to invoking selection's role in generating adaptive differentiation. © 2018 The Author(s).
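
    The epistatic two-locus architecture described here lends itself to a small Wright-Fisher sketch. The toy simulation below tracks allele frequencies at two unlinked Mendelian loci under drift alone and reports the expected frequency of the cyanogenic (HCN) phenotype, which requires a dominant allele at both loci; the population sizes, initial frequencies and generation count are illustrative assumptions rather than the parameters of the published simulations.

```python
import random

random.seed(3)


def hcn_after_drift(n_individuals, p0, q0, generations):
    """Wright-Fisher drift at two unlinked biallelic loci (no selection, no migration).
    Returns the Hardy-Weinberg expectation of the HCN phenotype, which needs at least
    one dominant allele at BOTH loci."""
    p, q = p0, q0
    gametes = 2 * n_individuals
    for _ in range(generations):
        p = sum(random.random() < p for _ in range(gametes)) / gametes
        q = sum(random.random() < q for _ in range(gametes)) / gametes
    return (1 - (1 - p) ** 2) * (1 - (1 - q) ** 2)


# Strong drift (small populations) vs weak drift (large populations).
for n in (25, 500):
    runs = [hcn_after_drift(n, 0.5, 0.5, 30) for _ in range(100)]
    print(f"N = {n:3d}: mean HCN frequency after 30 generations = {sum(runs) / len(runs):.3f}")
```

    Under drift alone the allele frequencies stay centred on their starting values, yet the mean phenotype frequency falls in the small populations, which is the non-adaptive cline mechanism the record highlights.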

  10. Extension parallel to the rift zone during segmented fault growth: application to the evolution of the NE Atlantic

    Directory of Open Access Journals (Sweden)

    A. Bubeck

    2017-11-01

    The mechanical interaction of propagating normal faults is known to influence the linkage geometry of first-order faults, and the development of second-order faults and fractures, which transfer displacement within relay zones. Here we use natural examples of growth faults from two active volcanic rift zones (Koa`e, island of Hawai`i, and Krafla, northern Iceland) to illustrate the importance of horizontal-plane extension (heave) gradients, and associated vertical-axis rotations, in evolving continental rift systems. Second-order extension and extensional-shear faults within the relay zones variably resolve components of regional extension, and components of extension and/or shortening parallel to the rift zone, to accommodate the inherently three-dimensional (3-D) strains associated with relay zone development and rotation. Such a configuration involves volume increase, which is accommodated at the surface by open fractures; in the subsurface this may be accommodated by veins or dikes oriented obliquely and normal to the rift axis. To consider the scalability of the effects of relay zone rotations, we compare the geometry and kinematics of fault and fracture sets in the Koa`e and Krafla rift zones with data from exhumed contemporaneous fault and dike systems developed within a > 5×10⁴ km² relay system that formed during development of the NE Atlantic margins. Based on the findings presented here we propose a new conceptual model for the evolution of segmented continental rift basins on the NE Atlantic margins.

  11. Increased performance in the short-term water demand forecasting through the use of a parallel adaptive weighting strategy

    Science.gov (United States)

    Sardinha-Lourenço, A.; Andrade-Campos, A.; Antunes, A.; Oliveira, M. S.

    2018-03-01

    Recent research on short-term water demand forecasting has shown that models using univariate time series based on historical data are useful and can be combined with other prediction methods to reduce errors. Water demand in drinking water distribution networks is largely repetitive in nature and, under similar meteorological conditions and consumer profiles, allows the development of a heuristic forecast model that, combined with other autoregressive models, can provide reliable forecasts. In this study, a parallel adaptive weighting strategy for forecasting water consumption over the next 24-48 h, using univariate time series of potable water consumption, is proposed. Two Portuguese potable water distribution networks are used as case studies, where the only input data are the consumption of water and the national calendar. For the development of the strategy, the Autoregressive Integrated Moving Average (ARIMA) method and a short-term heuristic forecast algorithm are used. Simulations with the model showed that, when using a parallel adaptive weighting strategy, the prediction error can be reduced by 15.96% and the average error by 9.20%. This reduction is important for the control and management of water supply systems. The proposed methodology can be extended to other forecast methods, especially when multiple forecast models are available.
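
    The core of such a strategy, running two forecasters in parallel and re-weighting them by recent accuracy, can be sketched in a few lines. The code below combines a seasonal-naive heuristic with a small autoregressive predictor on a synthetic hourly demand series, weighting each by the inverse of its smoothed recent error; the synthetic data, the fixed AR coefficients and the weighting rule are assumptions, not the ARIMA-based method of the record.

```python
import math
import random

random.seed(7)

# Synthetic hourly demand: daily and weekly cycles plus noise.
series = [50 + 10 * math.sin(2 * math.pi * t / 24)
          + 5 * math.sin(2 * math.pi * t / 168)
          + random.gauss(0, 1.5) for t in range(24 * 28)]


def heuristic(history):
    """Seasonal-naive forecaster: repeat the value observed one week (168 h) ago."""
    return history[-168]


def autoregressive(history):
    """Tiny AR-style forecaster with fixed, purely illustrative coefficients."""
    return 1.2 * history[-1] - 0.25 * history[-2] + 0.05 * history[-24]


errors = {"heuristic": 1.0, "ar": 1.0}     # running mean absolute errors of each model
alpha = 0.1                                # smoothing factor for the error estimates
combined_abs_err = []

for t in range(24 * 14, len(series)):      # start after two weeks of history
    hist = series[:t]
    f_h, f_a = heuristic(hist), autoregressive(hist)
    # Adaptive weights: inversely proportional to each model's recent error.
    w_h = (1 / errors["heuristic"]) / (1 / errors["heuristic"] + 1 / errors["ar"])
    forecast = w_h * f_h + (1 - w_h) * f_a
    actual = series[t]
    combined_abs_err.append(abs(forecast - actual))
    errors["heuristic"] = (1 - alpha) * errors["heuristic"] + alpha * abs(f_h - actual)
    errors["ar"] = (1 - alpha) * errors["ar"] + alpha * abs(f_a - actual)

print("mean absolute error of the weighted combination:",
      round(sum(combined_abs_err) / len(combined_abs_err), 2))
```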

  12. Hybrid parallel strategy for the simulation of fast transient accidental situations at reactor scale

    International Nuclear Information System (INIS)

    Faucher, V.; Galon, P.; Beccantini, A.; Crouzet, F.; Debaud, F.; Gautier, T.

    2015-01-01

    Highlights: • Reference accidental situations for current and future reactors are considered. • They require the modeling of complex fluid–structure systems at full reactor scale. • EPX software computes the non-linear transient solution with explicit time stepping. • Focus on the parallel hybrid solver specific to the proposed coupled equations. - Abstract: This contribution is dedicated to the latest methodological developments implemented in the fast transient dynamics software EUROPLEXUS (EPX) to simulate the mechanical response of fully coupled fluid–structure systems to accidental situations to be considered at reactor scale, among which the Loss of Coolant Accident, the Core Disruptive Accident and the Hydrogen Explosion. Time integration is explicit and the search for reference solutions within the safety framework prevents any simplification and approximations in the coupled algorithm: for instance, all kinematic constraints are dealt with using Lagrange Multipliers, yielding a complex flow chart when non-permanent constraints such as unilateral contact or immersed fluid–structure boundaries are considered. The parallel acceleration of the solution process is then achieved through a hybrid approach, based on a weighted domain decomposition for distributed memory computing and the use of the KAAPI library for self-balanced shared memory processing inside subdomains

  13. Evolution of strategies to improve preclinical cardiac safety testing.

    Science.gov (United States)

    Gintant, Gary; Sager, Philip T; Stockbridge, Norman

    2016-07-01

    The early and efficient assessment of cardiac safety liabilities is essential to confidently advance novel drug candidates. This article discusses evolving mechanistically based preclinical strategies for detecting drug-induced electrophysiological and structural cardiotoxicity using in vitro human ion channel assays, human-based in silico reconstructions and human stem cell-derived cardiomyocytes. These strategies represent a paradigm shift from current approaches, which rely on simplistic in vitro assays that measure blockade of the Kv11.1 current (also known as the hERG current or IKr) and on the use of non-human cells or tissues. These new strategies have the potential to improve sensitivity and specificity in the early detection of genuine cardiotoxicity risks, thereby reducing the likelihood of mistakenly discarding viable drug candidates and speeding the progression of worthy drugs into clinical trials.

  14. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    Science.gov (United States)

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of several adjustable search parameters, requiring initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse- and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS are illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing a very significant reduction in computation times with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium- and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
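
    A much simplified picture of the cooperation mechanism, namely independent searches that occasionally exchange their incumbent best solution, is sketched below on a toy parameter estimation problem. It is sequentialized for clarity and omits the asynchronous message passing, coarse- and fine-grained parallelism and self-tuning that saCeSS actually implements; the model and thresholds are illustrative assumptions.

```python
import random

random.seed(42)

# Toy "parameter estimation": recover k1, k2 that generated the observations.
TRUE = (0.7, 1.3)
OBS = [(t, TRUE[0] * t + TRUE[1] * t ** 2) for t in (1, 2, 3, 4, 5)]


def cost(theta):
    k1, k2 = theta
    return sum((k1 * t + k2 * t ** 2 - y) ** 2 for t, y in OBS)


def local_step(theta, step=0.05):
    """One random perturbation, accepted only if it improves the fit."""
    cand = [k + random.gauss(0, step) for k in theta]
    return cand if cost(cand) < cost(theta) else theta


# Four "processes", each holding its own incumbent solution.
incumbents = [[random.uniform(0, 2), random.uniform(0, 2)] for _ in range(4)]

for it in range(1, 301):
    incumbents = [local_step(th) for th in incumbents]
    if it % 50 == 0:
        # Cooperation: broadcast the overall best and let lagging searches restart from it.
        best = min(incumbents, key=cost)
        incumbents = [list(best) if cost(th) > 10 * cost(best) else th
                      for th in incumbents]

best = min(incumbents, key=cost)
print("estimated (k1, k2):", [round(k, 3) for k in best], "cost:", round(cost(best), 6))
```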

  15. Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy.

    Science.gov (United States)

    Tian, Yuling; Zhang, Hongxian

    2016-01-01

    For the purposes of information retrieval, users must find highly relevant documents from within a system (often a large one comprising many individual documents) based on an input query. Ranking the documents according to their relevance to user needs is a challenging endeavor and a hot research topic: several rank-learning methods based on machine learning techniques already exist that can generate ranking functions automatically. This paper proposes a parallel B-cell algorithm, RankBCA, for rank learning, which utilizes a clonal selection mechanism based on biological immunity. The novel algorithm is compared with traditional rank-learning algorithms through experimentation and is shown to outperform them with respect to accuracy, learning time, and convergence rate; taken together, the experimental results show that the proposed algorithm effectively and rapidly identifies optimal ranking functions.
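
    The clonal-selection idea behind such an immune-inspired ranker can be shown on a tiny learning-to-rank task: antibodies are weight vectors of a linear scoring function, the fittest are cloned and hypermutated, and the weakest are replaced. The synthetic documents, the pairwise-accuracy fitness and all parameters below are illustrative assumptions, not the RankBCA algorithm itself.

```python
import random

random.seed(5)

# Synthetic learning-to-rank data: relevance derives from a hidden linear scorer.
HIDDEN_W = [0.8, -0.2, 0.5]


def make_doc():
    x = [random.random() for _ in range(3)]
    s = sum(a * b for a, b in zip(HIDDEN_W, x))
    return x, (0 if s < 0.3 else 1 if s < 0.6 else 2)   # graded relevance label


DOCS = [make_doc() for _ in range(40)]


def fitness(w):
    """Fraction of document pairs whose score order matches their relevance order."""
    ok = total = 0
    for i in range(len(DOCS)):
        for j in range(i + 1, len(DOCS)):
            (xi, ri), (xj, rj) = DOCS[i], DOCS[j]
            if ri == rj:
                continue
            si = sum(a * b for a, b in zip(w, xi))
            sj = sum(a * b for a, b in zip(w, xj))
            total += 1
            ok += (si > sj) == (ri > rj)
    return ok / total if total else 0.0


antibodies = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(12)]

for _ in range(30):
    antibodies.sort(key=fitness, reverse=True)
    clones = []
    for rank, ab in enumerate(antibodies[:4]):      # clone the best antibodies
        for _ in range(4 - rank):                   # fitter antibodies get more clones
            rate = 0.05 * (rank + 1)                # less fit ones are mutated more strongly
            clones.append([g + random.gauss(0, rate) for g in ab])
    # Keep the elites and their clones; refill with random antibodies (receptor editing).
    antibodies = antibodies[:4] + clones + [[random.uniform(-1, 1) for _ in range(3)]
                                            for _ in range(2)]

best = max(antibodies, key=fitness)
print("pairwise ranking accuracy of the best antibody:", round(fitness(best), 3))
```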

  16. Hybrid parallel strategy for the simulation of fast transient accidental situations at reactor scale

    International Nuclear Information System (INIS)

    Faucher, V.; Galon, P.; Beccantini, A.; Crouzet, F.; Debaud, F.; Gautier, T.

    2013-01-01

    This contribution is dedicated to the latest methodological developments implemented in the fast transient dynamics software EUROPLEXUS (EPX) to simulate the mechanical response of fully coupled fluid-structure systems to accidental situations to be considered at reactor scale, among which the Loss of Coolant Accident, the Core Disruptive Accident and the Hydrogen Explosion. Time integration is explicit and the search for reference solutions within the safety framework prevents any simplification and approximations in the coupled algorithm: for instance, all kinematic constraints are dealt with using Lagrange Multipliers, yielding a complex flow chart when non-permanent constraints such as unilateral contact or immersed fluid-structure boundaries are considered. The parallel acceleration of the solution process is then achieved through a hybrid approach, based on a weighted domain decomposition for distributed memory computing and the use of the KAAPI library for self-balanced shared memory processing inside sub-domains. (authors)

  17. A Parallel Strategy for High-speed Interpolation of CNC Using Data Space Constraint Method

    Directory of Open Access Journals (Sweden)

    Shuan-qiang Yang

    2013-12-01

    A high-speed interpolation scheme using parallel computing is proposed in this paper. The interpolation method is divided into two tasks, namely, a rough task executing on the PC and a fine task on the I/O card. During the interpolation procedure, double buffers are constructed to exchange the interpolation data between the two tasks. The data space constraint method is then adopted to ensure reliable and continuous data communication between the two buffers. Therefore, the proposed scheme can be realized on common operating systems without real-time capabilities, while high-speed and high-precision motion control is still achieved. Finally, an experiment is conducted on a self-developed CNC platform; the test results verify the proposed method.
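
    The double-buffer handshake between the rough (PC-side) task and the fine (I/O-side) task can be imitated with a bounded queue: the producer can run ahead of the consumer only by the buffer depth, which is the essence of the data space constraint. The sketch below uses Python threads and a two-slot queue purely for illustration; the real scheme runs on a PC plus an I/O card, and the segment data here are placeholders.

```python
import queue
import threading
import time

SEGMENTS = 10
buffer = queue.Queue(maxsize=2)     # two-slot buffer: the rough task may run 2 segments ahead


def rough_interpolator():
    """Coarse task (PC side): split the tool path into segments and hand them over."""
    for seg in range(SEGMENTS):
        coarse_points = [(seg, i) for i in range(5)]   # placeholder segment data
        buffer.put(coarse_points)                      # blocks while the buffer is full
    buffer.put(None)                                   # sentinel: no more segments


def fine_interpolator():
    """Fine task (I/O card side): densify each segment and 'output' it."""
    while True:
        segment = buffer.get()                         # blocks while the buffer is empty
        if segment is None:
            break
        time.sleep(0.01)                               # stands in for fine interpolation work
        print("executed segment", segment[0][0])


producer = threading.Thread(target=rough_interpolator)
consumer = threading.Thread(target=fine_interpolator)
producer.start()
consumer.start()
producer.join()
consumer.join()
```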

  18. Churchill: an ultra-fast, deterministic, highly scalable and balanced parallelization strategy for the discovery of human genetic variation in clinical and population-scale genomics.

    Science.gov (United States)

    Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter

    2015-01-20

    While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.
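
    The balancing idea, cutting the genome into regions of comparable size so that every worker receives a similar share of bases, can be shown in a few lines. The chromosome lengths below are rough GRCh38 figures used only for illustration, and the function is a generic sketch, not Churchill's actual partitioner.

```python
# Approximate lengths (bp) of a few human chromosomes, for illustration only.
CHROM_LENGTHS = {"chr1": 248_956_422, "chr2": 242_193_529,
                 "chr3": 198_295_559, "chr4": 190_214_555}


def balanced_regions(chrom_lengths, n_workers):
    """Split every chromosome into fixed-size chunks and deal them out round-robin,
    so each worker ends up with roughly the same number of bases to process."""
    total = sum(chrom_lengths.values())
    chunk = total // (n_workers * 4)          # several chunks per worker smooths imbalance
    regions = []
    for chrom, length in chrom_lengths.items():
        start = 1
        while start <= length:
            end = min(start + chunk - 1, length)
            regions.append((chrom, start, end))
            start = end + 1
    assignment = {w: [] for w in range(n_workers)}
    for k, region in enumerate(regions):
        assignment[k % n_workers].append(region)
    return assignment


work = balanced_regions(CHROM_LENGTHS, n_workers=8)
for worker, regions in work.items():
    bases = sum(end - start + 1 for _, start, end in regions)
    print(f"worker {worker}: {len(regions)} regions, {bases:,} bp")
```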

  19. The evolution of Islamic State's strategy | Solomon | Scientia Militaria ...

    African Journals Online (AJOL)

    With these funds, IS has deployed soft power – digging sewage systems and providing stipends to families – to earn the loyalty of its 'citizens'. IS has also displayed superior military strategy combining conventional military doctrine with asymmetric warfare. As IS are confronted with superior conventional forces in their ...

  20. Evolution of complex asexual reproductive strategies in jellyfish

    DEFF Research Database (Denmark)

    Schnedler-Meyer, Nicolas Azaña; Pigolotti, Simone; Mariani, Patrizio

    2018-01-01

    Many living organisms in terrestrial and aquatic ecosystems rely on multiple reproductive strategies to reduce the risk of extinction in variable environments. Examples are provided by the polyp stage of several bloom-forming jellyfish species, which can reproduce asexually using different buddin...

  1. Agent Based Simulation of Group Emotions Evolution and Strategy Intervention in Extreme Events

    Directory of Open Access Journals (Sweden)

    Bo Li

    2014-01-01

    Agent-based simulation has become a prominent approach in computational modeling and analysis of public emergency management in social science research. Group emotion evolution, information diffusion, and collective behavior selection make the study of extreme incidents a complex systems problem, which requires new methods for incident management and strategy evaluation. This paper studies group emotion evolution and the effectiveness of intervention strategies using an agent-based simulation method. By employing a computational experimentation methodology, we model group emotion evolution as a complex system and test the effects of three strategies. In addition, an events-chain model is proposed to capture the cumulative influence of temporally successive events. Each strategy is examined through three simulation experiments, including two synthetic scenarios and a real case study. We show how various strategies can affect group emotion evolution in terms of complex emergence and cumulative emotional influence in extreme events. This paper also provides an effective method for using agent-based simulation to study complex collective behavior evolution in extreme incidents, emergency management, and security domains.

  2. A commutation strategy for IGBT-based CSI-fed parallel resonant

    Indian Academy of Sciences (India)

    The dynamic behaviour of the switches determines the upper frequency limit for the application. IGBTs with series diodes behave as unidirectional current switches with bidirectional voltage-blocking capability. This feature should be taken into account to decide on an appropriate switching strategy for this converter ...

  3. Special Issues on Learning Strategies: Parallels and Contrasts between Australian and Chinese Tertiary Education

    Science.gov (United States)

    Yao, Yuzuo

    2017-01-01

    Learning strategies are crucial to student learning in higher education. In this paper, student engagement, feedback mechanisms and workload arrangements at some typical universities in Australia and China are compared, followed by practical suggestions for active learning. First, an inclusive class would allow learners from…

  4. Reliability optimization of series-parallel systems with a choice of redundancy strategies using a genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Tavakkoli-Moghaddam, R. [Department of Industrial Engineering, Faculty of Engineering, University of Tehran, P.O. Box 11365/4563, Tehran (Iran, Islamic Republic of); Department of Mechanical Engineering, The University of British Columbia, Vancouver (Canada)], E-mail: tavakoli@ut.ac.ir; Safari, J. [Department of Industrial Engineering, Science and Research Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of)], E-mail: jalalsafari@pideco.com; Sassani, F. [Department of Mechanical Engineering, The University of British Columbia, Vancouver (Canada)], E-mail: sassani@mech.ubc.ca

    2008-04-15

    This paper proposes a genetic algorithm (GA) for the redundancy allocation problem for series-parallel systems when the redundancy strategy can be chosen for individual subsystems. The majority of solution methods for general redundancy allocation problems assume that the redundancy strategy for each subsystem is predetermined and fixed. Active redundancy has received more attention in the past; however, in practice both active and cold-standby redundancies may be used within a particular system design, and the choice of redundancy strategy becomes an additional decision variable. Thus, the problem is to select the best redundancy strategy, component, and redundancy level for each subsystem in order to maximize system reliability under system-level constraints. This belongs to the NP-hard class of problems, and due to its complexity it is difficult to solve optimally with traditional optimization tools. It is demonstrated in this paper that the GA is an efficient method for solving this type of problem. Finally, computational results for a typical scenario are presented and the robustness of the proposed algorithm is discussed.
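
    The key modelling point, that the redundancy strategy is itself a decision variable, can be made concrete with the textbook reliability formulas for active and cold-standby redundancy (identical components, exponential lifetimes, perfect switching). The chromosome layout, failure rates and bare-bones GA loop below are illustrative assumptions, not the instance or operators used in the paper.

```python
import math
import random

random.seed(11)

T = 100.0                                   # mission time
FAILURE_RATES = [0.004, 0.006, 0.003]       # one component type per subsystem (assumed)


def subsystem_reliability(strategy, lam, n):
    """Reliability of one subsystem with n identical components (rate lam) at time T."""
    if strategy == "active":                # 1 - P(all n components fail)
        return 1 - (1 - math.exp(-lam * T)) ** n
    # Cold standby with perfect switching: Erlang survival function.
    return math.exp(-lam * T) * sum((lam * T) ** k / math.factorial(k) for k in range(n))


def system_reliability(chromosome):
    """Chromosome = [(strategy, redundancy level), ...], one gene per series subsystem."""
    r = 1.0
    for (strategy, n), lam in zip(chromosome, FAILURE_RATES):
        r *= subsystem_reliability(strategy, lam, n)
    return r


def random_chromosome():
    return [(random.choice(["active", "standby"]), random.randint(1, 4))
            for _ in FAILURE_RATES]


# A bare-bones GA loop (no cost constraint, for brevity).
pop = [random_chromosome() for _ in range(30)]
for _ in range(60):
    pop.sort(key=system_reliability, reverse=True)
    survivors = pop[:10]
    children = []
    for _ in range(20):
        child = list(random.choice(survivors))
        gene = random.randrange(len(child))          # mutate one subsystem's gene
        child[gene] = (random.choice(["active", "standby"]), random.randint(1, 4))
        children.append(child)
    pop = survivors + children

best = max(pop, key=system_reliability)
print("best design:", best, "reliability:", round(system_reliability(best), 4))
```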

  5. Reliability optimization of series-parallel systems with a choice of redundancy strategies using a genetic algorithm

    International Nuclear Information System (INIS)

    Tavakkoli-Moghaddam, R.; Safari, J.; Sassani, F.

    2008-01-01

    This paper proposes a genetic algorithm (GA) for the redundancy allocation problem for series-parallel systems when the redundancy strategy can be chosen for individual subsystems. The majority of solution methods for general redundancy allocation problems assume that the redundancy strategy for each subsystem is predetermined and fixed. Active redundancy has received more attention in the past; however, in practice both active and cold-standby redundancies may be used within a particular system design, and the choice of redundancy strategy becomes an additional decision variable. Thus, the problem is to select the best redundancy strategy, component, and redundancy level for each subsystem in order to maximize system reliability under system-level constraints. This belongs to the NP-hard class of problems, and due to its complexity it is difficult to solve optimally with traditional optimization tools. It is demonstrated in this paper that the GA is an efficient method for solving this type of problem. Finally, computational results for a typical scenario are presented and the robustness of the proposed algorithm is discussed.

  6. An evolution-based strategy for engineering allosteric regulation

    Science.gov (United States)

    Pincus, David; Resnekov, Orna; Reynolds, Kimberly A.

    2017-04-01

    Allosteric regulation provides a way to control protein activity at the time scale of milliseconds to seconds inside the cell. An ability to engineer synthetic allosteric systems would be of practical utility for the development of novel biosensors, creation of synthetic cell signaling pathways, and design of small molecule pharmaceuticals with regulatory impact. To this end, we outline a general approach—termed rational engineering of allostery at conserved hotspots (REACH)—to introduce novel regulation into a protein of interest by exploiting latent allostery that has been hard-wired by evolution into its structure. REACH entails the use of statistical coupling analysis (SCA) to identify ‘allosteric hotspots’ on protein surfaces, the development and implementation of experimental assays to test hotspots for functionality, and a toolkit of allosteric modulators to impinge on endogenous cellular circuitry. REACH can be broadly applied to rewire cellular processes to respond to novel inputs.

  7. Evolution in Clinical Knowledge Management Strategy at Intermountain Healthcare

    Science.gov (United States)

    Hulse, Nathan C.; Galland, Joel; Borsato, Emerson P.

    2012-01-01

    In this manuscript, we present an overview of the clinical knowledge management strategy at Intermountain Healthcare in support of our electronic medical record systems. Intermountain first initiated efforts in developing a centralized enterprise knowledge repository in 2001. Applications developed, areas of emphasis served, and key areas of focus are presented. We also detail historical and current areas of emphasis, in response to business needs. PMID:23304309

  8. Unusual loss of chymosin in mammalian lineages parallels neo-natal immune transfer strategies.

    Science.gov (United States)

    Lopes-Marques, Mónica; Ruivo, Raquel; Fonseca, Elza; Teixeira, Ana; Castro, L Filipe C

    2017-11-01

    Gene duplication and loss are powerful drivers of evolutionary change. The role of loss in phenotypic diversification is notably illustrated by the variable enzymatic repertoire involved in vertebrate protein digestion. Among these we find the pepsin family of aspartic proteinases, including chymosin (Cmy). Previous studies demonstrated that Cmy, a neo-natal digestive pepsin, is inactivated in some primates, including humans. This pseudogenization event was hypothesized to result from the acquisition of maternal immune immunoglobulin G (IgG) transfer. By investigating 94 mammalian subgenomes we reveal an unprecedented level of Cmy erosion in placental mammals, with numerous independent events of gene loss taking place in Primates, Dermoptera, Rodentia, Cetacea and Perissodactyla. Our findings strongly suggest that the recurrent inactivation of Cmy correlates with the evolution of the passive transfer of IgG and uncovers a noteworthy case of evolutionary cross-talk between the digestive and the immune system, modulated by gene loss. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Multi-objective reliability optimization of series-parallel systems with a choice of redundancy strategies

    International Nuclear Information System (INIS)

    Safari, Jalal

    2012-01-01

    This paper proposes a variant of the Non-dominated Sorting Genetic Algorithm (NSGA-II) to solve a novel mathematical model for multi-objective redundancy allocation problems (MORAP). Most research on the redundancy allocation problem (RAP) has focused on single-objective optimization, with only limited work addressing multi-objective optimization. Moreover, all mathematical multi-objective models of the general RAP assume that the type of redundancy strategy for each subsystem is predetermined and known a priori. In general, active redundancy has traditionally received greater attention; however, in practice both active and cold-standby redundancies may be used within a particular system design. The choice of redundancy strategy then becomes an additional decision variable. Thus, the proposed model and solution method select the best redundancy strategy, component type, and redundancy level for each subsystem so as to maximize system reliability and minimize total system cost under system-level constraints. This problem belongs to the NP-hard class. This paper presents a second-generation Multiple-Objective Evolutionary Algorithm (MOEA), NSGA-II, to find the best solutions for the given problem. The proposed algorithm demonstrates the ability to identify a set of optimal solutions (Pareto front), which provides the Decision Maker (DM) with a complete picture of the optimal solution space. After finding the Pareto front, a procedure is used to select the best solution from it. Finally, the advantages of the presented multi-objective model and of the proposed algorithm are illustrated by solving test problems taken from the literature, and the robustness of the proposed NSGA-II is discussed.
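
    The central NSGA-II ingredient used here, extracting the non-dominated (Pareto) front from designs scored on reliability and cost, can be sketched independently of the full algorithm; the candidate designs and their objective values below are made up for illustration.

```python
def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly better in one.
    Objectives: maximize reliability, minimize cost."""
    not_worse = a["reliability"] >= b["reliability"] and a["cost"] <= b["cost"]
    strictly_better = a["reliability"] > b["reliability"] or a["cost"] < b["cost"]
    return not_worse and strictly_better


def pareto_front(designs):
    """Return the designs no other design dominates (the first NSGA-II front)."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]


# Hypothetical candidate system designs (reliability at mission time, total cost).
designs = [
    {"name": "A", "reliability": 0.90, "cost": 120},
    {"name": "B", "reliability": 0.95, "cost": 180},
    {"name": "C", "reliability": 0.93, "cost": 200},   # dominated by B
    {"name": "D", "reliability": 0.99, "cost": 320},
    {"name": "E", "reliability": 0.88, "cost": 100},
]

for d in pareto_front(designs):
    print(d["name"], d["reliability"], d["cost"])
```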

  10. The evolution of intellectual property strategy in innovation ecosystems

    DEFF Research Database (Denmark)

    Holgersson, Marcus; Granstrand, Ove; Bogers, Marcel

    2018-01-01

    In this article, we attempt to extend and nuance the debate on intellectual property (IP) strategy, appropriation, and open innovation in dynamic and systemic innovation contexts. We present the case of four generations of mobile telecommunications systems (covering the period 1980-2015), and des...... and technologies to benefit from openness and appropriation of innovation. Our analysis shows that the discussion of competitiveness and appropriability needs to be expanded from the focal appropriability regime and complementary assets to the larger context of the innovation ecosystem and its cooperative...... and competitive actor relations, with dispersed complementary and substitute assets and technologies. Consequently, the shaping of complementary and substitute appropriability regimes is central when strategizing in dynamic and systemic innovation contexts. This holds important implications for the management...

  11. Evaluation of Parallel and Fan-Beam Data Acquisition Geometries and Strategies for Myocardial SPECT Imaging

    Science.gov (United States)

    Qi, Yujin; Tsui, B. M. W.; Gilland, K. L.; Frey, E. C.; Gullberg, G. T.

    2004-06-01

    This study evaluates myocardial SPECT images obtained from parallel-hole (PH) and fan-beam (FB) collimator geometries using both circular-orbit (CO) and noncircular-orbit (NCO) acquisitions. A newly developed 4-D NURBS-based cardiac-torso (NCAT) phantom was used to simulate 99mTc-sestamibi uptake in a human torso with myocardial defects in the left ventricular (LV) wall. Two phantoms were generated to simulate patients with thick and thin body builds. Projection data including the effects of attenuation, collimator-detector response and scatter were generated using SIMSET Monte Carlo simulations. A large number of photon histories were generated such that the projection data were close to noise free. Poisson noise fluctuations were then added to simulate the count densities found in clinical data. Noise-free and noisy projection data were reconstructed using the iterative OS-EM reconstruction algorithm with attenuation compensation. The reconstructed images from noisy projection data show that the noise levels are lower for the FB as compared to the PH collimator due to the increase in detected counts. The NCO acquisition method provides slightly better resolution and a small improvement in defect contrast compared with the CO acquisition method in noise-free reconstructed images. Despite lower projection counts, the NCO shows the same noise level as the CO in the attenuation-corrected reconstructed images. The results from the channelized Hotelling observer (CHO) study show that the FB collimator is superior to the PH collimator in myocardial defect detection, but the NCO shows no statistically significant difference from the CO for either collimator. In conclusion, our results indicate that data acquisition using NCO makes a very small improvement in resolution over the CO for myocardial SPECT imaging. This small improvement does not make a significant difference in myocardial defect detection. However, an FB collimator provides better defect detection than a

  12. Role of environmental variability in the evolution of life history strategies.

    Science.gov (United States)

    Hastings, A; Caswell, H

    1979-09-01

    We reexamine the role of environmental variability in the evolution of life history strategies. We show that normally distributed deviations in the quality of the environment should lead to normally distributed deviations in the logarithm of year-to-year survival probabilities, which leads to interesting consequences for the evolution of annual and perennial strategies and reproductive effort. We also examine the effects of using differing criteria to determine the outcome of selection. Some predictions of previous theory are reversed, allowing distinctions between r and K theory and a theory based on variability. However, these distinctions require information about both the environment and the selection process not required by current theory.

  13. Oncolytic Immunotherapy: Conceptual Evolution, Current Strategies, and Future Perspectives

    Directory of Open Access Journals (Sweden)

    Zong Sheng Guo

    2017-05-01

    The concept of oncolytic virus (OV)-mediated cancer therapy has shifted from an operational virotherapy paradigm to an immunotherapy. OVs often induce immunogenic cell death (ICD) of cancer cells, and they may also interact directly with immune cells to prime antitumor immunity. We and others have developed a number of strategies to further stimulate antitumor immunity and to productively modulate the tumor microenvironment (TME) for potent and sustained antitumor immune cell activity. First, OVs have been engineered or combined with other ICD inducers to promote more effective T cell cross-priming and, in many cases, the breaking of functional immune tolerance. Second, OVs may be armed to express Th1-stimulatory cytokines/chemokines or costimulators to recruit and sustain potent antitumor immunity in the TME, focusing their therapeutic activity within the sites of disease. Third, combinations of OVs with immunomodulatory drugs or antibodies that recondition the TME have proven highly promising in early studies. Fourth, combinations of OVs with other immunotherapeutic regimens (such as prime-boost cancer vaccines, or CAR T cells armed with bispecific T-cell engagers) have also yielded promising preliminary findings. Finally, OVs have been combined with immune checkpoint blockade, with robust antitumor efficacy observed in pilot evaluations. Despite some expected hurdles for the rapid translation of OV-based state-of-the-art protocols, we believe that a cohort of these novel approaches will join the repertoire of standard cancer treatment options in the near future.

  14. A Targeted Enrichment Strategy for Massively Parallel Sequencing of Angiosperm Plastid Genomes

    Directory of Open Access Journals (Sweden)

    Gregory W. Stull

    2013-02-01

    Premise of the study: We explored a targeted enrichment strategy to facilitate rapid and low-cost next-generation sequencing (NGS) of numerous complete plastid genomes from across the phylogenetic breadth of angiosperms. Methods and Results: A custom RNA probe set including the complete sequences of 22 previously sequenced eudicot plastomes was designed to facilitate hybridization-based targeted enrichment of eudicot plastid genomes. Using this probe set and an Agilent SureSelect targeted enrichment kit, we conducted an enrichment experiment including 24 angiosperms (22 eudicots, two monocots), which were subsequently sequenced on a single lane of the Illumina GAIIx with single-end, 100-bp reads. This approach yielded nearly complete to complete plastid genomes with exceptionally high coverage (mean coverage: 717×), even for the two monocots. Conclusions: Our enrichment experiment was highly successful even though many aspects of the capture process employed were suboptimal. Hence, significant improvements to this methodology are feasible. With this general approach and probe set, it should be possible to sequence more than 300 essentially complete plastid genomes in a single Illumina GAIIx lane (achieving 50× mean coverage). However, given the complications of pooling numerous samples for multiplex sequencing and the limited number of barcodes (e.g., 96) available in commercial kits, we recommend 96 samples as a current practical maximum for multiplex plastome sequencing. This high-throughput approach should facilitate large-scale plastid genome sequencing at any level of phylogenetic diversity in angiosperms.

  15. Developing Marketing Higher Education Strategies Based on Students’ Satisfaction Evolution

    Directory of Open Access Journals (Sweden)

    Andreea Orîndaru

    2015-09-01

    The educational system worldwide is currently under the spotlight, showing significant signs of an ongoing crisis in its search for resources, visibility in a crowded market and significance to an ever-changing society. Within this framework, higher education institutions (HEIs) are taking significant actions to retain students as clients of their educational services. As competition in this market becomes stronger, HEIs face difficulties in keeping students, leading them to evaluate student satisfaction indicators continuously. Beyond HEI managers, researchers in higher education marketing have contributed to a comprehensive literature, yet very few have proposed a longitudinal research model for evaluating student satisfaction, despite the need for such approaches. Given this context, the current paper presents a first step towards a longitudinal study: it compares and contrasts the results of two quantitative research projects conducted in the same student community, with the same objective, but in two different years. Among the most significant results is an important decline in students' satisfaction, together with a significant increase in the number of students holding a neutral perception. This is expected to have a major impact on the university's overall performance and therefore constitutes a strong argument for identifying the underlying causes and, especially, for developing appropriate marketing strategies to tackle these issues. Based on this result and other similar research outcomes, strategic and tactical recommendations are provided in the final part of this paper.

  16. Cloud computing task scheduling strategy based on improved differential evolution algorithm

    Science.gov (United States)

    Ge, Junwei; He, Qian; Fang, Yiqiu

    2017-04-01

    In order to optimize cloud computing task scheduling, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is derived from it; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy and a dynamic mutation strategy to balance global and local search ability. Performance tests were carried out on the CloudSim simulation platform, and the experimental results show that the improved differential evolution algorithm can reduce task execution time and user cost, providing a good implementation of optimal scheduling of cloud computing tasks.
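
    The two ingredients, a scheduling fitness function and differential evolution with a mutation factor that decays over generations, can be seen in the toy scheduler below, which assigns independent tasks to virtual machines so as to minimize makespan. The encoding, parameter values and linear decay schedule are assumptions, not the exact improved algorithm of the paper.

```python
import random

random.seed(2)

TASK_LEN = [12, 7, 30, 22, 5, 17, 9, 26, 14, 11]   # task lengths (arbitrary units)
VM_SPEED = [1.0, 2.0, 4.0]                          # relative speeds of the virtual machines
NP, GENS, CR = 20, 120, 0.9                         # population size, generations, crossover rate


def decode(vec):
    """Map a real-valued vector to a VM index per task."""
    return [int(abs(x)) % len(VM_SPEED) for x in vec]


def makespan(vec):
    """Finish time of the busiest VM (the quantity to be minimized)."""
    load = [0.0] * len(VM_SPEED)
    for task, vm in zip(TASK_LEN, decode(vec)):
        load[vm] += task / VM_SPEED[vm]
    return max(load)


pop = [[random.uniform(0, 3) for _ in TASK_LEN] for _ in range(NP)]

for gen in range(GENS):
    F = 0.9 - 0.5 * gen / GENS              # dynamic mutation factor: explore early, refine late
    for i in range(NP):
        a, b, c = random.sample([p for k, p in enumerate(pop) if k != i], 3)
        mutant = [a[d] + F * (b[d] - c[d]) for d in range(len(TASK_LEN))]
        trial = [mutant[d] if random.random() < CR else pop[i][d]
                 for d in range(len(TASK_LEN))]
        if makespan(trial) <= makespan(pop[i]):     # greedy selection
            pop[i] = trial

best = min(pop, key=makespan)
print("assignment:", decode(best), "makespan:", round(makespan(best), 2))
```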

  17. Mixed-integer evolution strategies for parameter optimization and their applications to medical image analysis

    NARCIS (Netherlands)

    Li, Rui

    2009-01-01

    The target of this work is to extend canonical Evolution Strategies (ES) from the traditional real-valued parameter optimization domain to the mixed-integer parameter optimization domain. This is necessary because there exist numerous practical optimization problems in industry in which the set of

  18. Teaching evolution (and all of biology) more effectively: Strategies for engagement, critical reasoning, and confronting misconceptions.

    Science.gov (United States)

    Nelson, Craig E

    2008-08-01

    The strength of the evidence supporting evolution has increased markedly since the discovery of DNA but, paradoxically, public resistance to accepting evolution seems to have become stronger. A key dilemma is that science faculty have often continued to teach evolution ineffectively, even as the evidence that traditional ways of teaching are inferior has become stronger and stronger. Three pedagogical strategies that together can make a large difference in students' understanding and acceptance of evolution are extensive use of interactive engagement, a focus on critical thinking in science (especially on comparisons and explicit criteria) and using both of these in helping the students actively compare their initial conceptions (and publicly popular misconceptions) with more fully scientific conceptions. The conclusion that students' misconceptions must be dealt with systematically can be difficult for faculty who are teaching evolution since much of the students' resistance is framed in religious terms and one might be reluctant to address religious ideas in class. Applications to teaching evolution are illustrated with examples that address criteria and critical thinking, standard geology versus flood geology, evolutionary developmental biology versus organs of extreme perfection, and the importance of using humans as a central example. It is also helpful to bridge the false dichotomy, seen by many students, between atheistic evolution versus religious creationism. These applications are developed in detail and are intended to be sufficient to allow others to use these approaches in their teaching. Students and other faculty were quite supportive of these approaches as implemented in my classes.

  19. Parallel evolution of the glycogen synthase 1 (muscle) gene Gys1 between Old World and New World fruit bats (Order: Chiroptera).

    Science.gov (United States)

    Fang, Lu; Shen, Bin; Irwin, David M; Zhang, Shuyi

    2014-10-01

    Glycogen synthase, which catalyzes the synthesis of glycogen, is especially important for Old World (Pteropodidae) and New World (Phyllostomidae) fruit bats that ingest high-carbohydrate diets. Glycogen synthase 1, encoded by the Gys1 gene, is the glycogen synthase isozyme that functions in muscles. To determine whether Gys1 has undergone adaptive evolution in bats with carbohydrate-rich diets, in comparison to insect-eating sister bat taxa, we sequenced the coding region of the Gys1 gene from 10 species of bats, including two Old World fruit bats (Pteropodidae) and a New World fruit bat (Phyllostomidae). Our results show no evidence for positive selection in the Gys1 coding sequence on the ancestral Old World and the New World Artibeus lituratus branches. Tests for convergent evolution indicated convergence of the sequences and one parallel amino acid substitution (T395A) was detected on these branches, which was likely driven by natural selection.

  20. Parallel Evolution under Chemotherapy Pressure in 29 Breast Cancer Cell Lines Results in Dissimilar Mechanisms of Resistance

    DEFF Research Database (Denmark)

    Tegze, Balint; Szallasi, Zoltan Imre; Haltrich, Iren

    2012-01-01

    Background: Developing chemotherapy-resistant cell lines can help to identify markers of resistance. Instead of using a panel of highly heterogeneous cell lines, we assumed that a truly robust and convergent pattern of resistance can be identified in multiple parallel engineered derivatives of only

  1. Directed evolution combined with synthetic biology strategies expedite semi-rational engineering of genes and genomes.

    Science.gov (United States)

    Kang, Zhen; Zhang, Junli; Jin, Peng; Yang, Sen

    2015-01-01

    Owing to our limited understanding of the relationship between sequence and function and the interaction between intracellular pathways and regulatory systems, the rational design of enzyme-coding genes and de novo assembly of a brand-new artificial genome for a desired functionality or phenotype are difficult to achieve. As an alternative approach, directed evolution has been widely used to engineer genomes and enzyme-coding genes. In particular, significant developments toward DNA synthesis, DNA assembly (in vitro or in vivo), recombination-mediated genetic engineering, and high-throughput screening techniques in the field of synthetic biology have matured and been widely adopted, enabling rapid semi-rational genome engineering to generate variants with desired properties. In this commentary, these novel tools and their corresponding applications in the directed evolution of genomes and enzymes are discussed. Moreover, strategies for genome engineering and rapid in vitro enzyme evolution are also proposed.

  2. Application of evolution strategy algorithm for optimization of a single-layer sound absorber

    Directory of Open Access Journals (Sweden)

    Morteza Gholamipoor

    2014-12-01

    Depending on different design parameters and limitations, the optimization of sound absorbers has always been a challenge in the field of acoustic engineering. Various optimization methods have evolved over the past decades, with the innovative evolution strategy gaining more attention in recent years. Owing to their simplicity and straightforward mathematical representation, single-layer absorbers have been widely used in both engineering and industrial applications, and an optimized design for these absorbers has become vital. In the present study, an evolution strategy algorithm is used to optimize a single-layer absorber both at a particular frequency and over an arbitrary frequency band. Results of the optimization are compared against different genetic algorithm and penalty-function methods and prove favorable in both effectiveness and accuracy. Finally, a single-layer absorber is optimized over a desired range of frequencies, which is the main goal of an industrial and engineering optimization process.
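
    A minimal (mu, lambda) evolution strategy with self-adapted step sizes is sketched below. Because the record does not give the absorber model, the objective is a smooth placeholder standing in for the band-averaged absorption coefficient, and the two design variables (thickness and flow resistivity) with their ranges are assumptions.

```python
import math
import random

random.seed(8)

MU, LAM, GENS = 5, 30, 80
BOUNDS = [(0.01, 0.10), (5_000.0, 50_000.0)]   # thickness (m), flow resistivity (assumed ranges)


def absorption_proxy(x):
    """Placeholder objective standing in for the average absorption coefficient
    over a frequency band (the real absorber model is not given in the record)."""
    d, sigma = x
    return math.exp(-((d - 0.06) / 0.03) ** 2) * math.exp(-((sigma - 20_000) / 15_000) ** 2)


def clip(x):
    return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, BOUNDS)]


# Each parent carries a design vector and per-variable mutation step sizes.
parents = [([random.uniform(lo, hi) for lo, hi in BOUNDS], [0.01, 5_000.0])
           for _ in range(MU)]
tau = 1.0 / math.sqrt(2 * len(BOUNDS))          # learning rate for step-size self-adaptation

for _ in range(GENS):
    offspring = []
    for _ in range(LAM):
        x, s = random.choice(parents)
        s_new = [si * math.exp(tau * random.gauss(0, 1)) for si in s]      # mutate step sizes
        x_new = clip([xi + si * random.gauss(0, 1) for xi, si in zip(x, s_new)])
        offspring.append((x_new, s_new))
    # (mu, lambda) selection: the next parents come from the offspring only.
    offspring.sort(key=lambda ind: absorption_proxy(ind[0]), reverse=True)
    parents = offspring[:MU]

best = parents[0][0]
print("best design (thickness m, flow resistivity):", [round(v, 4) for v in best],
      "proxy absorption:", round(absorption_proxy(best), 3))
```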

  3. APPLICATION OF RESTART COVARIANCE MATRIX ADAPTATION EVOLUTION STRATEGY (RCMA-ES) TO GENERATION EXPANSION PLANNING PROBLEM

    Directory of Open Access Journals (Sweden)

    K. Karthikeyan

    2012-10-01

    This paper describes the application of an evolutionary algorithm, the Restart Covariance Matrix Adaptation Evolution Strategy (RCMA-ES), to the Generation Expansion Planning (GEP) problem. RCMA-ES is a class of continuous Evolutionary Algorithm (EA) derived from the concept of self-adaptation in evolution strategies, which adapts the covariance matrix of a multivariate normal search distribution. The original GEP problem is modified by incorporating a Virtual Mapping Procedure (VMP). The GEP problem for synthetic test systems over 6-year, 14-year and 24-year planning horizons, with five types of candidate units, is considered. Two different constraint-handling methods are incorporated and the impact of each method is compared. In addition, comparison and validation have also been made against a dynamic programming method.
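
    The restart ingredient can be separated from CMA-ES itself: run an inner optimizer, detect stagnation, and restart from a fresh random point, typically with a larger population. The wrapper below applies that idea to a deliberately simple random-search inner loop on a multimodal test function; it sketches only the restart logic, not covariance matrix adaptation or the GEP model.

```python
import math
import random

random.seed(13)


def rastrigin(x):
    """Multimodal test function; global minimum 0 at the origin."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)


def inner_search(budget, pop_size, dim=2):
    """Deliberately simple (1 + pop_size) random search; stands in for one optimizer run."""
    best = [random.uniform(-5, 5) for _ in range(dim)]
    best_f, stall, evals = rastrigin(best), 0, 0
    while evals < budget and stall < 20:          # stop on exhausted budget or stagnation
        cands = [[b + random.gauss(0, 0.5) for b in best] for _ in range(pop_size)]
        evals += pop_size
        cand = min(cands, key=rastrigin)
        if rastrigin(cand) < best_f - 1e-9:
            best, best_f, stall = cand, rastrigin(cand), 0
        else:
            stall += 1
    return best, best_f, evals


def restart_strategy(total_budget=20_000):
    """Restart the inner search with a doubled population each time it stagnates."""
    pop_size, spent = 10, 0
    overall, overall_f = None, float("inf")
    while spent < total_budget:
        best, best_f, used = inner_search(min(2_000, total_budget - spent), pop_size)
        spent += used
        if best_f < overall_f:
            overall, overall_f = best, best_f
        pop_size *= 2                             # IPOP-style population growth between restarts
    return overall, overall_f


x, f = restart_strategy()
print("best point:", [round(v, 3) for v in x], "objective:", round(f, 4))
```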

  4. Evolution strategies and multi-objective optimization of permanent magnet motor

    DEFF Research Database (Denmark)

    Andersen, Søren Bøgh; Santos, Ilmar

    2012-01-01

    When designing a permanent magnet motor, several geometry and material parameters must be defined. This is not an easy task, as material properties and magnetic fields are highly non-linear and the design of a motor is therefore often an iterative process. From an engineering point of view, we...... of evolution strategies (ES) to effectively design and optimize parameters of permanent magnet motors. Single- as well as multi-objective optimization procedures are carried out. A modified way of creating the strategy parameters for the ES algorithm is also proposed and has, together with the standard ES...

  5. Evolution of learning strategies in temporally and spatially variable environments: a review of theory.

    Science.gov (United States)

    Aoki, Kenichi; Feldman, Marcus W

    2014-02-01

    The theoretical literature from 1985 to the present on the evolution of learning strategies in variable environments is reviewed, with the focus on deterministic dynamical models that are amenable to local stability analysis, and on deterministic models yielding evolutionarily stable strategies. Individual learning, unbiased and biased social learning, mixed learning, and learning schedules are considered. A rapidly changing environment or frequent migration in a spatially heterogeneous environment favors individual learning over unbiased social learning. However, results are not so straightforward in the context of learning schedules or when biases in social learning are introduced. The three major methods of modeling temporal environmental change – coevolutionary, two-timescale, and information decay – are compared and shown to sometimes yield contradictory results. The so-called Rogers' paradox is inherent in the two-timescale method as originally applied to the evolution of pure strategies, but is often eliminated when the other methods are used. Moreover, Rogers' paradox is not observed for the mixed learning strategies and learning schedules that we review. We believe that further theoretical work is necessary on learning schedules and biased social learning, based on models that are logically consistent and empirically pertinent. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Critical dynamics in the evolution of stochastic strategies for the iterated prisoner's dilemma.

    Directory of Open Access Journals (Sweden)

    Dimitris Iliopoulos

    2010-10-01

    Full Text Available The observed cooperation on the level of genes, cells, tissues, and individuals has been the object of intense study by evolutionary biologists, mainly because cooperation often flourishes in biological systems in apparent contradiction to the selfish goal of survival inherent in Darwinian evolution. In order to resolve this paradox, evolutionary game theory has focused on the Prisoner's Dilemma (PD), which incorporates the essence of this conflict. Here, we encode strategies for the iterated Prisoner's Dilemma (IPD) in terms of conditional probabilities that represent the response of decision pathways given previous plays. We find that if these stochastic strategies are encoded as genes that undergo Darwinian evolution, the environmental conditions that the strategies are adapting to determine the fixed point of the evolutionary trajectory, which could be either cooperation or defection. A transition between cooperative and defective attractors occurs as a function of different parameters such as mutation rate, replacement rate, and memory, all of which affect a player's ability to predict an opponent's behavior. These results imply that in populations of players that can use previous decisions to plan future ones, cooperation depends critically on whether the players can rely on facing the same strategies that they have adapted to. Defection, on the other hand, is the optimal adaptive response in environments that change so quickly that the information gathered from previous plays cannot usefully be integrated for a response.
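    The abstract above describes IPD strategies encoded as conditional probabilities over previous plays; the sketch below is an illustrative memory-one version of that encoding (the payoff values, genomes, and round count are standard textbook choices, not the paper's settings).

```python
# Sketch: a stochastic memory-one IPD strategy, i.e. four conditional
# cooperation probabilities, one per outcome of the previous round.
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(genome_a, genome_b, rounds=200):
    """genome = dict mapping previous (my_move, their_move) -> P(cooperate)."""
    move_a, move_b, score_a, score_b = "C", "C", 0, 0
    for _ in range(rounds):
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        next_a = "C" if random.random() < genome_a[(move_a, move_b)] else "D"
        next_b = "C" if random.random() < genome_b[(move_b, move_a)] else "D"
        move_a, move_b = next_a, next_b
    return score_a, score_b

# example genomes: a noisy tit-for-tat-like player versus an almost-always-defector
tit_for_tat = {("C", "C"): 0.95, ("C", "D"): 0.05, ("D", "C"): 0.95, ("D", "D"): 0.05}
defector = {k: 0.05 for k in tit_for_tat}
print(play(tit_for_tat, defector))
```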

  7. Evolution of learning strategies in temporally and spatially variable environments: A review of theory

    Science.gov (United States)

    Aoki, Kenichi; Feldman, Marcus W.

    2013-01-01

    The theoretical literature from 1985 to the present on the evolution of learning strategies in variable environments is reviewed, with the focus on deterministic dynamical models that are amenable to local stability analysis, and on deterministic models yielding evolutionarily stable strategies. Individual learning, unbiased and biased social learning, mixed learning, and learning schedules are considered. A rapidly changing environment or frequent migration in a spatially heterogeneous environment favors individual learning over unbiased social learning. However, results are not so straightforward in the context of learning schedules or when biases in social learning are introduced. The three major methods of modeling temporal environmental change – coevolutionary, two-timescale, and information decay – are compared and shown to sometimes yield contradictory results. The so-called Rogers’ paradox is inherent in the two-timescale method as originally applied to the evolution of pure strategies, but is often eliminated when the other methods are used. Moreover, Rogers’ paradox is not observed for the mixed learning strategies and learning schedules that we review. We believe that further theoretical work is necessary on learning schedules and biased social learning, based on models that are logically consistent and empirically pertinent. PMID:24211681

  8. Strategy evolution driven by switching probabilities in structured multi-agent systems

    Science.gov (United States)

    Zhang, Jianlei; Chen, Zengqiang; Li, Zhiqi

    2017-10-01

    The evolutionary mechanism driving the commonly seen cooperation among unrelated individuals is puzzling. Related models for evolutionary games on graphs traditionally assume that players imitate their successful neighbours with higher benefits. Notably, an implicit assumption here is that players are always able to acquire the required pay-off information. To relax this restrictive assumption, a contact-based model has been proposed, where switching probabilities between strategies drive the strategy evolution. However, the explicit and quantified relation between a player's switching probability for her strategies and the number of her neighbours remains unknown. This is especially a key point in heterogeneously structured systems, where players may differ in the numbers of their neighbours. Focusing on this, here we present an augmented model by introducing an attenuation coefficient and evaluate its influence on the evolution dynamics. Results show that the individual influence on others is negatively correlated with the contact numbers specified by the network topologies. Results further provide the conditions under which the coexisting strategies can be calculated analytically.

  9. The Evolution of Diapsid Reproductive Strategy with Inferences about Extinct Taxa.

    Directory of Open Access Journals (Sweden)

    Jason R Moore

    Full Text Available Diapsids show an extremely wide range of reproductive strategies. Offspring may receive no parental care, care from only one sex, care from both parents, or care under more complex regimes. Young may vary from independent, super-precocial hatchlings to altricial neonates needing much care before leaving the nest. Parents can invest heavily in a few young, or less so in a larger number. Here we examine the evolution of these traits across a composite phylogeny spanning the extant diapsids and including the limited number of extinct taxa for which reproductive strategies can be well constrained. Generalized estimating equation (GEE)-based phylogenetic comparative methods demonstrate the influences of body mass, parental care strategy and hatchling maturity on clutch volume across the diapsids. The influence of polygamous reproduction is not important despite a large sample size. Applying the results of these models to the dinosaurs supports the hypothesis of paternal care (male only) in derived non-avian theropods, previously suggested based on simpler analyses. These data also suggest that sauropodomorphs did not care for their young. The evolution of parental care occurs in an almost linear series of transitions. Paternal care rarely gives rise to other care strategies. Where hatchling condition changes, diapsids show an almost unidirectional tendency of evolution towards increased altriciality. Transitions to social monogamy from the ancestral state in diapsids, where both sexes are polygamous, are common. In contrast, once evolved, polygyny and polyandry are very evolutionarily stable. Polygyny and maternal care correlate, as do polyandry and paternal care. Ancestral-character estimation (ACE) of these care strategies with the character transition likelihoods estimated from the original data gives good confidence at most important nodes. These analyses suggest that the basalmost diapsids had no parental care. Crocodilians independently evolved

  10. Co-Evolution of Opinion and Strategy in Persuasion Dynamics: An Evolutionary Game Theoretical Approach

    Science.gov (United States)

    Ding, Fei; Liu, Yun; Li, Yong

    In this paper, a new model of opinion formation within the framework of evolutionary game theory is presented. The model simulates the strategic situations that arise when people discuss opinions. Heterogeneous agents adjust their behaviors to the environment during discussions, and their interaction strategies evolve together with their opinions. In the proposed game, we take into account the payoff discount for joining a discussion, and the possibility that people might drop out of an unpromising game. Analytical and simulation results show that the evolution of opinion and strategy always tends to converge, with the utility threshold, memory length, and decision uncertainty parameters influencing the convergence time. The model displays different dynamical regimes depending on the rule applied when people are at a loss for a strategy.

  11. Evolution of strategies and competition in the international airline industry: a practical analysis using Porter's competitive forces model

    OpenAIRE

    Zannoni, Niccolò

    2013-01-01

    This master's thesis describes the evolution of competition and strategies in the international airline industry. It studies the industry before and after deregulation, using the competitive forces model.

  12. Vortex particle method in parallel computations on graphical processing units used in study of the evolution of vortex structures

    International Nuclear Information System (INIS)

    Kudela, Henryk; Kosior, Andrzej

    2014-01-01

    Understanding the dynamics and the mutual interaction among various types of vortical motions is a key ingredient in clarifying and controlling fluid motion. In the paper several different cases related to vortex tube interactions are presented. Due to problems with very long computation times on the single processor, the vortex-in-cell (VIC) method is implemented on the multicore architecture of a graphics processing unit (GPU). Numerical results of leapfrogging of two vortex rings for inviscid and viscous fluid are presented as test cases for the new multi-GPU implementation of the VIC method. Influence of the Reynolds number on the reconnection process is shown for two examples: antiparallel vortex tubes and orthogonally offset vortex tubes. Our aim is to show the great potential of the VIC method for solutions of three-dimensional flow problems and that the VIC method is very well suited for parallel computation. (paper)

  13. Parallel Evolution of High-Level Aminoglycoside Resistance in Escherichia coli Under Low and High Mutation Supply Rates

    Directory of Open Access Journals (Sweden)

    Claudia Ibacache-Quiroga

    2018-03-01

    Full Text Available Antibiotic resistance is a major concern in public health worldwide, thus there is much interest in characterizing the mutational pathways through which susceptible bacteria evolve resistance. Here we use experimental evolution to explore the mutational pathways toward aminoglycoside resistance, using gentamicin as a model, under low and high mutation supply rates. Our results show that both normo- and hypermutable strains of Escherichia coli are able to develop resistance to drug dosages > 1,000-fold higher than the minimal inhibitory concentration for their ancestors. Interestingly, such a level of resistance was often associated with changes in susceptibility to other antibiotics, most prominently with increased resistance to fosfomycin. Whole-genome sequencing revealed that all resistant derivatives presented diverse mutations in five common genetic elements: fhuA, fusA and the atpIBEFHAGDC, cyoABCDE, and potABCD operons. Despite the large number of mutations acquired, hypermutable strains apparently did not pay a fitness cost. In contrast to recent studies, we found that the mutation supply rate mainly affected the speed (tempo) but not the pattern (mode) of evolution: both backgrounds acquired the mutations in the same order, although the hypermutator strain did so faster. This observation is compatible with the adaptive landscape for high-level gentamicin resistance being relatively smooth, with few local maxima; this might be a common feature among antibiotics for which resistance involves multiple loci.

  14. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  15. Availability of public goods shapes the evolution of competing metabolic strategies.

    Science.gov (United States)

    Bachmann, Herwig; Fischlechner, Martin; Rabbers, Iraes; Barfa, Nakul; Branco dos Santos, Filipe; Molenaar, Douwe; Teusink, Bas

    2013-08-27

    Tradeoffs provide a rationale for the outcome of natural selection. A prominent example is the negative correlation between the growth rate and the biomass yield in unicellular organisms. This tradeoff leads to a dilemma, where the optimization of growth rate is advantageous for an individual, whereas the optimization of the biomass yield would be advantageous for a population. High-rate strategies are observed in a broad variety of organisms such as Escherichia coli, yeast, and cancer cells. Growth in suspension cultures favors fast-growing organisms, whereas spatial structure is of importance for the evolution of high-yield strategies. Despite this realization, experimental methods to directly select for increased yield are lacking. We here show that the serial propagation of a microbial population in a water-in-oil emulsion allows selection of strains with increased biomass yield. The propagation in emulsion creates a spatially structured environment where the growth-limiting substrate is privatized for populations founded by individual cells. Experimental evolution of several isogenic Lactococcus lactis strains demonstrated the existence of a tradeoff between growth rate and biomass yield as an apparent Pareto front. The underlying mutations altered glucose transport and led to major shifts between homofermentative and heterofermentative metabolism, accounting for the changes in metabolic efficiency. The results demonstrated the impact of privatizing a public good on the evolutionary outcome between competing metabolic strategies. The presented approach allows the investigation of fundamental questions in biology such as the evolution of cooperation, cell-cell interactions, and the relationships between environmental and metabolic constraints.

  16. Evolution of learned strategy choice in a frequency-dependent game.

    Science.gov (United States)

    Katsnelson, Edith; Motro, Uzi; Feldman, Marcus W; Lotem, Arnon

    2012-03-22

    In frequency-dependent games, strategy choice may be innate or learned. While experimental evidence in the producer-scrounger game suggests that learned strategy choice may be common, a recent theoretical analysis demonstrated that learning by only some individuals prevents learning from evolving in others. Here, however, we model learning explicitly, and demonstrate that learning can easily evolve in the whole population. We used an agent-based evolutionary simulation of the producer-scrounger game to test the success of two general learning rules for strategy choice. We found that learning was eventually acquired by all individuals under a sufficient degree of environmental fluctuation, and when players were phenotypically asymmetric. In the absence of sufficient environmental change or phenotypic asymmetries, the correct target for learning seems to be confounded by game dynamics, and innate strategy choice is likely to be fixed in the population. The results demonstrate that under biologically plausible conditions, learning can easily evolve in the whole population and that phenotypic asymmetry is important for the evolution of learned strategy choice, especially in a stable or mildly changing environment.

  17. Implementation of Evolution Strategies (ES) Algorithm to Optimization Lovebird Feed Composition

    Directory of Open Access Journals (Sweden)

    Agung Mustika Rizki

    2017-05-01

    Full Text Available Lovebirds are currently popular, especially among bird lovers, and some people have begun to cultivate these birds. The cultivation process must consider the feed composition needed to produce a quality bird. Determining the feed is not easy because it must balance cost against the Lovebird's vitamin requirements. This problem can be solved with the Evolution Strategies (ES) algorithm. Based on the test results, an optimal fitness value of 0.3125 was obtained using a population size of 100, and an optimal fitness value of 0.3267 was obtained at generation 1400.

  18. Biodiversity Meets Neuroscience: From the Sequencing Ship (Ship-Seq) to Deciphering Parallel Evolution of Neural Systems in Omic’s Era

    Science.gov (United States)

    Moroz, Leonid L.

    2015-01-01

    The origins of neural systems and centralized brains are one of the major transitions in evolution. These events might have occurred more than once over 570–600 million years. The convergent evolution of neural circuits is evident from a diversity of unique adaptive strategies implemented by ctenophores, cnidarians, acoels, molluscs, and basal deuterostomes. However, further integration of biodiversity research and neuroscience is required to decipher critical events leading to development of complex integrative and cognitive functions. Here, we outline reference species and interdisciplinary approaches in reconstructing the evolution of nervous systems. In the "omic" era, it is now possible to establish fully functional genomics laboratories aboard oceanic ships and perform sequencing and real-time analyses of data at any oceanic location (named here as Ship-Seq). In doing so, fragile, rare, cryptic, and planktonic organisms, or even entire marine ecosystems, are becoming accessible directly to experimental and physiological analyses by modern analytical tools. Thus, we are now in a position to take full advantage of the countless "experiments" Nature has performed for us in the course of 3.5 billion years of biological evolution. Together with progress in computational and comparative genomics, evolutionary neuroscience, proteomics and developmental biology, a new and surprising picture is emerging that reveals the many ways in which nervous systems evolved. As a result, this symposium provides a unique opportunity to revisit old questions about the origins of biological complexity. PMID:26163680

  19. Converging evolution leads to near maximal junction diversity through parallel mechanisms in B and T cell receptors

    Science.gov (United States)

    Benichou, Jennifer I. C.; van Heijst, Jeroen W. J.; Glanville, Jacob; Louzoun, Yoram

    2017-08-01

    T and B cell receptor (TCR and BCR) complementarity determining region 3 (CDR3) genetic diversity is produced through multiple diversification and selection stages. Potential holes in the CDR3 repertoire have been argued to be linked to immunodeficiencies and diseases. In contrast with BCRs, TCRs have practically no Dβ germline genetic diversity, and the question emerges as to whether they can produce a diverse CDR3 repertoire. In order to address the genetic diversity of the adaptive immune system, appropriate quantitative measures of diversity and large-scale sequencing are required. Such a diversity method should incorporate the complex diversification mechanisms of the adaptive immune response and the BCR and TCR loci structure. We combined large-scale sequencing and diversity measures to show that TCRs have a near maximal CDR3 genetic diversity. Specifically, TCRs have a larger junctional and V germline diversity, which starts more 5′ in Vβ than in BCRs. Selection decreases the TCR repertoire diversity, but does not affect the BCR repertoire. As a result, the TCR repertoire is as diverse as the BCR repertoire, with CDR3 length biased toward short TCRs and long BCRs. These differences suggest parallel converging evolutionary tracks to reach the diversity required to avoid holes in the CDR3 repertoire.

  20. Laboratory Evolution to Alternating Substrate Environments Yields Distinct Phenotypic and Genetic Adaptive Strategies

    DEFF Research Database (Denmark)

    Sandberg, Troy E.; Lloyd, Colton J.; Palsson, Bernhard O.

    2017-01-01

    ...... maintain simple, static culturing environments so as to reduce selection pressure complexity. In this study, we investigated the adaptive strategies underlying evolution to fluctuating environments by evolving Escherichia coli to conditions of frequently switching growth substrate. Characterization of evolved strains via a number of different data types revealed the various genetic and phenotypic changes implemented in pursuit of growth optimality and how these differed across the different growth substrates and switching protocols. This work not only helps to establish general principles of adaptation...... conditions and different adaptation strategies depending on the substrates being switched between; in some environments, a persistent "generalist" strain developed, while in another, two "specialist" subpopulations arose that alternated dominance. Diauxic lag phenotype varied across the generalists......

  1. Intrasexual competition facilitates the evolution of alternative mating strategies in a colour polymorphic fish.

    Science.gov (United States)

    Hurtado-Gonzales, Jorge L; Uy, J Albert C

    2010-12-23

    Intense competition for access to females can lead to males exploiting different components of sexual selection, and result in the evolution of alternative mating strategies (AMSs). Males of Poecilia parae, a colour polymorphic fish, exhibit five distinct phenotypes: drab-coloured (immaculata), striped (parae), structural-coloured (blue) and carotenoid-based red and yellow morphs. Previous work indicates that immaculata males employ a sneaker strategy, whereas the red and yellow morphs exploit female preferences for carotenoid-based colours. Mating strategies favouring the maintenance of the other morphs remain to be determined. Here, we report the role of agonistic male-male interactions in influencing female mating preferences and male mating success, and in facilitating the evolution of AMSs. Our study reveals variation in aggressiveness among P. parae morphs during indirect and direct interactions with sexually receptive females. Two morphs, parae and yellow, use aggression to enhance their mating success (i.e., number of copulations) by 1) directly monopolizing access to females, and 2) modifying female preferences after winning agonistic encounters. Conversely, we found that the success of the drab-coloured immaculata morph, which specializes in a sneak copulation strategy, relies on its ability to circumvent both male aggression and female choice when facing all but yellow males. Strong directional selection is expected to deplete genetic variation, yet many species show striking genetically-based polymorphisms. Most studies evoke frequency dependent selection to explain the persistence of such variation. Consistent with a growing body of evidence, our findings suggest that a complex form of balancing selection may alternatively explain the evolution and maintenance of AMSs in a colour polymorphic fish. In particular, this study demonstrates that intrasexual competition results in phenotypically distinct males exhibiting clear differences in their levels of

  2. Intrasexual competition facilitates the evolution of alternative mating strategies in a colour polymorphic fish

    Directory of Open Access Journals (Sweden)

    Uy J Albert C

    2010-12-01

    Full Text Available Abstract Background Intense competition for access to females can lead to males exploiting different components of sexual selection, and result in the evolution of alternative mating strategies (AMSs). Males of Poecilia parae, a colour polymorphic fish, exhibit five distinct phenotypes: drab-coloured (immaculata), striped (parae), structural-coloured (blue) and carotenoid-based red and yellow morphs. Previous work indicates that immaculata males employ a sneaker strategy, whereas the red and yellow morphs exploit female preferences for carotenoid-based colours. Mating strategies favouring the maintenance of the other morphs remain to be determined. Here, we report the role of agonistic male-male interactions in influencing female mating preferences and male mating success, and in facilitating the evolution of AMSs. Results Our study reveals variation in aggressiveness among P. parae morphs during indirect and direct interactions with sexually receptive females. Two morphs, parae and yellow, use aggression to enhance their mating success (i.e., number of copulations) by 1) directly monopolizing access to females, and 2) modifying female preferences after winning agonistic encounters. Conversely, we found that the success of the drab-coloured immaculata morph, which specializes in a sneak copulation strategy, relies on its ability to circumvent both male aggression and female choice when facing all but yellow males. Conclusions Strong directional selection is expected to deplete genetic variation, yet many species show striking genetically-based polymorphisms. Most studies evoke frequency dependent selection to explain the persistence of such variation. Consistent with a growing body of evidence, our findings suggest that a complex form of balancing selection may alternatively explain the evolution and maintenance of AMSs in a colour polymorphic fish. In particular, this study demonstrates that intrasexual competition results in phenotypically distinct

  3. Effect of migration based on strategy and cost on the evolution of cooperation

    International Nuclear Information System (INIS)

    Li, Yan; Ye, Hang

    2015-01-01

    Highlights:
    • Propose a migration rule based on strategy and cost in the Prisoner's Dilemma Game.
    • The level of cooperation without mutation is higher than that with mutation.
    • Increased costs have no effect on the level of cooperation without mutation.
    • The level of cooperation decreases with increasing cost when mutation is present.
    • An optimal density value ρ resulting in the maximum level of cooperation exists.
    Abstract: Humans consider not only their own ability but also the environment around them during the process of migration. Based on this fact, we introduce migration based on strategy and cost into the Spatial Prisoner's Dilemma Game on a two-dimensional grid. Under this migration rule, agents cannot move when all of their neighbors are cooperators; otherwise, agents move with a probability related to payoff and cost. The result obtained by computer simulation shows that the moving mechanism based on strategy and cost improves the level of cooperation in a wide parameter space. This occurs because movement based on strategy effectively preserves cooperative clusters and movement based on cost effectively regulates the rate of movement. Both types of movement provide a favorable guarantee for the evolution of stable cooperation under the mutation rate q = 0.0. In addition, we discuss the effectiveness of the migration mechanism in the evolution of cooperation under the mutation rate q = 0.001. The result indicates that a higher level of cooperation is obtained at a lower migration cost, whereas cooperation is suppressed at a higher migration cost. Our work may provide an effective method for understanding the emergence of cooperation in our society.
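    The abstract above only states that movement is blocked when all neighbours cooperate and otherwise happens "with a probability related to payoff and cost"; the exact functional form is not given. The sketch below is one illustrative reading of that rule, with an invented dissatisfaction-times-cost-damping probability, not the authors' actual model.

```python
# Sketch: an illustrative migration rule in the spirit of the abstract above.
# The specific probability function is assumed, not taken from the paper.
import math
import random

def migration_probability(payoff, max_payoff, cost, beta=1.0):
    """Probability of moving: rises as relative payoff falls, damped by cost."""
    dissatisfaction = 1.0 - payoff / max_payoff        # 0 means fully satisfied
    return dissatisfaction * math.exp(-beta * cost)    # higher cost, less movement

def wants_to_move(neighbour_strategies, payoff, max_payoff, cost):
    if all(s == "C" for s in neighbour_strategies):    # keep cooperative clusters intact
        return False
    return random.random() < migration_probability(payoff, max_payoff, cost)

# toy check: a poorly paid agent surrounded mostly by defectors, at two costs
print(wants_to_move(["D", "D", "C", "D"], payoff=1.0, max_payoff=8.0, cost=0.1))
print(migration_probability(1.0, 8.0, cost=0.1), migration_probability(1.0, 8.0, cost=2.0))
```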

  4. [The evolution of nursing shortage and strategies to face it: a longitudinal study in 11 hospitals].

    Science.gov (United States)

    Stringhetta, Francesca; Dal Ponte, Adriana; Palese, Alvisa

    2012-01-01

    To describe the perception of the evolution of the nursing shortage from 2000 to 2009 according to Nursing Coordinators, and the strategies used to face it. Nursing coordinators of 11 hospitals or districts of the Friuli Venezia Giulia, Trentino Alto Adige and Veneto regions were interviewed in 2000, 2004 and 2009 to collect data and assess their perception of the nurses' shortage. At the first interview the mean gap between planned staff and staff in service was -5.4%; in 2004 it was -9.4% and in 2009 -3.3%. The shortage, which once had a seasonal trend, is now constant and felt in all the wards. In 2000 and 2004, on average 5 strategies were implemented to face the shortage; in 2009, 7. No systematic strategies have been used, with the exception of the unification of wards, mainly during summer to allow staff to go on holidays. According to Nursing Coordinators the effects of the shortage are already observable (although not quantified) on patients and nurses. The nurses' shortage has been one of the challenges of the last 10 years. Its causes have changed but not the strategies implemented.

  5. A novel role for Mc1r in the parallel evolution of depigmentation in independent populations of the cavefish Astyanax mexicanus.

    Directory of Open Access Journals (Sweden)

    Joshua B Gross

    2009-01-01

    Full Text Available The evolution of degenerate characteristics remains a poorly understood phenomenon. Only recently has the identification of mutations underlying regressive phenotypes become accessible through the use of genetic analyses. Focusing on the Mexican cave tetra Astyanax mexicanus, we describe, here, an analysis of the brown mutation, which was first described in the literature nearly 40 years ago. This phenotype causes reduced melanin content, decreased melanophore number, and brownish eyes in convergent cave forms of A. mexicanus. Crosses demonstrate non-complementation of the brown phenotype in F(2) individuals derived from two independent cave populations: Pachón and the linked Yerbaniz and Japonés caves, indicating the same locus is responsible for reduced pigmentation in these fish. While the brown mutant phenotype arose prior to the fixation of albinism in Pachón cave individuals, it is unclear whether the brown mutation arose before or after the fixation of albinism in the linked Yerbaniz/Japonés caves. Using a QTL approach combined with sequence and functional analyses, we have discovered that two distinct genetic alterations in the coding sequence of the gene Mc1r cause reduced pigmentation associated with the brown mutant phenotype in these caves. Our analysis identifies a novel role for Mc1r in the evolution of degenerative phenotypes in blind Mexican cavefish. Further, the brown phenotype has arisen independently in geographically separate caves, mediated through different mutations of the same gene. This example of parallelism indicates that certain genes are frequent targets of mutation in the repeated evolution of regressive phenotypes in cave-adapted species.

  6. Parallel and costly changes to cellular immunity underlie the evolution of parasitoid resistance in three Drosophila species.

    Directory of Open Access Journals (Sweden)

    John E McGonigle

    2017-10-01

    Full Text Available A priority for biomedical research is to understand the causes of variation in susceptibility to infection. To investigate genetic variation in a model system, we used flies collected from single populations of three different species of Drosophila and artificially selected them for resistance to the parasitoid wasp Leptopilina boulardi, and found that survival rates increased 3 to 30 fold within 6 generations. Resistance in all three species involves a large increase in the number of the circulating hemocytes that kill parasitoids. However, the different species achieve this in different ways, with D. melanogaster moving sessile hemocytes into circulation while the other species simply produce more cells. Therefore, the convergent evolution of the immune phenotype has different developmental bases. These changes are costly, as resistant populations of all three species had greatly reduced larval survival. In all three species resistance is only costly when food is in short supply, and resistance was rapidly lost from D. melanogaster populations when food is restricted. Furthermore, evolving resistance to L. boulardi resulted in cross-resistance against other parasitoids. Therefore, whether a population evolves resistance will depend on ecological conditions including food availability and the presence of different parasite species.

  7. Maternal Lipid Provisioning Mirrors Evolution of Reproductive Strategies in Direct-Developing Whelks.

    Science.gov (United States)

    Carrasco, Sergio A; Phillips, Nicole E; Sewell, Mary A

    2016-06-01

    The energetic input that offspring receive from their mothers is a well-studied maternal effect that can influence the evolution of life histories. Using the offspring of three sympatric whelks: Cominella virgata (one embryo per capsule); Cominella maculosa (multiple embryos per capsule); and Haustrum scobina (multiple embryos per capsule and nurse-embryo consumption), we examined how contrasting reproductive strategies mediate inter- and intraspecific differences in hatchling provisioning. Total lipid content (as measured in μg hatchling⁻¹ ± SE) was unrelated to size among the 3 species; the hatchlings of H. scobina were the smallest but had the highest lipid content (33.8 ± 8.1 μg hatchling⁻¹). In offspring of C. maculosa, lipid content was 6.6 ± 0.4 μg hatchling⁻¹, and in offspring of C. virgata, it was 21.7 ± 3.2 μg hatchling⁻¹. The multi-encapsulated hatchlings of C. maculosa and H. scobina were the only ones that contained the energetic lipids wax ester (WE) and methyl ester (ME). However, the overall composition of energetic lipid between hatchlings of the two Cominella species reflected strong affinities of taxonomy, suggesting a phylogenetic evolution of the non-adelphophagic development strategy. Inter- and intracapsular variability in sibling provisioning was highest in H. scobina, a finding that implies less control of allocation to individual hatchlings in this adelphophagic developer. We suggest that interspecific variability of lipids offers a useful approach to understanding the evolution of maternal provisioning in direct-developing species. © 2016 Marine Biological Laboratory.

  8. Evolution of the heteroharmonic strategy for target-range computation in the echolocation of Mormoopidae.

    Directory of Open Access Journals (Sweden)

    Emanuel C Mora

    2013-06-01

    Full Text Available Echolocating bats use the time elapsed from biosonar pulse emission to the arrival of the echo (defined as echo-delay) to assess target-distance. Target-distance is represented in the brain by delay-tuned neurons that are classified as either heteroharmonic or homoharmonic. Heteroharmonic neurons respond more strongly to pulse-echo pairs in which the timing of the pulse is given by the fundamental biosonar harmonic while the timing of echoes is provided by one (or several) of the higher order harmonics. On the other hand, homoharmonic neurons are tuned to the echo delay between similar harmonics in the emitted pulse and echo. It is generally accepted that heteroharmonic computations are advantageous over homoharmonic computations; i.e. heteroharmonic neurons receive information from call and echo in different frequency-bands which helps to avoid jamming between pulse and echo signals. Heteroharmonic neurons have been found in two species of the family Mormoopidae (Pteronotus parnellii and Pteronotus quadridens) and in Rhinolophus rouxi. Recently, it was proposed that heteroharmonic target-range computations are a primitive feature of the genus Pteronotus that was preserved in the evolution of the genus. Here we review recent findings on the evolution of echolocation in Mormoopidae, and try to link those findings to the evolution of the heteroharmonic computation strategy. We stress the hypothesis that the ability to perform heteroharmonic computations evolved separately from the ability of using long constant-frequency echolocation calls, high duty cycle echolocation and Doppler Shift Compensation. Also, we present the idea that heteroharmonic computations might have been of advantage for categorizing prey size, hunting eared insects and living in large conspecific colonies. We make five testable predictions that might help future investigations to clarify the evolution of the heteroharmonic echolocation in Mormoopidae and other families.

  9. High performance computing of density matrix renormalization group method for 2-dimensional model. Parallelization strategy toward peta computing

    International Nuclear Information System (INIS)

    Yamada, Susumu; Igarashi, Ryo; Machida, Masahiko; Imamura, Toshiyuki; Okumura, Masahiko; Onishi, Hiroaki

    2010-01-01

    We parallelize the density matrix renormalization group (DMRG) method, which is a ground-state solver for one-dimensional quantum lattice systems. The parallelization allows us to extend the applicable range of the DMRG to n-leg ladders, i.e., quasi-two-dimensional cases. Such an extension is expected to bring about several breakthroughs in, e.g., quantum physics, chemistry, and nano-engineering. However, the straightforward parallelization requires all-to-all communications between all processes, which are unsuitable for multi-core systems, the mainstream of current parallel computers. Therefore, we optimize the all-to-all communications in the following two steps. The first is the elimination of communications between all processes by rearranging the data distribution while keeping the amount of communicated data unchanged. The second is the avoidance of communication conflicts by rescheduling the calculation and the communication. We evaluate the performance of the DMRG method on multi-core supercomputers and confirm that our two-step tuning is quite effective. (author)
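    The abstract does not spell out how the communication is rescheduled, so the following is only a generic illustration of the second idea (avoiding conflicts): instead of one all-to-all step, the processes exchange data in P-1 rounds in which every process has exactly one partner, generated by round-robin tournament scheduling. This is a common pattern, not the authors' actual implementation.

```python
# Sketch: a conflict-free pairwise exchange schedule (round-robin "circle"
# method). Each of the P-1 rounds pairs every process with exactly one partner,
# so no process has to serve two communications at once.

def pairwise_schedule(num_procs):
    """Return a list of rounds; each round is a list of (rank_a, rank_b) pairs."""
    assert num_procs % 2 == 0, "assume an even number of processes for simplicity"
    ranks = list(range(num_procs))
    rounds = []
    for _ in range(num_procs - 1):
        pairs = [(ranks[i], ranks[num_procs - 1 - i]) for i in range(num_procs // 2)]
        rounds.append(pairs)
        # keep the first rank fixed and rotate the rest by one position
        ranks = [ranks[0]] + [ranks[-1]] + ranks[1:-1]
    return rounds

if __name__ == "__main__":
    for r, pairs in enumerate(pairwise_schedule(8)):
        print(f"round {r}: {pairs}")
```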

  10. Parallel Representation of Value-Based and Finite State-Based Strategies in the Ventral and Dorsal Striatum.

    Directory of Open Access Journals (Sweden)

    Makoto Ito

    2015-11-01

    Full Text Available Previous theoretical studies of animal and human behavioral learning have focused on the dichotomy of the value-based strategy using action value functions to predict rewards and the model-based strategy using internal models to predict environmental states. However, animals and humans often take simple procedural behaviors, such as the "win-stay, lose-switch" strategy without explicit prediction of rewards or states. Here we consider another strategy, the finite state-based strategy, in which a subject selects an action depending on its discrete internal state and updates the state depending on the action chosen and the reward outcome. By analyzing choice behavior of rats in a free-choice task, we found that the finite state-based strategy fitted their behavioral choices more accurately than value-based and model-based strategies did. When fitted models were run autonomously with the same task, only the finite state-based strategy could reproduce the key feature of choice sequences. Analyses of neural activity recorded from the dorsolateral striatum (DLS), the dorsomedial striatum (DMS), and the ventral striatum (VS) identified significant fractions of neurons in all three subareas for which activities were correlated with individual states of the finite state-based strategy. The signal of internal states at the time of choice was found in DMS, and for clusters of states was found in VS. In addition, action values and state values of the value-based strategy were encoded in DMS and VS, respectively. These results suggest that both the value-based strategy and the finite state-based strategy are implemented in the striatum.

  11. Parallel Representation of Value-Based and Finite State-Based Strategies in the Ventral and Dorsal Striatum.

    Science.gov (United States)

    Ito, Makoto; Doya, Kenji

    2015-11-01

    Previous theoretical studies of animal and human behavioral learning have focused on the dichotomy of the value-based strategy using action value functions to predict rewards and the model-based strategy using internal models to predict environmental states. However, animals and humans often take simple procedural behaviors, such as the "win-stay, lose-switch" strategy without explicit prediction of rewards or states. Here we consider another strategy, the finite state-based strategy, in which a subject selects an action depending on its discrete internal state and updates the state depending on the action chosen and the reward outcome. By analyzing choice behavior of rats in a free-choice task, we found that the finite state-based strategy fitted their behavioral choices more accurately than value-based and model-based strategies did. When fitted models were run autonomously with the same task, only the finite state-based strategy could reproduce the key feature of choice sequences. Analyses of neural activity recorded from the dorsolateral striatum (DLS), the dorsomedial striatum (DMS), and the ventral striatum (VS) identified significant fractions of neurons in all three subareas for which activities were correlated with individual states of the finite state-based strategy. The signal of internal states at the time of choice was found in DMS, and for clusters of states was found in VS. In addition, action values and state values of the value-based strategy were encoded in DMS and VS, respectively. These results suggest that both the value-based strategy and the finite state-based strategy are implemented in the striatum.
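    As a concrete illustration of the finite state-based strategy described in the two records above, the sketch below implements the simplest instance mentioned there, "win-stay, lose-switch": the internal state is just the last action, kept after a reward and flipped otherwise. The two-armed toy environment and its payoff probabilities are invented for the example, not the rats' task parameters.

```python
# Sketch: "win-stay, lose-switch" as a minimal finite state-based strategy.
import random

class WinStayLoseSwitch:
    def __init__(self, actions=("left", "right")):
        self.actions = actions
        self.state = random.choice(actions)   # internal state = current action

    def choose(self):
        return self.state

    def update(self, reward):
        # stay on a win, switch on a loss; no value function is maintained
        if not reward:
            self.state = self.actions[1 - self.actions.index(self.state)]

# toy environment: "left" pays off with probability 0.7, "right" with 0.3
agent, payoff = WinStayLoseSwitch(), {"left": 0.7, "right": 0.3}
wins = 0
for _ in range(1000):
    action = agent.choose()
    rewarded = random.random() < payoff[action]
    wins += rewarded
    agent.update(rewarded)
print("reward rate:", wins / 1000)
```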

  12. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.
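    The review above covers replicated-data, spatial, and force decompositions; as a rough, generic illustration of the building block behind the spatial approach (not code from the reviewed work), the sketch below bins particles into cells at least one cutoff wide, which is what lets each process own a block of cells and communicate only with neighbouring blocks.

```python
# Sketch: cell-list binning, the core of a spatial decomposition.
# Box size, cutoff and particle count are arbitrary illustration values.
import numpy as np

def build_cell_lists(positions, box_length, cutoff):
    """Assign each particle to a cell; returns dict cell_index -> particle ids."""
    n_cells = int(box_length // cutoff)          # cells at least one cutoff wide
    cell_size = box_length / n_cells
    cells = {}
    for i, pos in enumerate(positions):
        idx = tuple(int(v) for v in (pos // cell_size).astype(int) % n_cells)
        cells.setdefault(idx, []).append(i)
    return cells

# toy example: 1000 particles in a cubic box of side 10 with cutoff 2.5
rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 10.0, size=(1000, 3))
cells = build_cell_lists(positions, box_length=10.0, cutoff=2.5)
print(len(cells), "occupied cells; cell (0,0,0) holds",
      len(cells.get((0, 0, 0), [])), "particles")
```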

  13. Directed evolution strategies for enantiocomplementary haloalkane dehalogenases: from chemical waste to enantiopure building blocks.

    Science.gov (United States)

    van Leeuwen, Jan G E; Wijma, Hein J; Floor, Robert J; van der Laan, Jan-Metske; Janssen, Dick B

    2012-01-02

    We used directed evolution to obtain enantiocomplementary haloalkane dehalogenase variants that convert the toxic waste compound 1,2,3-trichloropropane (TCP) into highly enantioenriched (R)- or (S)-2,3-dichloropropan-1-ol, which can easily be converted into optically active epichlorohydrins, attractive intermediates for the synthesis of enantiopure fine chemicals. A dehalogenase with improved catalytic activity but very low enantioselectivity was used as the starting point. A strategy that made optimal use of the limited capacity of the screening assay, which was based on chiral gas chromatography, was developed. We used pair-wise site-saturation mutagenesis (SSM) of all 16 noncatalytic active-site residues during the initial two rounds of evolution. The resulting best R- and S-enantioselective variants were further improved in two rounds of site-restricted mutagenesis (SRM), with incorporation of carefully selected sets of amino acids at a larger number of positions, including sites that are more distant from the active site. Finally, the most promising mutations and positions were promoted to a combinatorial library by using a multi-site mutagenesis protocol with restricted codon sets. To guide the design of partly undefined (ambiguous) codon sets for these restricted libraries we employed structural information, the results of multiple sequence alignments, and knowledge from earlier rounds. After five rounds of evolution with screening of only 5500 clones, we obtained two strongly diverged haloalkane dehalogenase variants that give access to (R)-epichlorohydrin with 90 % ee and to (S)-epichlorohydrin with 97 % ee, containing 13 and 17 mutations, respectively, around their active sites. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. The evolution of pattern camouflage strategies in waterfowl and game birds.

    Science.gov (United States)

    Marshall, Kate L A; Gluckman, Thanh-Lan

    2015-05-01

    Visual patterns are common in animals. A broad survey of the literature has revealed that different patterns have distinct functions. Irregular patterns (e.g., stipples) typically function in static camouflage, whereas regular patterns (e.g., stripes) have a dual function in both motion camouflage and communication. Moreover, irregular and regular patterns located on different body regions ("bimodal" patterning) can provide an effective compromise between camouflage and communication and/or enhanced concealment via both static and motion camouflage. Here, we compared the frequency of these three pattern types and traced their evolutionary history using Bayesian comparative modeling in aquatic waterfowl (Anseriformes: 118 spp.), which typically escape predators by flight, and terrestrial game birds (Galliformes: 170 spp.), which mainly use a "sit and hide" strategy to avoid predation. Given these life histories, we predicted that selection would favor regular patterning in Anseriformes and irregular or bimodal patterning in Galliformes and that pattern function complexity should increase over the course of evolution. Regular patterns were predominant in Anseriformes whereas regular and bimodal patterns were most frequent in Galliformes, suggesting that patterns with multiple functions are broadly favored by selection over patterns with a single function in static camouflage. We found that the first patterns to evolve were either regular or bimodal in Anseriformes and either irregular or regular in Galliformes. In both orders, irregular patterns could evolve into regular patterns but not the reverse. Our hypothesis of increasing complexity in pattern camouflage function was supported in Galliformes but not in Anseriformes. These results reveal a trajectory of pattern evolution linked to increasing function complexity in Galliformes although not in Anseriformes, suggesting that both ecology and function complexity can have a profound influence on pattern evolution.

  15. Application of Evolution Strategies to the Design of Tracking Filters with a Large Number of Specifications

    Directory of Open Access Journals (Sweden)

    Jesús García Herrero

    2003-07-01

    Full Text Available This paper describes the application of evolution strategies to the design of interacting multiple model (IMM) tracking filters in order to fulfill a large table of performance specifications. These specifications define the desired filter performance in a thorough set of selected test scenarios, for different figures of merit and input conditions, imposing hundreds of performance goals. The design problem is stated as a numeric search in the filter parameter space to attain all specifications or, as a compromise, to minimize the excess over some specifications as much as possible, applying global optimization techniques from the evolutionary computation field. In addition, a new methodology is proposed to integrate the specifications into a fitness function able to effectively guide the search to suitable solutions. The method has been applied to the design of an IMM tracker for a real-world civil air traffic control application: the accomplishment of specifications defined for the future European ARTAS system.
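    The abstract states that hundreds of specifications are folded into one fitness function but does not give the exact formula; the sketch below shows one plausible aggregation under that idea (penalising only the normalised excess over each limit), with made-up scenario and figure-of-merit names purely for illustration.

```python
# Sketch: turning a table of performance specifications into a single fitness
# value for an evolution strategy. Only the excess over each limit is
# penalised, so a design meeting every specification scores 0.
def fitness_from_specs(results, specs):
    """results and specs are dicts keyed by (scenario, figure_of_merit)."""
    total_excess = 0.0
    for key, limit in specs.items():
        measured = results[key]
        total_excess += max(0.0, (measured - limit) / limit)  # normalised excess
    return total_excess  # to be minimised by the ES

# toy usage with two hypothetical specifications
specs = {("crossing_targets", "rms_position_error_m"): 50.0,
         ("maneuvering_target", "track_loss_rate"): 0.02}
results = {("crossing_targets", "rms_position_error_m"): 62.0,
           ("maneuvering_target", "track_loss_rate"): 0.015}
print(fitness_from_specs(results, specs))   # only the first spec contributes
```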

  16. Cloud computing task scheduling strategy based on differential evolution and ant colony optimization

    Science.gov (United States)

    Ge, Junwei; Cai, Yu; Fang, Yiqiu

    2018-05-01

    This paper proposes a task scheduling strategy, DEACO, based on the combination of Differential Evolution (DE) and Ant Colony Optimization (ACO). To address the limitation of a single optimization objective in cloud computing task scheduling, the strategy combines the shortest task completion time, cost and load balancing. DEACO uses the DE solution to initialize the pheromone of ACO, which reduces the time ACO spends accumulating pheromone in the early stage, and improves the pheromone-updating rule through a load factor. The proposed algorithm is simulated on CloudSim and compared with min-min and ACO. The experimental results show that DEACO is superior in terms of time, cost, and load.
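    The hand-off at the heart of such a hybrid is that a schedule found by DE biases the initial pheromone that ACO then refines; the sketch below illustrates that step only, with an assumed task-to-VM encoding and arbitrary base/boost values, since the paper's exact formulation is not given in the abstract.

```python
# Sketch: seeding an ACO pheromone matrix from a DE-produced schedule.
# The encoding (task index -> chosen VM) and constants are assumptions.
import numpy as np

def seed_pheromone(de_assignment, n_tasks, n_vms, base=0.1, boost=1.0):
    """de_assignment[i] is the VM chosen for task i by the DE phase."""
    pheromone = np.full((n_tasks, n_vms), base)
    for task, vm in enumerate(de_assignment):
        pheromone[task, vm] += boost        # favour the edges of the DE solution
    return pheromone

# toy usage: 5 tasks, 3 VMs, DE proposed this assignment
de_assignment = [0, 2, 1, 0, 2]
tau = seed_pheromone(de_assignment, n_tasks=5, n_vms=3)
print(tau)   # ants will start by preferring the DE schedule's task-VM pairs
```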

  17. The evolution of competitive settlement strategies in Fijian prehistory : results of excavations and radiometric dating

    International Nuclear Information System (INIS)

    Field, J.S.

    2003-01-01

    A series of excavations were completed between June 2001 and March 2002 in the Fiji Islands. The goal of this research was to investigate the evolution of competitive settlement strategies in Fijian prehistory from an archaeological and evolutionary ecological perspective. Twelve sites were excavated and mapped in the Sigatoka Valley, located in the southwestern corner of the main island of Viti Levu. Excavations were focused on determining the chronology of fortifications in the region, and the collected samples were compared to expectations based on GIS-based analyses of land productivity and historical documents pertaining to late-period warfare. Over four hundred archaeological sites have been identified in the Sigatoka Valley, and of these roughly one-third are purely defensive in configuration, with no immediate access to water or arable land. The Waikato Archaeological Dating Fund provided four radiometric dates for three defensive sites, and one site associated with a production area. (author). 6 refs., 1 fig

  18. Convergent adaptive evolution in marginal environments: unloading transposable elements as a common strategy among mangrove genomes.

    Science.gov (United States)

    Lyu, Haomin; He, Ziwen; Wu, Chung-I; Shi, Suhua

    2018-01-01

    Several clades of mangrove trees independently invade the interface between land and sea at the margin of woody plant distribution. As phenotypic convergence among mangroves is common, the possibility of convergent adaptation in their genomes is quite intriguing. To study this molecular convergence, we sequenced multiple mangrove genomes. In this study, we focused on the evolution of transposable elements (TEs) in relation to the genome size evolution. TEs, generally considered genomic parasites, are the most common components of woody plant genomes. Analyzing the long terminal repeat-retrotransposon (LTR-RT) type of TE, we estimated their death rates by counting solo-LTRs and truncated elements. We found that all lineages of mangroves massively and convergently reduce TE loads in comparison to their nonmangrove relatives; as a consequence, genome size reduction happens independently in all six mangrove lineages; TE load reduction in mangroves can be attributed to the paucity of young elements; the rarity of young LTR-RTs is a consequence of fewer births rather than excess deaths. In conclusion, mangrove genomes employ a convergent strategy of TE load reduction by suppressing element origination in their independent adaptation to a new environment. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  19. Mini-review: Strategies for Variation and Evolution of Bacterial Antigens

    Science.gov (United States)

    Foley, Janet

    2015-01-01

    Across the eubacteria, antigenic variation has emerged as a strategy to evade host immunity. However, phenotypic variation in some of these antigens also allows the bacteria to exploit variable host niches as well. The specific mechanisms are not shared-derived characters although there is considerable convergent evolution and numerous commonalities reflecting considerations of natural selection and biochemical restraints. Unlike in viruses, mechanisms of antigenic variation in most bacteria involve larger DNA movement such as gene conversion or DNA rearrangement, although some antigens vary due to point mutations or modified transcriptional regulation. The convergent evolution that promotes antigenic variation integrates various evolutionary forces: these include mutations underlying variant production; drift which could remove alleles especially early in infection or during life history phases in arthropod vectors (when the bacterial population size goes through a bottleneck); selection not only for any particular variant but also for the mechanism for the production of variants (i.e., selection for mutability); and overcoming negative selection against variant production. This review highlights the complexities of drivers of antigenic variation, in particular extending evaluation beyond the commonly cited theory of immune evasion. A deeper understanding of the diversity of purpose and mechanisms of antigenic variation in bacteria will contribute to greater insight into bacterial pathogenesis, ecology and coevolution with hosts. PMID:26288700

  20. Study of the microstructural evolution and rheological behavior by semisolid compression between parallel plate of the alloy A356 solidified under a continuously rotating magnetic field

    International Nuclear Information System (INIS)

    Leiva L, Ricardo; Sanchez V, Cristian; Mannheim C, Rodolfo; Bustos C, Oscar

    2004-01-01

    This work presents a study of the rheological behavior of the alloy A356 in the semisolid state, with and without continuous magnetic agitation during its solidification. The evaluation was performed using a parallel plate compression rheometer with digital recording of position and time data. The microstructural evolution was also studied at the start and end of the semisolid compression test. The procedure involved tests on short cylinders extracted from billets with a non-dendritic microstructure cast under a continuously rotating magnetic field. These pieces were tested at different solid fractions, under constant loads and at constant deformation velocities. When the test is carried out at a constant load, the equation governing the rheological behavior of the material in the semisolid state can be determined, following a two-parameter Ostwald-de-Waele power law. When the test is done at a constant deformation speed, the flow behavior of the material during the semisolid shaping process can be described. The results obtained show that the morphology of the phases present in the microstructure is highly relevant to the rheological behavior. A microstructure of globular, coalesced-rosette to rosette type was found to show the typical behavior of a fluid when shaped in the semisolid state, whereas a cast dendritic structure did not behave this way. The Arrhenius-type dependence of viscosity on temperature was also established. (CW)
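    For reference, the general form of the two-parameter Ostwald-de-Waele (power-law) relation mentioned above is shown below; K is the consistency index and n the flow behaviour index. The specific values fitted in the study are not reproduced here.

```latex
\[
\sigma = K \, \dot{\gamma}^{\,n},
\qquad
\eta_{\mathrm{app}} = \frac{\sigma}{\dot{\gamma}} = K \, \dot{\gamma}^{\,n-1}
\]
```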

  1. Genome evolution in an ancient bacteria-ant symbiosis: parallel gene loss among Blochmannia spanning the origin of the ant tribe Camponotini

    Directory of Open Access Journals (Sweden)

    Laura E. Williams

    2015-04-01

    Full Text Available Stable associations between bacterial endosymbionts and insect hosts provide opportunities to explore genome evolution in the context of established mutualisms and assess the roles of selection and genetic drift across host lineages and habitats. Blochmannia, obligate endosymbionts of ants of the tribe Camponotini, have coevolved with their ant hosts for ∼40 MY. To investigate early events in Blochmannia genome evolution across this ant host tribe, we sequenced Blochmannia from two divergent host lineages, Colobopsis obliquus and Polyrhachis turneri, and compared them with four published genomes from Blochmannia of Camponotus sensu stricto. Reconstructed gene content of the last common ancestor (LCA) of these six Blochmannia genomes is reduced (690 protein-coding genes), consistent with rapid gene loss soon after establishment of the symbiosis. Differential gene loss among Blochmannia lineages has affected cellular functions and metabolic pathways, including DNA replication and repair, vitamin biosynthesis and membrane proteins. Blochmannia of P. turneri (i.e., B. turneri) encodes an intact DnaA chromosomal replication initiation protein, demonstrating that loss of dnaA was not essential for establishment of the symbiosis. Based on gene content, B. obliquus and B. turneri are unable to provision hosts with riboflavin. Of the six sequenced Blochmannia, B. obliquus is the earliest diverging lineage (i.e., the sister group of other Blochmannia sampled) and encodes the fewest protein-coding genes and the most pseudogenes. We identified 55 genes involved in parallel gene loss, including glutamine synthetase, which may participate in nitrogen recycling. Pathways for biosynthesis of coenzyme A, terpenoids and riboflavin were lost in multiple lineages, suggesting relaxed selection on the pathway after inactivation of one component. Analysis of Illumina read datasets did not detect evidence of plasmids encoding missing functions, nor the presence of

  2. Host-parasite coevolution can promote the evolution of seed banking as a bet-hedging strategy.

    Science.gov (United States)

    Verin, Mélissa; Tellier, Aurélien

    2018-04-20

    Seed (egg) banking is a common bet-hedging strategy maximizing the fitness of organisms facing environmental unpredictability by the delayed emergence of offspring. Yet, this condition often requires fast and drastic stochastic shifts between good and bad years. We hypothesize that the host seed banking strategy can evolve in response to coevolution with parasites because the coevolutionary cycles promote a gradually changing environment over longer times than seed persistence. We study the evolution of host germination fraction as a quantitative trait using both pairwise competition and multiple mutant competition methods, while the germination locus can be genetically linked or unlinked with the host locus under coevolution. In a gene-for-gene model of coevolution, hosts evolve a seed bank strategy under unstable coevolutionary cycles promoted by moderate to high costs of resistance or strong disease severity. Moreover, when assuming genetic linkage between coevolving and germination loci, the resistant genotype always evolves seed banking in contrast to susceptible hosts. Under a matching-allele interaction, both hosts' genotypes exhibit the same seed banking strategy irrespective of the genetic linkage between loci. We suggest host-parasite coevolution as an additional hypothesis for the evolution of seed banking as a temporal bet-hedging strategy. © 2018 The Author(s). Evolution © 2018 The Society for the Study of Evolution.

  3. CHURN PREDICTION AND CUSTOMER SEGMENTATION USING A BACKPROPAGATION NEURAL NETWORK BASED ON EVOLUTION STRATEGIES

    Directory of Open Access Journals (Sweden)

    Junta Zeniarja

    2015-05-01

    Full Text Available Customers are an important part of ensuring the competitiveness and survival of a company. It is therefore necessary to have a management system that keeps customers loyal and prevents them from moving to competitors, known as churn management. Customer churn prediction is part of churn management; it predicts customer behavior by classifying which customers are loyal and which tend to move to other competitors. High prediction accuracy is essential because of the high rate of customer migration to competing companies. This matters because the cost of acquiring new customers is far higher than the cost of retaining the loyalty of existing ones. Although many studies on customer churn prediction have been carried out, further research is still needed to improve prediction accuracy. This study discusses the use of the data mining technique Backpropagation Neural Network (BPNN) in hybrid with Evolution Strategies (ES) for attribute weighting. Model validation was performed using 10-fold cross-validation, and evaluation was carried out using a confusion matrix and the Area Under the ROC Curve (AUC). The experimental results show that the hybrid of BPNN with ES achieves better performance than the basic BPNN. Keywords: data mining, churn, prediction, backpropagation neural network, evolution strategies.
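
    A minimal sketch of the attribute-weighting idea described in this record, assuming a simple (1+1) evolution strategy wrapped around a small backpropagation-trained network from scikit-learn; the synthetic dataset, network size, mutation step and iteration budget are illustrative assumptions, not the configuration used in the study.

    # Hypothetical sketch: evolution-strategy attribute weighting around a backprop network.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=600, n_features=10, random_state=0)  # stand-in churn data

    def auc_with_weights(w):
        """10-fold cross-validated AUC of a small backprop network on weighted attributes."""
        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
        return cross_val_score(clf, X * w, y, cv=10, scoring="roc_auc").mean()

    # (1+1) evolution strategy over the attribute-weight vector
    weights = np.ones(X.shape[1])
    best = auc_with_weights(weights)
    for _ in range(20):
        candidate = np.clip(weights + rng.normal(0.0, 0.2, size=weights.shape), 0.0, 2.0)
        score = auc_with_weights(candidate)
        if score >= best:  # keep the mutant only if cross-validated AUC does not drop
            weights, best = candidate, score

    print("cross-validated AUC with evolved attribute weights:", round(best, 3))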

  4. Agent-based models of strategies for the emergence and evolution of grammatical agreement.

    Directory of Open Access Journals (Sweden)

    Katrien Beuls

    Full Text Available Grammatical agreement means that features associated with one linguistic unit (for example number or gender) become associated with another unit and then possibly overtly expressed, typically with morphological markers. It is one of the key mechanisms used in many languages to show that certain linguistic units within an utterance grammatically depend on each other. Agreement systems are puzzling because they can be highly complex in terms of what features they use and how they are expressed. Moreover, agreement systems have undergone considerable change in the historical evolution of languages. This article presents language game models with populations of agents in order to find out for what reasons and by what cultural processes and cognitive strategies agreement systems arise. It demonstrates that agreement systems are motivated by the need to minimize combinatorial search and semantic ambiguity, and it shows, for the first time, that once a population of agents adopts a strategy to invent, acquire and coordinate meaningful markers through social learning, linguistic self-organization leads to the spontaneous emergence and cultural transmission of an agreement system. The article also demonstrates how attested grammaticalization phenomena, such as phonetic reduction and conventionalized use of agreement markers, happen as a side effect of additional economizing principles, in particular minimization of articulatory effort and reduction of the marker inventory. More generally, the article illustrates a novel approach for studying how key features of human languages might emerge.

  5. Fusing enacted and expected mimicry generates a winning strategy that promotes the evolution of cooperation.

    Science.gov (United States)

    Fischer, Ilan; Frid, Alex; Goerg, Sebastian J; Levin, Simon A; Rubenstein, Daniel I; Selten, Reinhard

    2013-06-18

    Although cooperation and trust are essential features for the development of prosperous populations, they also put cooperating individuals at risk for exploitation and abuse. Empirical and theoretical evidence suggests that the solution to the problem resides in the practice of mimicry and imitation, the expectation of opponent's mimicry and the reliance on similarity indices. Here we fuse the principles of enacted and expected mimicry and condition their application on two similarity indices to produce a model of mimicry and relative similarity. Testing the model in computer simulations of behavioral niches, populated with agents that enact various strategies and learning algorithms, shows how mimicry and relative similarity outperforms all the opponent strategies it was tested against, pushes noncooperative opponents toward extinction, and promotes the development of cooperative populations. The proposed model sheds light on the evolution of cooperation and provides a blueprint for intentional induction of cooperation within and among populations. It is suggested that reducing conflict intensities among human populations necessitates (i) instigation of social initiatives that increase the perception of similarity among opponents and (ii) efficient lowering of the similarity threshold of the interaction, the minimal level of similarity that makes cooperation advisable.

  6. Surface spintronics enhanced photo-catalytic hydrogen evolution: Mechanisms, strategies, challenges and future

    Science.gov (United States)

    Zhang, Wenyan; Gao, Wei; Zhang, Xuqiang; Li, Zhen; Lu, Gongxuan

    2018-03-01

    Hydrogen is a green energy carrier with high enthalpy and zero environmental pollution emission characteristics. Photocatalytic hydrogen evolution (HER) is a sustainable and promising way to generate hydrogen. Despite great achievements in photocatalytic HER research, its efficiency is still limited by undesirable electron-transfer losses, high HER over-potentials and the low stability of some photocatalysts, which lead to unsatisfactory HER performance and poor anti-photocorrosion properties. In recent years, many spintronics studies have demonstrated enhancing effects on photocatalytic HER. For example, it was reported that spin-polarized photoelectrons could result in higher photocurrents and HER turnover frequency (up to 200%) in photocatalytic systems. Two strategies have been developed for electron spin polarization, which resort to the heavy-atom effect and magnetic induction, respectively. Both theoretical and experimental studies show that controlling the spin state of OH• radicals in photocatalytic reactions can not only decrease the OER over-potential of water splitting (even to 0 eV) but also improve the stability and charge lifetime of photocatalysts. A convenient strategy has been developed for aligning the spin state of OH• by utilizing chiral molecules to spin-filter photoelectrons. By chiral-induced spin filtering, electron polarization can approach 74%, which is significantly larger than in some traditional transition-metal devices. These achievements demonstrate the bright future of spintronics in enhancing photocatalytic HER; nevertheless, there is little work systematically reviewing and analyzing this topic. This review focuses on recent achievements of spintronics in photocatalytic HER research, and systematically summarizes the related mechanisms and important strategies proposed. Besides, the challenges and developing trends of spintronics-enhanced photocatalytic HER research are discussed, expecting to comprehend and explore such interdisciplinary research in

  7. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    Science.gov (United States)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative
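
    A minimal sketch of the parallel surrogate / closed-form idea, assuming a one-variable design space: a handful of sampled "expensive" evaluations (the DOE) feed a least-squares response surface, and a simplified closed-form estimate is kept alongside it as a sanity check; the stand-in objective and all constants are illustrative.

    # Hypothetical sketch of offline/surrogate approximation alongside a closed-form estimate.
    import numpy as np

    def expensive_cfd(x):
        """Stand-in for a costly CFD evaluation of, e.g., an aerodynamic force coefficient."""
        return 1.5 * x**2 + 0.3 * np.sin(5 * x) + 0.1 * x

    # Crude design of experiments: a handful of sample points over the design range
    x_doe = np.linspace(0.0, 1.0, 7)
    y_doe = expensive_cfd(x_doe)

    # Quadratic response-surface (meta-model) fitted by least squares
    surrogate = np.poly1d(np.polyfit(x_doe, y_doe, deg=2))

    # Idealized closed-form approximation (here: simply drop the oscillatory term)
    closed_form = lambda x: 1.5 * x**2 + 0.1 * x

    x_test = np.linspace(0.0, 1.0, 50)
    err_surrogate = np.max(np.abs(surrogate(x_test) - expensive_cfd(x_test)))
    err_closed = np.max(np.abs(closed_form(x_test) - expensive_cfd(x_test)))
    print(f"max |error| surrogate: {err_surrogate:.3f}, closed-form: {err_closed:.3f}")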

  8. Strategy Dynamics through a Demand-Based Lens: The Evolution of Market Boundaries, Resource Rents and Competitive Positions

    OpenAIRE

    Adner, Ron; Zemsky, Peter

    2003-01-01

    We develop a novel approach to the dynamics of business strategy that is grounded in an explicit treatment of consumer choice when technologies improve over time. We address the evolution of market boundaries, resource rents and competitive positions by adapting models of competition with differentiated products. Our model is consistent with the central strategy assertion that competitive interactions are governed by superior value creation and competitive advantage. More importantly, it show...

  9. Distinct neural and neuromuscular strategies underlie independent evolution of simplified advertisement calls.

    Science.gov (United States)

    Leininger, Elizabeth C; Kelley, Darcy B

    2013-04-07

    Independent or convergent evolution can underlie phenotypic similarity of derived behavioural characters. Determining the underlying neural and neuromuscular mechanisms sheds light on how these characters arose. One example of evolutionarily derived characters is a temporally simple advertisement call of male African clawed frogs (Xenopus) that arose at least twice independently from a more complex ancestral pattern. How did simplification occur in the vocal circuit? To distinguish shared from divergent mechanisms, we examined activity from the calling brain and vocal organ (larynx) in two species that independently evolved simplified calls. We find that each species uses distinct neural and neuromuscular strategies to produce the simplified calls. Isolated Xenopus borealis brains produce fictive vocal patterns that match temporal patterns of actual male calls; the larynx converts nerve activity faithfully into muscle contractions and single clicks. In contrast, fictive patterns from isolated Xenopus boumbaensis brains are short bursts of nerve activity; the isolated larynx requires stimulus bursts to produce a single click of sound. Thus, unlike X. borealis, the output of the X. boumbaensis hindbrain vocal pattern generator is an ancestral burst-type pattern, transformed by the larynx into single clicks. Temporally simple advertisement calls in genetically distant species of Xenopus have thus arisen independently via reconfigurations of central and peripheral vocal neuroeffectors.

  10. Evolution of post-deployment indicators of oral health on the Family Health Strategy

    Science.gov (United States)

    Palacio, Danielle da Costa; Vazquez, Fabiana de Lima; Ramos, Danielle Viana Ribeiro; Peres, Stela Verzinhasse; Pereira, Antonio Carlos; Guerra, Luciane Miranda; Cortellazzi, Karine Laura; Bulgareli, Jaqueline Vilela

    2014-01-01

    Objective To evaluate the evolution of indicators after the implementation of 21 Oral Healthcare Teams in the Family Health Strategy. Methods We used data from outpatient services of Oral Healthcare Teams to evaluate efficiency, access, percentage of absences and emergencies of oral healthcare professionals who worked in the partnership between the Sociedade Beneficente Israelita Brasileira Hospital Albert Einstein and the Secretaria Municipal de Saúde de São Paulo, during the period 2009-2011. Results Percentages of emergencies, income, and access showed a significant difference during the period analyzed, but no difference for percentage of absences was found. When monthly analysis was made, it is noteworthy that at the beginning of service implementation a fluctuation occurred, which may indicate that the work was consolidated over the months, becoming capable of receiving new professionals and increasing the population served. Comparison of the indicators in that period with the goals agreed upon between the Sociedade Beneficente Israelita Brasileira Hospital Albert Einstein and the Secretaria Municipal de Saúde de São Paulo made it possible to notice that the Oral Health Teams had a good performance. Conclusion The results showed that the goals were achieved reflecting the increasing number of professionals, the maturing of work processes in the Oral Health Teams, and optimization of the manpower available to perform the activities. Understanding these results will be important to guide the actions of Oral Health Teams for the following years and to assess the achievement of goals. PMID:25295445

  11. Governance of sustainable development: co-evolution of corporate and political strategies

    International Nuclear Information System (INIS)

    Bleischwitz, R.; College of Europe, Bruges

    2004-01-01

    This article proposes a policy framework for analysing corporate governance toward sustainable development. The aim is to set up a framework for analysing market evolution toward sustainability. In the first section, the paper briefly refers to recent theories about both market and government failures that express scepticism about the way that framework conditions for market actors are set. For this reason, multi-layered governance structures seem advantageous if new solutions are to be developed in policy areas concerned with long-term change and stepwise internalisation of externalities. The paper introduces the principle of regulated self-regulation. With regard to corporate actors' interests, it presents recent insights from theories about the knowledge-based firm, where the creation of new knowledge is based on the absorption of societal views. The result is greater scope for the endogenous internalisation of externalities, which leads to a variety of new and different corporate strategies. Because governance has to set incentives for quite a diverse set of actors in their daily operations, the paper finally discusses innovation-inducing regulation. In both areas, regulated self-regulation and innovation-inducing regulation, corporate and political governance co-evolve. The paper concludes that these co-evolutionary mechanisms may assume some of the stabilising and orientating functions previously exercised by framing activities of the state. In such a view, the government's main function is to facilitate learning processes, thus departing from the state's function as known from welfare economics. (author)

  12. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
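
    A minimal sketch of one form of image-space data decomposition discussed in this survey, assuming a trivially parallel renderer: the frame is cut into horizontal strips, each strip is shaded by a worker process, and the strips are assembled into the final image; the toy shading function and strip count are illustrative.

    # Hypothetical sketch of image-space decomposition for parallel rendering.
    import numpy as np
    from multiprocessing import Pool

    WIDTH, HEIGHT, N_WORKERS = 256, 256, 4

    def render_strip(rows):
        """Render a horizontal strip of scanlines; the shading rule is a toy stand-in."""
        y0, y1 = rows
        ys, xs = np.mgrid[y0:y1, 0:WIDTH]
        return y0, np.sin(xs / 17.0) * np.cos(ys / 23.0)  # fake "scene"

    if __name__ == "__main__":
        step = HEIGHT // N_WORKERS
        strips = [(i, min(i + step, HEIGHT)) for i in range(0, HEIGHT, step)]
        with Pool(N_WORKERS) as pool:
            parts = pool.map(render_strip, strips)     # static work distribution across workers
        image = np.zeros((HEIGHT, WIDTH))
        for y0, strip in parts:                        # image assembly
            image[y0:y0 + strip.shape[0]] = strip
        print("assembled image:", image.shape)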

  13. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  14. Engineering better biomass-degrading ability into a GH11 xylanase using a directed evolution strategy

    Directory of Open Access Journals (Sweden)

    Song Letian

    2012-01-01

    Full Text Available Abstract Background Improving the hydrolytic performance of hemicellulases on lignocellulosic biomass is of considerable importance for second-generation biorefining. To address this problem, and also to gain greater understanding of structure-function relationships, especially related to xylanase action on complex biomass, we have implemented a combinatorial strategy to engineer the GH11 xylanase from Thermobacillus xylanilyticus (Tx-Xyn). Results Following in vitro enzyme evolution and screening on wheat straw, nine best-performing clones were identified, which display mutations at positions 3, 6, 27 and 111. All of these mutants showed increased hydrolytic activity on wheat straw, and solubilized arabinoxylans that were not modified by the parental enzyme. The most active mutants, S27T and Y111T, increased the solubilization of arabinoxylans from depleted wheat straw 2.3-fold and 2.1-fold, respectively, in comparison to the wild-type enzyme. In addition, five mutants, S27T, Y111H, Y111S, Y111T and S27T-Y111H, increased total hemicellulose conversion of intact wheat straw from 16.7% tot. xyl (wild-type Tx-Xyn) to 18.6%-20.4% tot. xyl. Also, all five mutant enzymes exhibited a better ability to act in synergy with a cellulase cocktail (Accellerase 1500), thus procuring increases in overall wheat straw hydrolysis. Conclusions Analysis of the results allows us to hypothesize that the increased hydrolytic ability of the mutants is linked to (i) improved ligand binding in a putative secondary binding site, (ii) the diminution of surface hydrophobicity, and/or (iii) the modification of thumb flexibility, induced by mutations at position 111. Nevertheless, the relatively modest improvements that were observed also underline the fact that enzyme engineering alone cannot overcome the limits imposed by the complex organization of the plant cell wall and the lignin barrier.

  15. Experimental evolution and the adjustment of metabolic strategies in lactic acid bacteria

    NARCIS (Netherlands)

    Bachmann, Herwig; Molenaar, Douwe; Branco dos Santos, Filipe; Teusink, Bas

    2017-01-01

    Experimental evolution of microbes has gained lots of interest in recent years, mainly due to the ease of strain characterisation through next-generation sequencing. While evolutionary and systems biologists use experimental evolution to address fundamental questions in their respective fields,

  16. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  17. Impact of weed control strategies on resistance evolution in Alopecurus myosuroides – a long-term field trial

    Directory of Open Access Journals (Sweden)

    Ulber, Lena

    2016-02-01

    Full Text Available The impact of various herbicide strategies on populations of Alopecurus myosuroides has been investigated in a long-term field trial situated in Wendhausen (Germany) since 2009. In the initial years of the field experiment, resistant populations were selected by means of repeated application of the same herbicide active ingredients. For the selection of different resistance profiles, herbicides with actives from different HRAC groups were used. The herbicide actives flupyrsulfuron, isoproturon and fenoxaprop-P were applied for two years on large plots. In a succeeding field trial starting in 2011, it was investigated whether the now existing resistant field populations could be controlled by various herbicide strategies. Eight different strategies consisting of various herbicide combinations were tested. Resistance evolution was investigated by means of plant counts and molecular genetic analysis.

  18. Niche-driven evolution of metabolic and life-history strategies in natural and domesticated populations of Saccharomyces cerevisiae

    Directory of Open Access Journals (Sweden)

    Sicard Delphine

    2009-12-01

    Full Text Available Abstract Background Variation of resource supply is one of the key factors that drive the evolution of life-history strategies, and hence the interactions between individuals. In the yeast Saccharomyces cerevisiae, two life-history strategies related to different resource utilization have been previously described in strains from different industrial origins. In this work, we analyzed metabolic traits and life-history strategies in a broader collection of yeast strains sampled in various ecological niches (forest, human body, fruits, laboratory and industrial environments). Results By analysing the genetic and plastic variation of six life-history and three metabolic traits, we showed that S. cerevisiae populations harbour different strategies depending on their ecological niches. On the one hand, the forest and laboratory strains, referred to as extreme "ants", reproduce quickly, reach a large carrying capacity and a small cell size in fermentation, but have a low reproduction rate in respiration. On the other hand, the industrial strains, referred to as extreme "grasshoppers", reproduce slowly, reach a small carrying capacity but have a big cell size in fermentation and a high reproduction rate in respiration. "Grasshoppers" usually have a higher glucose consumption rate than "ants", while they produce lower quantities of ethanol, suggesting that they store cell resources rather than secreting secondary products to cross-feed or poison competitors. The clinical and fruit strains are intermediate between these two groups. Conclusions Altogether, these results are consistent with a niche-driven evolution of S. cerevisiae, with phenotypic convergence of populations living in similar habitats. They also revealed that competition between strains having contrasted life-history strategies ("ants" and "grasshoppers") seems to occur at low frequency or be unstable, since opposite life-history strategies appeared to be maintained in distinct ecological niches.

  19. Co-evolution of Industry Strategies and Government Policies: The Case of the Brazilian Automotive Industry

    Directory of Open Access Journals (Sweden)

    Roberto Gonzalez Duarte

    2017-07-01

    Full Text Available This study examines the evolution of the automotive industry in Brazil and its key drivers. We argue that the rules of the game – industry policies – are an outcome of exchanges between the host government and industry. These arise from changes in economic and political environments and interdependence between industry and the country’s economy. To this end, we draw upon literature on institutions and co-evolution to understand the industry footprint over a 50-year period, as well as its relationship with changes in government policies. This study generates new insights on institutional and co-evolution political perspectives by showing that the rules of the game are not only the making of the government, but are also the result of interdependencies between industry and government.

  20. A Video Game for Learning Brain Evolution: A Resource or a Strategy?

    Science.gov (United States)

    Barbosa Gomez, Luisa Fernanda; Bohorquez Sotelo, Maria Cristina; Roja Higuera, Naydu Shirley; Rodriguez Mendoza, Brigitte Julieth

    2016-01-01

    Learning resources are part of the educational process of students. However, how do video games act as learning resources for a population that has not chosen virtual education as its main methodology? The aim of this study was to identify the influence of a video game on the learning process of brain evolution. For this purpose, the opinions…

  1. Co-evolution of industry strategies and government policies: The case of the brazilian automotive industry

    NARCIS (Netherlands)

    Duarte, R.G. (Roberto Gonzalez); S.B. Rodrigues (Suzana)

    2017-01-01

    textabstractThis study examines the evolution of the automotive industry in Brazil and its key drivers. We argue that the rules of the game – industry policies – are an outcome of exchanges between the host government and industry. These arise from changes in economic and political environments and

  2. Directed Evolution Strategies for Enantiocomplementary Haloalkane Dehalogenases : From Chemical Waste to Enantiopure Building Blocks

    NARCIS (Netherlands)

    van Leeuwen, Jan G. E.; Wijma, Hein J.; Floor, Robert J.; van der Laan, Jan-Metske; Janssen, Dick B.

    2012-01-01

    We used directed evolution to obtain enantiocomplementary haloalkane dehalogenase variants that convert the toxic waste compound 1,2,3-trichloropropane (TCP) into highly enantioenriched (R)- or (S)-2,3-dichloropropan-1-ol, which can easily be converted into optically active

  3. Evolution of Learning among Pavlov Strategies in a Competitive Environment with Noise.

    Science.gov (United States)

    Kraines, David; Kraines, Vivian

    1995-01-01

    Pavlov denotes a family of stochastic learning strategies that achieves the mutually cooperative outcome in the iterated prisoner's dilemma against a wide variety of strategies. Although faster learners will eventually dominate a given homogeneous Pavlov population, the process must proceed through a gradual increase in the rate of learning. (JBJ)
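
    A minimal sketch of the deterministic core of a Pavlov player (win-stay, lose-shift) in a noisy iterated prisoner's dilemma; the stochastic learning versions discussed in this record adjust move probabilities rather than moves, and the payoffs, noise level and opponent choice here are illustrative assumptions.

    # Hypothetical sketch: Pavlov (win-stay, lose-shift) against tit-for-tat in a noisy iterated PD.
    import random

    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    NOISE = 0.05  # probability that an intended move is flipped

    def pavlov(my_last, my_last_payoff):
        """Win-stay (payoff 3 or 5): repeat last move; lose-shift (payoff 0 or 1): switch."""
        if my_last is None:
            return "C"
        if my_last_payoff >= 3:
            return my_last
        return "D" if my_last == "C" else "C"

    def tit_for_tat(opponent_last):
        return "C" if opponent_last is None else opponent_last

    def noisy(move):
        return move if random.random() > NOISE else ("D" if move == "C" else "C")

    def play(rounds=200, seed=1):
        random.seed(seed)
        pav_last, pav_payoff = None, None
        total_pav = total_tft = 0
        for _ in range(rounds):
            a = noisy(pavlov(pav_last, pav_payoff))
            b = noisy(tit_for_tat(pav_last))       # TFT copies Pavlov's previous move
            pa, pb = PAYOFF[(a, b)], PAYOFF[(b, a)]
            total_pav, total_tft = total_pav + pa, total_tft + pb
            pav_last, pav_payoff = a, pa
        return total_pav, total_tft

    print("scores (Pavlov, tit-for-tat):", play())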

  4. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
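
    A minimal sketch of one of the issues listed above, reproducibility under parallel decomposition: each node draws from its own deterministic random stream so that the combined tally is independent of scheduling; the pi-style tally and worker count are illustrative stand-ins for a particle-transport calculation.

    # Hypothetical sketch: reproducible domain-decomposed Monte Carlo tallies.
    import numpy as np
    from multiprocessing import Pool

    HISTORIES_PER_NODE = 200_000

    def node_tally(args):
        node_id, seed = args
        rng = np.random.default_rng([seed, node_id])   # independent, reproducible stream per node
        x, y = rng.random(HISTORIES_PER_NODE), rng.random(HISTORIES_PER_NODE)
        return np.count_nonzero(x * x + y * y < 1.0)   # stand-in for a transport tally

    if __name__ == "__main__":
        nodes = [(i, 12345) for i in range(4)]
        with Pool(4) as pool:
            hits = pool.map(node_tally, nodes)
        total = sum(hits)
        print("pi estimate:", 4.0 * total / (HISTORIES_PER_NODE * len(nodes)))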

  5. WE-DE-207A-01: Parallels in the Evolution of X-Ray Angiographic Systems and Devices Used for Minimally Invasive Endovascular Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Strother, C. [University of Wisconsin (United States)]

    2016-06-15

    1. Parallels in the evolution of x-ray angiographic systems and devices used for minimally invasive endovascular therapy Charles Strother - DSA, invented by Dr. Charles Mistretta at UW-Madison, was the technology which enabled the development of minimally invasive endovascular procedures. As DSA became widely available and the potential benefits for accessing the cerebral vasculature from an endovascular approach began to be apparent, industry began efforts to develop tools for use in these procedures. Along with development of catheters, embolic materials, pushable coils and the GDC coils there was simultaneous development and improvement of 2D DSA image quality and the introduction of 3D DSA. Together, these advances resulted in an enormous expansion in the scope and numbers of minimally invasive endovascular procedures. The introduction of flat detectors for c-arm angiographic systems in 2002 provided the possibility of the angiographic suite becoming not just a location for vascular imaging where physiological assessments might also be performed. Over the last decade algorithmic and hardware advances have been sufficient to now realize this potential in clinical practice. The selection of patients for endovascular treatments is enhanced by this dual capability. Along with these advances has been a steady reduction in the radiation exposure required so that today, vascular and soft tissue images may be obtained with equal or in many cases less radiation exposure than is the case for comparable images obtained with multi-detector CT. Learning Objectives: To understand the full capabilities of today’s angiographic suite To understand how c-arm cone beam CT soft tissue imaging can be used for assessments of devices, blood flow and perfusion. Advances in real-time x-ray neuro-endovascular image guidance Stephen Rudin - Reacting to the demands on real-time image guidance for ever finer neurovascular interventions, great improvements in imaging chains are being

  6. WE-DE-207A-01: Parallels in the Evolution of X-Ray Angiographic Systems and Devices Used for Minimally Invasive Endovascular Therapy

    International Nuclear Information System (INIS)

    Strother, C.

    2016-01-01

    1. Parallels in the evolution of x-ray angiographic systems and devices used for minimally invasive endovascular therapy Charles Strother - DSA, invented by Dr. Charles Mistretta at UW-Madison, was the technology which enabled the development of minimally invasive endovascular procedures. As DSA became widely available and the potential benefits for accessing the cerebral vasculature from an endovascular approach began to be apparent, industry began efforts to develop tools for use in these procedures. Along with development of catheters, embolic materials, pushable coils and the GDC coils there was simultaneous development and improvement of 2D DSA image quality and the introduction of 3D DSA. Together, these advances resulted in an enormous expansion in the scope and numbers of minimally invasive endovascular procedures. The introduction of flat detectors for c-arm angiographic systems in 2002 provided the possibility of the angiographic suite becoming not just a location for vascular imaging where physiological assessments might also be performed. Over the last decade algorithmic and hardware advances have been sufficient to now realize this potential in clinical practice. The selection of patients for endovascular treatments is enhanced by this dual capability. Along with these advances has been a steady reduction in the radiation exposure required so that today, vascular and soft tissue images may be obtained with equal or in many cases less radiation exposure than is the case for comparable images obtained with multi-detector CT. Learning Objectives: To understand the full capabilities of today’s angiographic suite To understand how c-arm cone beam CT soft tissue imaging can be used for assessments of devices, blood flow and perfusion. Advances in real-time x-ray neuro-endovascular image guidance Stephen Rudin - Reacting to the demands on real-time image guidance for ever finer neurovascular interventions, great improvements in imaging chains are being

  7. The Evolution of Airpower Theory and Future Air Strategies for Employment in the Gap

    National Research Council Canada - National Science Library

    Brown, Francis M

    2005-01-01

    .... In regards to future military involvement and specifically the application of airpower, what are the best air strategies to pursue, not only to achieve the strategic objectives, but to facilitate...

  8. Evolution in nuclear strategy in US and Russia and its implications in arms control

    Energy Technology Data Exchange (ETDEWEB)

    Sokov, N

    2003-07-01

    Today, there is a growing tendency in war-fighting scenarios to include limited use of nuclear weapons. New developments in nuclear policy could be attributed to changes in the international situation like the multiplication of low level conflicts and the threat of terrorism. This paper analyzes the evolution of the Russian nuclear doctrine, the transformation of the US nuclear policy and their consequences on arms control. (J.S.)

  9. Pricing Strategy and the Formation and Evolution of Reference Price Perceptions in New Product Categories

    OpenAIRE

    Lowe, Ben; Alpert, Frank

    2010-01-01

    This study examines the formation and evolution of reference price perceptions in new product categories. It contributes to our understanding of pricing new products by integrating two important research streams in marketing-reference price theory and the theory of pioneer brand advantage. Prior research has focused solely on products in existing or incrementally new categories, and has typically examined fast-moving consumer goods. Using a cross-sectional experiment to study the formation of...

  10. Evolution in nuclear strategy in US and Russia and its implications in arms control

    International Nuclear Information System (INIS)

    Sokov, N.

    2003-01-01

    Today, there is a growing tendency in war-fighting scenarios to include limited use of nuclear weapons. New developments in nuclear policy could be attributed to changes in the international situation like the multiplication of low level conflicts and the threat of terrorism. This paper analyzes the evolution of the Russian nuclear doctrine, the transformation of the US nuclear policy and their consequences on arms control. (J.S.)

  11. Evolution Of The Operational Energy Strategy And Its Consideration In The Defense Acquisition Process

    Science.gov (United States)

    2016-09-01

    Figure 12. PPBE Process Flowchart. Source: AcqNotes (2016). We comment above that once a program manager has completed his major Defense... acquisition system: 1) acquisition, 2) requirements and 3) planning, programming, budgeting, and execution (PPBE). We looked at the evolution of... to gain traction or improve promulgation of key guidance and documentation for new-starts and/or upgrades to weapon system acquisition programs

  12. The RNA-world and co-evolution hypothesis and the origin of life: Implications, research strategies and perspectives

    Science.gov (United States)

    Lahav, Noam

    1993-01-01

    The applicability of the RNA-world and co-evolution hypothesis to the study of the very first stages of the origin of life is discussed. The discussion focuses on the basic differences between the two hypotheses and their implications, with regard to the reconstruction methodology, ribosome emergence, balance between ribozymes and protein enzymes, and their major difficulties. Additional complexities of the two hypotheses, such as membranes and the energy source of the first reactions, are not treated in the present work. A central element in the proposed experimental strategies is the study of the catalytic activities of very small peptides and RNA-like oligomers, according to existing, as well as to yet-to-be-invented scenarios of the two hypotheses under consideration. It is suggested that the novel directed molecular evolution technology, and molecular computational modeling, can be applied to this research. This strategy is assumed to be essential for the suggested goal of future studies of the origin of life, namely, the establishment of a 'Primordial Darwinian entity'.

  13. The RNA-world and co-evolution hypotheses and the origin of life: Implications, research strategies and perspectives

    Science.gov (United States)

    Lahav, Noam

    1993-12-01

    The applicability of the RNA-world and co-evolution hypotheses to the study of the very first stages of the origin of life is discussed. The discussion focuses on the basic differences between the two hypotheses and their implications, with regard to the reconstruction methodology, ribosome emergence, balance between ribozymes and protein enzymes, and their major difficulties. Additional complexities of the two hypotheses, such as membranes and the energy source of the first reactions, are not treated in the present work. A central element in the proposed experimental strategies is the study of the catalytic activities of very small peptides and RNA-like oligomers, according to existing, as well as to yet-to-be-invented scenarios of the two hypotheses under consideration. It is suggested that the novel directed molecular evolution technology, and molecular computational modeling, can be applied to this research. This strategy is assumed to be essential for the suggested goal of future studies of the origin of life, namely, the establishment of a ‘Primordial Darwinian entity’.

  14. Adaptive co-evolution of strategies and network leading to optimal cooperation level in spatial prisoner's dilemma game

    International Nuclear Information System (INIS)

    Han-Shuang, Chen; Zhong-Huai, Hou; Hou-Wen, Xin; Ji-Qian, Zhang

    2010-01-01

    We study evolutionary prisoner's dilemma game on adaptive networks where a population of players co-evolves with their interaction networks. During the co-evolution process, interacted players with opposite strategies either rewire the link between them with probability p or update their strategies with probability 1 – p depending on their payoffs. Numerical simulation shows that the final network is either split into some disconnected communities whose players share the same strategy within each community or forms a single connected network in which all nodes are in the same strategy. Interestingly, the density of cooperators in the final state can be maximised in an intermediate range of p via the competition between time scale of the network dynamics and that of the node dynamics. Finally, the mean-field analysis helps to understand the results of numerical simulation. Our results may provide some insight into understanding the emergence of cooperation in the real situation where the individuals' behaviour and their relationship adaptively co-evolve. (general)
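
    A minimal sketch of the co-evolution rule described in this record: a discordant link is rewired with probability p, otherwise the lower-payoff player imitates the higher-payoff one; the graph construction, payoff values and update details are illustrative assumptions rather than the paper's exact model.

    # Hypothetical sketch: strategies and links co-evolving in a networked prisoner's dilemma.
    import random

    random.seed(0)
    N, P_REWIRE, STEPS = 200, 0.4, 20_000
    R, S, T, P = 1.0, 0.0, 1.3, 0.1                     # illustrative PD payoffs (T > R > P > S)

    # random graph stored as adjacency sets
    adj = {i: set() for i in range(N)}
    while sum(len(v) for v in adj.values()) < 4 * N:    # average degree ~4
        a, b = random.sample(range(N), 2)
        adj[a].add(b); adj[b].add(a)
    strategy = {i: random.choice("CD") for i in range(N)}

    def payoff(i):
        gain = {"CC": R, "CD": S, "DC": T, "DD": P}
        return sum(gain[strategy[i] + strategy[j]] for j in adj[i])

    for _ in range(STEPS):
        i = random.randrange(N)
        if not adj[i]:
            continue
        j = random.choice(tuple(adj[i]))
        if strategy[i] == strategy[j]:
            continue                                    # only discordant links evolve
        if random.random() < P_REWIRE:                  # rewire the conflicting link
            adj[i].discard(j); adj[j].discard(i)
            k = random.randrange(N)
            if k != i:
                adj[i].add(k); adj[k].add(i)
        else:                                           # poorer player imitates the richer one
            loser, winner = (i, j) if payoff(i) < payoff(j) else (j, i)
            strategy[loser] = strategy[winner]

    print("final cooperator fraction:", sum(s == "C" for s in strategy.values()) / N)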

  15. Parallel Evolution in Science: The Historical Roots and Central Concepts of General Systems Theory; and "General Systems Theory,""Modern Organizational Theory," and Organizational Communication.

    Science.gov (United States)

    Lederman, Linda Costigan; Rogers, Don

    The two papers in this document focus on general systems theory. In her paper, Linda Lederman discusses the emergence and evolution of general systems theory, defines its central concepts, and draws some conclusions regarding the nature of the theory and its value as an epistemology. Don Rogers, in his paper, relates some of the important features…

  16. Parallel evolution in an invasive plant species : evolutionary changes in allocation to growth, defense, competitive ability and regrowth of invasive Jacobaea vulgaris

    NARCIS (Netherlands)

    Lin, Tiantian

    2015-01-01

    Although the introduction of invasive plant species in a given area causes economic and ecological problems, it still provides an ideal opportunity for ecologists to study evolutionary changes. According to the Evolution of Increased Competitive Ability hypothesis and Shifting Defense Hypothesis,

  17. Multi-omics Analysis Sheds Light on the Evolution and the Intracellular Lifestyle Strategies of Spotted Fever Group Rickettsia spp.

    Science.gov (United States)

    El Karkouri, Khalid; Kowalczewska, Malgorzata; Armstrong, Nicholas; Azza, Said; Fournier, Pierre-Edouard; Raoult, Didier

    2017-01-01

    Arthropod-borne Rickettsia species are obligate intracellular bacteria which are pathogenic for humans. Within this genus, Rickettsia slovaca and Rickettsia conorii cause frequent and potentially severe infections, whereas Rickettsia raoultii and Rickettsia massiliae cause rare and milder infections. All four species belong to spotted fever group (SFG) rickettsiae. However, R. slovaca and R. raoultii cause scalp eschar and neck lymphadenopathy (SENLAT) and are mainly associated with Dermacentor ticks, whereas the other two species cause Mediterranean spotted fever (MSF) and are mainly transmitted by Rhipicephalus ticks. To identify the potential genes and protein profiles and to understand the evolutionary processes that could, comprehensively, relate to the differences in virulence and pathogenicity observed between these four species, we compared their genomes and proteomes. The virulent and milder agents displayed divergent phylogenomic evolution in two major clades, whereas either SENLAT or MSF disease suggests a discrete convergent evolution of one virulent and one milder agent, despite their distant genetic relatedness. Moreover, the two virulent species underwent strong reductive genomic evolution and protein structural variations, as well as a probable loss of plasmid(s), compared to the two milder species. However, an abundance of mobilome genes was observed only in the less pathogenic species. After infecting Xenopus laevis cells, the virulent agents displayed fewer up-regulated than down-regulated proteins, as well as a smaller number of identified core proteins. Furthermore, their similar and distinct protein profiles did not contain some genes (e.g., omp A/B and rick A) known to be related to rickettsial adhesion, motility and/or virulence, but may include other putative virulence-, antivirulence-, and/or disease-related proteins. The evolutionary forces identified herein may have a strong impact on intracellular expressions and strategies in these

  18. Social evolution in micro-organisms and a Trojan horse approach to medical intervention strategies.

    Science.gov (United States)

    Brown, Sam P; West, Stuart A; Diggle, Stephen P; Griffin, Ashleigh S

    2009-11-12

    Medical science is typically pitted against the evolutionary forces acting upon infective populations of bacteria. As an alternative strategy, we could exploit our growing understanding of population dynamics of social traits in bacteria to help treat bacterial disease. In particular, population dynamics of social traits could be exploited to introduce less virulent strains of bacteria, or medically beneficial alleles into infective populations. We discuss how bacterial strains adopting different social strategies can invade a population of cooperative wild-type, considering public good cheats, cheats carrying medically beneficial alleles (Trojan horses) and cheats carrying allelopathic traits (anti-competitor chemical bacteriocins or temperate bacteriophage viruses). We suggest that exploitation of the ability of cheats to invade cooperative, wild-type populations is a potential new strategy for treating bacterial disease.

  19. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    The work in the field of parallel processing has developed as a set of research activities using several numerical Monte Carlo simulations related to basic or applied current problems of nuclear and particle physics. For the applications utilizing the GEANT code, development and improvement work was done on the parts simulating low-energy physical phenomena like radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking in the range of low energies up to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other work in this field refers to simulations for nuclear medicine applications like, for instance, the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness), as well as methods for dosimetric calculations. In particular, these calculations are suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other work mentioned in the same field refers to the simulation of electron channelling in crystals and the simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for monitoring natural and artificial radioactivity in the environment

  20. A New Design Strategy for Observing Lithium Oxide Growth-Evolution Interactions Using Geometric Catalyst Positioning.

    Science.gov (United States)

    Ryu, Won-Hee; Gittleson, Forrest S; Li, Jinyang; Tong, Xiao; Taylor, André D

    2016-08-10

    Understanding the catalyzed formation and evolution of lithium-oxide products in Li-O2 batteries is central to the development of next-generation energy storage technology. Catalytic sites, while effective in lowering reaction barriers, often become deactivated when placed on the surface of an oxygen electrode due to passivation by solid products. Here we investigate a mechanism for alleviating catalyst deactivation by dispersing Pd catalytic sites away from the oxygen electrode surface in a well-structured anodic aluminum oxide (AAO) porous membrane interlayer. We observe the cross-sectional product growth and evolution in Li-O2 cells by characterizing products that grow from the electrode surface. Morphological and structural details of the products in both catalyzed and uncatalyzed cells are investigated independently from the influence of the oxygen electrode. We find that the geometric decoration of catalysts far from the conductive electrode surface significantly improves the reaction reversibility by chemically facilitating the oxidation reaction through local coordination with PdO surfaces. The influence of the catalyst position on product composition is further verified by ex situ X-ray photoelectron spectroscopy and Raman spectroscopy in addition to morphological studies.

  1. CONSIDERATIONS CONCERNING THE EVOLUTION OF THE LOCAL DEVELOPMENT STRATEGY OF SOME LEADER TERRITORIES IN ARAD COUNTY

    Directory of Open Access Journals (Sweden)

    Radu Lucian Blaga

    2014-12-01

    Full Text Available The Romanian National Rural Development Programme (NRDP) 2007-2013 is the document which applies the EU Common Agricultural Policy in Romania as an EU member state. LEADER, as part of the EU Common Agricultural Policy, was developed as a territorial planning policy focused on rural areas of intervention. It proved more effective and productive, being decided and implemented at local level by the local actors, using clear and transparent procedures for the evaluation of strategic objectives and plans, with the support of local governments and the technical assistance necessary to transmit best practices. The European Agricultural Fund for Rural Development (EAFRD) finances investments in the LEADER axis, using intervention areas (priority 1, priority 2 and priority 3) and related measures of the NRDP. These measures can be found to a variable degree at the level of the Local Development Strategy (LDS) elaborated by the Local Action Groups (LAGs) of the LEADER territory concerned. Based on these issues, the paper seeks to present some practical considerations on the assessment of the LAGs' activities in the implementation of the strategy, scientifically linked to the portfolio analysis of activities (intervention areas and measures) that compose the Local Development Strategy of some LEADER entities of Arad County. The evaluation used outcome indicators for the implementation of the Strategy.

  2. the evolution of strategy: thinking war from antiquity to the present

    African Journals Online (AJOL)

    ismith

    historical sources to anchor the theoretical departure of the work further, and ... critical stances about the utility of armed forces seem to have entered a growth ... of strategy within the politico-military environment, although Heuser argues that it ... discussions influence decision-making that ultimately affects and in fact tailors.

  3. Learning strategies in excellent and average university students. Their evolution over the first year of the career

    Directory of Open Access Journals (Sweden)

    Gargallo, Bernardo

    2012-12-01

    Full Text Available The aim of this paper was to analyze the evolution of the learning strategies of two groups of students, excellent and average, from 11 degree programmes of the UPV (Valencia, Spain) during their freshman year. The students answered the CEVEAPEU questionnaire at three points in time. The results confirmed that excellent students possess better strategies, and also showed evolutionary patterns in which relevant affective-emotional strategies, such as task value or internal attributions, decrease, while others, such as extrinsic motivation and external attributions, increase. It seems that students' expectations are not met as they adapt to the new context, and professors have important responsibilities in this regard.

  4. Evolution of optimal Lévy-flight strategies in human mental searches

    Science.gov (United States)

    Radicchi, Filippo; Baronchelli, Andrea

    2012-06-01

    Recent analysis of empirical data [Radicchi, Baronchelli, and Amaral, PLoS ONE 7, e29910 (2012); doi:10.1371/journal.pone.0029910] showed that humans adopt Lévy-flight strategies when exploring the bid space in online auctions. A game theoretical model proved that the observed Lévy exponents are nearly optimal, being close to the exponent value that guarantees the maximal economical return to players. Here, we rationalize these findings by adopting an evolutionary perspective. We show that a simple evolutionary process is able to account for the empirical measurements with the only assumption that the reproductive fitness of the players is proportional to their search ability. Contrary to previous modeling, our approach describes the emergence of the observed exponent without resorting to any strong assumptions on the initial searching strategies. Our results generalize earlier research, and open novel questions in cognitive, behavioral, and evolutionary sciences.
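
    A minimal sketch of the evolutionary argument, assuming a crude proxy for search ability (distinct unit cells visited within a fixed travel budget): each agent carries a Lévy exponent mu, draws power-law step lengths by inverse-transform sampling, and reproduces in proportion to fitness with small mutation; every numerical choice here is illustrative.

    # Hypothetical sketch: evolution of Levy-flight exponents when fitness tracks search ability.
    import numpy as np

    rng = np.random.default_rng(0)
    POP, GENS, BUDGET = 100, 40, 500.0

    def levy_steps(mu, rng, l_min=1.0, n=300):
        """Power-law step lengths p(l) ~ l**(-mu), drawn by inverse-transform sampling."""
        u = rng.random(n)
        return l_min * (1.0 - u) ** (-1.0 / (mu - 1.0))

    def fitness(mu, rng):
        """Proxy for search ability: distinct unit cells visited before the travel budget runs out."""
        pos, travelled, visited = 0.0, 0.0, set()
        for l in levy_steps(mu, rng):
            if travelled + l > BUDGET:
                break
            pos += l if rng.random() < 0.5 else -l
            travelled += l
            visited.add(int(pos))
        return len(visited)

    mus = rng.uniform(1.1, 3.5, POP)
    for _ in range(GENS):
        fit = np.array([fitness(m, rng) for m in mus], dtype=float) + 1e-9
        parents = rng.choice(POP, size=POP, p=fit / fit.sum())                 # reproduction proportional to fitness
        mus = np.clip(mus[parents] + rng.normal(0.0, 0.05, POP), 1.05, 4.0)    # small mutation

    print("mean evolved exponent:", round(float(mus.mean()), 2))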

  5. Why Keep Changing? Explaining the Evolution of Singapore’s Military Strategy Since Independence

    Science.gov (United States)

    2017-03-01

    have agreed that the Dolphin strategy, with the 3G SAF as its core, is characterized by the use of “intelligence, speed, and maneuverability in a... secure investor confidence. Any “loss of confidence in the republic’s stability or security would seriously damage its economic health.” The... the Singaporean leadership’s acknowledgement of the need to secure investor confidence and also of the increasing importance of the maritime domain

  6. The Evolution of the U.S. Navy’s Maritime Strategy, 1977-1986

    Science.gov (United States)

    2003-01-01

    Studies Group was designed to try to surmount the natural and artificial barriers to a free exchange of thinking that had developed over the years. In many... to The Maritime Strategy are listed separately, to aid the reader/researcher. (Admittedly, this and other artificial typological devices run against...

  7. Understanding International Product Strategy in Multinational Corporations through New Product Development Approaches and Evolution

    OpenAIRE

    Liu, Yang; Shi, Yongjiang

    2017-01-01

    International product strategy regarding global standardisation and local adaptation is one of the challenges faced by multinational corporations (MNCs). Studies in this area have tested the antecedents and consequences of standardisation/adaptation, but lack a new product development (NPD) perspective. In this study, we explore how product standardisation/adaptation is determined in the NPD context. Through a qualitative case study of four MNCs, we found three NPD approaches: multi-local, ad...

  8. The rock-paper-scissors game and the evolution of alternative male strategies

    Science.gov (United States)

    Sinervo, B.; Lively, C. M.

    1996-03-01

    Many species exhibit colour polymorphisms associated with alternative male reproductive strategies, including territorial males and 'sneaker males' that behave and look like females [1-3]. The prevalence of multiple morphs is a challenge to evolutionary theory because a single strategy should prevail unless morphs have exactly equal fitness [4,5] or a fitness advantage when rare [6,7]. We report here the application of an evolutionarily stable strategy model to a three-morph mating system in the side-blotched lizard. Using parameter estimates from field data, the model predicted oscillations in morph frequency, and the frequencies of the three male morphs were found to oscillate over a six-year period in the field. The fitnesses of each morph relative to other morphs were non-transitive in that each morph could invade another morph when rare, but was itself invadable by another morph when common. Concordance between frequency-dependent selection and the among-year changes in morph fitnesses suggests that male interactions drive a dynamic 'rock-paper-scissors' game [7].
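
    A minimal sketch of the non-transitive dynamics the study describes, using standard replicator dynamics on a generic rock-paper-scissors payoff matrix; the payoff entries and initial frequencies are illustrative, not the field-estimated parameters for the lizard morphs.

    # Hypothetical sketch: replicator dynamics on a rock-paper-scissors payoff matrix.
    import numpy as np

    A = np.array([[ 0.0, -1.0,  1.0],    # each morph beats one rival and loses to the other
                  [ 1.0,  0.0, -1.0],
                  [-1.0,  1.0,  0.0]])

    x = np.array([0.5, 0.3, 0.2])        # initial morph frequencies
    dt = 0.01
    for step in range(30_000):
        f = A @ x                        # per-morph fitness
        x = x + dt * x * (f - x @ f)     # replicator equation: dx_i = x_i (f_i - mean fitness)
        x = np.clip(x, 1e-12, None); x /= x.sum()
        if step % 10_000 == 0:
            print(step, np.round(x, 3))  # frequencies oscillate rather than fixating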

  9. Cathodic electrochemical activation of Co3O4 nanoarrays: a smart strategy to significantly boost the hydrogen evolution activity.

    Science.gov (United States)

    Yang, Li; Zhou, Huang; Qin, Xin; Guo, Xiaodong; Cui, Guanwei; Asiri, Abdullah M; Sun, Xuping

    2018-02-22

    Co(hydro)oxides show unsatisfactory catalytic activity for the hydrogen evolution reaction (HER) in alkaline media, and it is thus highly desirable but still remains a challenge to design and develop Co(hydro)oxide derived materials as superb hydrogen-evolving catalysts using a facile, rapid and less energy-intensive method. Here, we propose a cathodic electrochemical activation strategy toward greatly boosted HER activity of a Co3O4 nanoarray via room-temperature cathodic polarization in sodium hypophosphite solution. After activation, the overpotential significantly decreases from 260 to 73 mV to drive a geometrical catalytic current density of 10 mA cm-2 in 1.0 M KOH. Notably, this activated electrode also shows strong long-term electrochemical durability with the retention of its catalytic activity at 100 mA cm-2 for at least 40 h.

  10. An improved artificial bee colony algorithm based on balance-evolution strategy for unmanned combat aerial vehicle path planning.

    Science.gov (United States)

    Li, Bai; Gong, Li-gang; Yang, Wen-lun

    2014-01-01

    Unmanned combat aerial vehicles (UCAVs) have been of great interest to military organizations throughout the world due to their outstanding capabilities to operate in dangerous or hazardous environments. UCAV path planning aims to obtain an optimal flight route with the threats and constraints in the combat field well considered. In this work, a novel artificial bee colony (ABC) algorithm improved by a balance-evolution strategy (BES) is applied in this optimization scheme. In this new algorithm, convergence information during the iteration is fully utilized to manipulate the exploration/exploitation accuracy and to pursue a balance between local exploitation and global exploration capabilities. Simulation results confirm that the BE-ABC algorithm is more competent for the UCAV path planning scheme than the conventional ABC algorithm and two other state-of-the-art modified ABC algorithms.
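    For orientation, the sketch below implements a plain artificial bee colony loop (employed, onlooker and scout phases) on a stand-in objective. The balance-evolution refinement of the paper, which steers the search using convergence information, is not reproduced here, and all names and parameter values are illustrative.

```python
import random

def sphere(x):
    """Illustrative objective (a stand-in for a UCAV route cost)."""
    return sum(v * v for v in x)

def abc_optimize(f, dim=5, bounds=(-5.0, 5.0), n_food=20, limit=30, iters=200):
    """Minimal artificial bee colony sketch: employed, onlooker and scout phases."""
    lo, hi = bounds
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    costs = [f(x) for x in foods]
    trials = [0] * n_food

    def neighbor(i):
        k = random.choice([j for j in range(n_food) if j != i])
        d = random.randrange(dim)
        x = foods[i][:]
        x[d] += random.uniform(-1.0, 1.0) * (foods[i][d] - foods[k][d])
        x[d] = min(max(x[d], lo), hi)
        return x

    def try_improve(i):
        cand = neighbor(i)
        c = f(cand)
        if c < costs[i]:
            foods[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):             # employed bees
            try_improve(i)
        total = sum(1.0 / (1.0 + c) for c in costs)
        for _ in range(n_food):             # onlooker bees, fitness-proportional choice
            r, acc, i = random.uniform(0, total), 0.0, 0
            for j, c in enumerate(costs):
                acc += 1.0 / (1.0 + c)
                if acc >= r:
                    i = j
                    break
            try_improve(i)
        for i in range(n_food):             # scout bees replace exhausted sources
            if trials[i] > limit:
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                costs[i], trials[i] = f(foods[i]), 0

    best = min(range(n_food), key=lambda i: costs[i])
    return foods[best], costs[best]

if __name__ == "__main__":
    x_best, c_best = abc_optimize(sphere)
    print("best cost:", round(c_best, 6))
```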

  11. An Improved Artificial Bee Colony Algorithm Based on Balance-Evolution Strategy for Unmanned Combat Aerial Vehicle Path Planning

    Directory of Open Access Journals (Sweden)

    Bai Li

    2014-01-01

    Full Text Available Unmanned combat aerial vehicles (UCAVs) have been of great interest to military organizations throughout the world due to their outstanding capabilities to operate in dangerous or hazardous environments. UCAV path planning aims to obtain an optimal flight route with the threats and constraints in the combat field well considered. In this work, a novel artificial bee colony (ABC) algorithm improved by a balance-evolution strategy (BES) is applied in this optimization scheme. In this new algorithm, convergence information during the iteration is fully utilized to manipulate the exploration/exploitation accuracy and to pursue a balance between local exploitation and global exploration capabilities. Simulation results confirm that the BE-ABC algorithm is more competent for the UCAV path planning scheme than the conventional ABC algorithm and two other state-of-the-art modified ABC algorithms.

  12. The evolution of health warning labels on cigarette packs: the role of precedents, and tobacco industry strategies to block diffusion

    Science.gov (United States)

    Hiilamo, Heikki; Crosbie, Eric; Glantz, Stanton A

    2013-01-01

    Objective To analyse the evolution and diffusion of health warnings on cigarette packs around the world, including tobacco industry attempts to block this diffusion. Methods We analysed tobacco industry documents and public sources to construct a database on the global evolution and diffusion of health warning labels from 1966 to 2012, and also analysed industry strategies. Results Health warning labels, especially labels with graphic elements, threaten the tobacco industry because they are a low-cost, effective measure to reduce smoking. Multinational tobacco companies did not object to voluntary innocuous warnings with ambiguous health messages, in part because they saw them as offering protection from lawsuits and local packaging regulations. The companies worked systematically at the international level to block or weaken warnings once stronger, more specific warnings began to appear in the 1970s. Since 1985 in Iceland, the tobacco industry has been aware of the effectiveness of graphic health warning labels (GHWL). The industry launched an all-out attack in the early 1990s to prevent GHWLs, and was successful in delaying GHWLs internationally for nearly 10 years. Conclusions Beginning in 2005, as a result of the World Health Organisation Framework Convention on Tobacco Control (FCTC), GHWLs began to spread. Effective implementation of FCTC labelling provisions has stimulated diffusion of strong health warning labels despite industry opposition. PMID:23092884

  13. DC-Analyzer-facilitated combinatorial strategy for rapid directed evolution of functional enzymes with multiple mutagenesis sites.

    Science.gov (United States)

    Wang, Xiong; Zheng, Kai; Zheng, Huayu; Nie, Hongli; Yang, Zujun; Tang, Lixia

    2014-12-20

    Iterative saturation mutagenesis (ISM) has been shown to be a powerful method for directed evolution. In this study, the approach was modified (termed M-ISM) by combining the single-site saturation mutagenesis method with a DC-Analyzer-facilitated combinatorial strategy, aiming to evolve novel biocatalysts efficiently in cases where multiple sites are targeted simultaneously. Initially, all target sites were explored individually by constructing single-site saturation mutagenesis libraries. Next, the top two to four variants in each library were selected and combined using the DC-Analyzer-facilitated combinatorial strategy. In addition to site-saturation mutagenesis, iterative saturation mutagenesis also needed to be performed. The advantages of M-ISM over ISM are that the screening effort is greatly reduced and the entire M-ISM procedure is less time-consuming. The M-ISM strategy was successfully applied to the randomization of halohydrin dehalogenase from Agrobacterium radiobacter AD1 (HheC) when five interesting sites were targeted simultaneously. After screening 900 clones in total, six positive mutants were obtained. These mutants exhibited 4.0- to 9.3-fold higher k(cat) values than did the wild-type HheC toward 1,3-dichloro-2-propanol. However, with the ISM strategy, the best hit showed a 5.9-fold higher k(cat) value toward 1,3-DCP than the wild-type HheC, which was obtained after screening 4000 clones from four rounds of mutagenesis. Therefore, M-ISM could serve as a simple and efficient version of ISM for the randomization of target genes with multiple positions of interest.
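    The combinatorial step described above (retaining the top two to four variants per single-site library and combining them) can be sketched as a simple enumeration. The positions and amino-acid letters below are hypothetical placeholders, not the HheC data.

```python
from itertools import product

# Hypothetical top variants retained from each single-site saturation library
# (residue positions and amino-acid letters are placeholders, not the HheC results).
top_variants = {
    "site_84":  ["A", "S", "T"],
    "site_86":  ["F", "W"],
    "site_134": ["V", "I", "L"],
    "site_176": ["G", "A"],
    "site_187": ["N", "D"],
}

# Enumerate every combination of the retained substitutions; this is the library that a
# DC-Analyzer-style combinatorial step would assemble for screening.
combinations = [dict(zip(top_variants, combo)) for combo in product(*top_variants.values())]
print(f"{len(combinations)} combinatorial variants to screen "
      f"(vs. 20**{len(top_variants)} for naive full randomization of all sites)")
```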

  14. The evolution of different maternal investment strategies in two closely related desert vertebrates

    Science.gov (United States)

    Ennen, Joshua R.; Lovich, Jeffrey E.; Averill-Murray, Roy C.; Yackulic, Charles B.; Agha, Mickey; Loughran, Caleb; Tennant, Laura A.; Sinervo, Barry

    2017-01-01

    We compared egg size phenotypes and tested several predictions from the optimal egg size (OES) and bet-hedging theories in two North American desert-dwelling sister tortoise taxa, Gopherus agassizii and G. morafkai, that inhabit different climate spaces: relatively unpredictable and more predictable climate spaces, respectively. Observed patterns in both species differed from the predictions of OES in several ways. Mean egg size increased with maternal body size in both species. Mean egg size was inversely related to clutch order in G. agassizii, a strategy more consistent with the within-generation hypothesis arising out of bet-hedging theory or a constraint in egg investment due to resource availability, and contrary to theories of density dependence, which posit that increasing hatchling competition from later season clutches should drive selection for larger eggs. We provide empirical evidence that one species, G. agassizii, employs a bet-hedging strategy that is a combination of two different bet-hedging hypotheses. Additionally, we found some evidence for G. morafkai employing a conservative bet-hedging strategy (e.g., lack of intra- and interclutch variation in egg size relative to body size). Our novel adaptive hypothesis suggests the possibility that natural selection favors smaller offspring in late-season clutches because they experience a more benign environment or less energetically challenging environmental conditions (i.e., winter) than early clutch progeny, that emerge under harsher and more energetically challenging environmental conditions (i.e., summer). We also discuss alternative hypotheses of sexually antagonistic selection, which arise from the trade-offs of son versus daughter production that might have different optima depending on clutch order and variation in temperature-dependent sex determination (TSD) among clutches. Resolution of these hypotheses will require long-term data on fitness of sons versus daughters as a function of

  15. Sources Of Evolution Of The Japan Air Self Defense Force’s Strategy

    Science.gov (United States)

    2016-12-01

    national security. • Second is the battle space of war. “Defensive defense” strategy conducts war definitely in its territory and bears that most of its...russia-in-2016/. Ankit Panda, “East China Sea: Japan Reacts as Chinese Air Force Conducts Miyako Strait Drill,” Diplomat, September 26, 2016...June 22, 2016. http://www.voanews.com/a/north-korea-failed-missile-tests-show-real-progress/3386692.html. Panda, Ankit. “East China Sea: Japan Reacts

  16. Social Mobile Marketing: Evolution of Communication Strategies in the Web 2.0 Era

    Directory of Open Access Journals (Sweden)

    Stefano Franco

    2014-07-01

    Full Text Available Increasingly fast communication streams, which ease interactions and allow agents to considerably enhance their own informational assets, characterize the era in which we live. Research on new media, mobile and social technologies drives these changes, bringing a revolution in content management, information accessibility and relationship interactivity. Agents do not remain unresponsive to these characteristics, and it is therefore worth understanding the tools available to firms and institutions and the communication and marketing policies that organizations use to achieve their goals. In this context we seek strategic and operational models to support organizational decisions about markets and territories. The purpose of this article is to understand how small organizations can utilize the networks that characterize new trends in marketing. We conclude by providing some thoughts on the future evolution of research in this field, also with reference to the smart city, which can exploit social mobile marketing to promote the territory and social participation.

  17. Bacteria between protists and phages: from antipredation strategies to the evolution of pathogenicity.

    Science.gov (United States)

    Brüssow, Harald

    2007-08-01

    Bacteriophages and protists are major causes of bacterial mortality. Genomics suggests that phages evolved well before eukaryotic protists. Bacteria were thus initially only confronted with phage predators. When protists evolved, bacteria were caught between two types of predators. One successful antigrazing strategy of bacteria was the elaboration of toxins that would kill the grazer. The released cell content would feed bystander bacteria. I suggest here that, to fight grazing protists, bacteria teamed up with those phage predators that concluded at least a temporary truce with them in the form of lysogeny. Lysogeny was perhaps initially a resource management strategy of phages that could not maintain infection chains. Subsequently, lysogeny might have evolved into a bacterium-prophage coalition attacking protists, which became a food source for them. When protists evolved into multicellular animals, the lysogenic bacteria tracked their evolving food source. This hypothesis could explain why a frequent scheme of bacterial pathogenicity is the survival in phagocytes, why a significant fraction of bacterial pathogens have prophage-encoded virulence genes, and why some virulence factors of animal pathogens are active against unicellular eukaryotes. Bacterial pathogenicity might thus be one playing option of the stone-scissor-paper game played between phages-bacteria-protists, with humans getting into the crossfire.

  18. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated, and the parallel algorithm and the parallelization of programs on parallel computers with both shared memory and distributed memory are discussed. By analyzing the inherent structure of the mathematical and physical model of photon transport in light of the architecture of parallel computers, using a divide-and-conquer strategy, adjusting the algorithm structure of the program, dissolving data dependencies, finding parallelizable ingredients and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various high-performance parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup was obtained
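    A minimal illustration of the large-grain task decomposition described above, assuming a toy one-dimensional slab-transmission Monte Carlo rather than the production photon-transport code; batch sizes, the attenuation model and all names are illustrative.

```python
import math
import random
from multiprocessing import Pool

def transmit_batch(args):
    """Toy Monte Carlo: count photons whose free path exceeds the slab's optical thickness.
    Each batch is an independent, large-grain subtask with its own RNG seed."""
    n_photons, thickness, seed = args
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        if rng.expovariate(1.0) > thickness:   # exponential free path, absorb-or-transmit model
            transmitted += 1
    return transmitted

if __name__ == "__main__":
    n_workers, n_per_batch, thickness = 4, 250_000, 2.0
    tasks = [(n_per_batch, thickness, seed) for seed in range(n_workers)]
    with Pool(n_workers) as pool:              # batches run in parallel with no shared data
        counts = pool.map(transmit_batch, tasks)
    estimate = sum(counts) / (n_workers * n_per_batch)
    print(f"MC transmission ~ {estimate:.4f}, analytic exp(-tau) = {math.exp(-thickness):.4f}")
```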

  19. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  20. Multi-omics Analysis Sheds Light on the Evolution and the Intracellular Lifestyle Strategies of Spotted Fever Group Rickettsia spp.

    Directory of Open Access Journals (Sweden)

    Khalid El Karkouri

    2017-07-01

    Full Text Available Arthropod-borne Rickettsia species are obligate intracellular bacteria which are pathogenic for humans. Within this genus, Rickettsia slovaca and Rickettsia conorii cause frequent and potentially severe infections, whereas Rickettsia raoultii and Rickettsia massiliae cause rare and milder infections. All four species belong to spotted fever group (SFG) rickettsiae. However, R. slovaca and R. raoultii cause scalp eschar and neck lymphadenopathy (SENLAT) and are mainly associated with Dermacentor ticks, whereas the other two species cause Mediterranean spotted fever (MSF) and are mainly transmitted by Rhipicephalus ticks. To identify the potential genes and protein profiles and to understand the evolutionary processes that could, comprehensively, relate to the differences in virulence and pathogenicity observed between these four species, we compared their genomes and proteomes. The virulent and milder agents displayed divergent phylogenomic evolution in two major clades, whereas either SENLAT or MSF disease suggests a discrete convergent evolution of one virulent and one milder agent, despite their distant genetic relatedness. Moreover, the two virulent species underwent strong reductive genomic evolution and protein structural variations, as well as a probable loss of plasmid(s), compared to the two milder species. However, an abundance of mobilome genes was observed only in the less pathogenic species. After infecting Xenopus laevis cells, the virulent agents displayed fewer up-regulated than down-regulated proteins, as well as a smaller number of identified core proteins. Furthermore, their similar and distinct protein profiles did not contain some genes (e.g., ompA/B and rickA) known to be related to rickettsial adhesion, motility and/or virulence, but may include other putative virulence-, antivirulence-, and/or disease-related proteins. The identified evolutionary forces herein may have a strong impact on intracellular expressions and strategies in

  1. Multi-omics Analysis Sheds Light on the Evolution and the Intracellular Lifestyle Strategies of Spotted Fever Group Rickettsia spp.

    Science.gov (United States)

    El Karkouri, Khalid; Kowalczewska, Malgorzata; Armstrong, Nicholas; Azza, Said; Fournier, Pierre-Edouard; Raoult, Didier

    2017-01-01

    Arthropod-borne Rickettsia species are obligate intracellular bacteria which are pathogenic for humans. Within this genus, Rickettsia slovaca and Rickettsia conorii cause frequent and potentially severe infections, whereas Rickettsia raoultii and Rickettsia massiliae cause rare and milder infections. All four species belong to spotted fever group (SFG) rickettsiae. However, R. slovaca and R. raoultii cause scalp eschar and neck lymphadenopathy (SENLAT) and are mainly associated with Dermacentor ticks, whereas the other two species cause Mediterranean spotted fever (MSF) and are mainly transmitted by Rhipicephalus ticks. To identify the potential genes and protein profiles and to understand the evolutionary processes that could, comprehensively, relate to the differences in virulence and pathogenicity observed between these four species, we compared their genomes and proteomes. The virulent and milder agents displayed divergent phylogenomic evolution in two major clades, whereas either SENLAT or MSF disease suggests a discrete convergent evolution of one virulent and one milder agent, despite their distant genetic relatedness. Moreover, the two virulent species underwent strong reductive genomic evolution and protein structural variations, as well as a probable loss of plasmid(s), compared to the two milder species. However, an abundance of mobilome genes was observed only in the less pathogenic species. After infecting Xenopus laevis cells, the virulent agents displayed fewer up-regulated than down-regulated proteins, as well as a smaller number of identified core proteins. Furthermore, their similar and distinct protein profiles did not contain some genes (e.g., ompA/B and rickA) known to be related to rickettsial adhesion, motility and/or virulence, but may include other putative virulence-, antivirulence-, and/or disease-related proteins. The identified evolutionary forces herein may have a strong impact on intracellular expressions and strategies in these rickettsiae

  2. Orogen-parallel variation in exhumation and its influence on critical taper evolution: The case of the Emilia-Romagna Apennine (Italy)

    Science.gov (United States)

    Bonini, Marco

    2018-03-01

    The Northern Apennine prowedge exposes two adjacent sectors showing a marked along-strike change in erosion intensity, namely the Emilia Apennine to the northwest and the Romagna Apennine to the southeast. This setting has resulted from Pliocene erosion (≤5 Ma) and exhumation, which have affected the whole Romagna sector and mostly the watershed ridge in Emilia. Such an evolution has conceivably influenced the equilibrium of this fold-and-thrust belt, which can be evaluated in terms of critical Coulomb wedge theory. The present state of the thrust wedge has been assessed by crosschecking wedge tapers measured along transverse profiles with fluid pressure values inferred from deep wellbores. The interpretation of available data suggests that both Emilia and Romagna are currently overcritical. This condition is compatible with the presence in both sectors of active NE-dipping normal faults, which would work to decrease the surface slope of the orogenic wedge. However, the presence of Late Miocene-Pliocene passive-roof and out-of-sequence thrusts in Romagna may reveal a past undercritical wedge state ensuing during the regional erosion phase, thereby implying that the current overcritical condition would be a recent feature. The setting of the Emilia Apennine (i.e., strong axial exhumation and limited erosion of the prowedge) suggests instead a long lasting overcritical wedge, which was probably contemporaneous with the Pliocene undercritical wedge in Romagna. The reasons for this evolution are still unclear, although they may be linked to lithosphere-scale processes that have promoted the uplift of Romagna relative to Emilia. The lessons from the Northern Apennine thus suggest that erosion and exhumation have the ability to produce marked along-strike changes in the equilibrium of a fold-and-thrust belt.

  3. Improving energy efficiency: Strategies for supporting sustained market evolution in developing and transitioning countries

    Energy Technology Data Exchange (ETDEWEB)

    Meyers, S.

    1998-02-01

    This report presents a framework for considering market-oriented strategies for improving energy efficiency that recognize the conditions of developing and transitioning countries, and the need to strengthen the effectiveness of market forces in delivering greater energy efficiency. It discusses policies that build markets in general, such as economic and energy pricing reforms that encourage competition and increase incentives for market actors to improve the efficiency of their energy use, and measures that reduce the barriers to energy efficiency in specific markets such that improvement evolves in a dynamic, lasting manner. The report emphasizes how different policies and measures support one another and can create a synergy in which the whole is greater than the sum of the parts. In addressing this topic, it draws on the experience with market transformation energy efficiency programs in the US and other industrialized countries.

  4. The Dynamic Evolution of Firms’ Pollution Control Strategy under Graded Reward-Penalty Mechanism

    Directory of Open Access Journals (Sweden)

    Li Ming Chen

    2016-01-01

    Full Text Available The externality of the pollution problem leaves firms with little incentive to reduce pollution emissions. Therefore, it is necessary to design a reasonable environmental regulation mechanism to effectively urge firms to control pollution. In order to motivate firms to control pollution, we divide firms into different grades according to their pollution level and construct an evolutionary game model to analyze the interaction between government regulation and firms’ pollution control under a graded reward-penalty mechanism. We then discuss the stability of firms’ pollution control strategy and derive the condition for inducing firms to control pollution. Our findings indicate that firms tend to control pollution after long-term repeated games if the government’s excitation level and monitoring frequency meet certain conditions. Otherwise, firms tend to discharge pollution that exceeds the stipulated standards. As a result, in order to effectively control pollution, a government should adjust its excitation level and monitoring frequency reasonably.
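    A minimal sketch of the kind of two-population evolutionary game the abstract describes, using simple replicator dynamics for the firm and regulator populations; the payoff parameters below are illustrative assumptions, not the graded reward-penalty calibration of the paper.

```python
import numpy as np

# Illustrative payoff parameters (not the paper's calibration): x is the fraction of
# firms that control pollution, y the fraction of time the regulator monitors.
reward, penalty, control_cost, monitor_cost = 4.0, 6.0, 3.0, 1.0

def step(x, y, dt=0.01):
    # Firm payoffs: controlling earns the reward when monitored; discharging risks the penalty.
    f_control = y * reward - control_cost
    f_discharge = -y * penalty
    # Regulator payoffs: monitoring costs something but collects penalties from dischargers.
    g_monitor = (1 - x) * penalty - monitor_cost
    g_idle = 0.0
    # Two-population replicator dynamics: dx/dt = x(1-x)(payoff difference), likewise for y.
    x += dt * x * (1 - x) * (f_control - f_discharge)
    y += dt * y * (1 - y) * (g_monitor - g_idle)
    return min(max(x, 0.0), 1.0), min(max(y, 0.0), 1.0)

x, y = 0.2, 0.5
for _ in range(20000):
    x, y = step(x, y)
print(f"share of compliant firms ~ {x:.2f}, monitoring frequency ~ {y:.2f}")
```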

  5. Evolution of flowering strategies in Oenothera glazioviana: an integral projection model approach.

    Science.gov (United States)

    Rees, Mark; Rose, Karen E

    2002-07-22

    The timing of reproduction is a key determinant of fitness. Here, we develop parameterized integral projection models of size-related flowering for the monocarpic perennial Oenothera glazioviana and use these to predict the evolutionarily stable strategy (ESS) for flowering. For the most part there is excellent agreement between the model predictions and the results of quantitative field studies. However, the model predicts a much steeper relationship between plant size and the probability of flowering than observed in the field, indicating selection for a 'threshold size' flowering function. Elasticity and sensitivity analysis of population growth rate lambda and net reproductive rate R(0) are used to identify the critical traits that determine fitness and control the ESS for flowering. Using the fitted model we calculate the fitness landscape for invading genotypes and show that this is characterized by a ridge of approximately equal fitness. The implications of these results for the maintenance of genetic variation are discussed.
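    The sketch below shows how such a size-structured integral projection model can be discretized and its population growth rate lambda obtained as the dominant eigenvalue of the kernel matrix; all vital-rate functions and coefficients are illustrative placeholders, not the fitted Oenothera glazioviana parameters.

```python
import numpy as np

# Illustrative vital-rate functions of (log) size z; coefficients are placeholders.
def survival(z): return 1.0 / (1.0 + np.exp(-(-1.0 + 1.5 * z)))
def flowering_prob(z, beta0=-8.0, beta1=3.0):
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * z)))     # size-related flowering
def growth_kernel(z1, z):                                  # next size z1 given current size z
    mu, sd = 0.8 + 0.7 * z, 0.4
    return np.exp(-0.5 * ((z1 - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
def fecundity(z): return np.exp(1.0 + 1.2 * z)             # seed production of flowering plants
def recruit_dist(z1):
    mu, sd = 0.5, 0.35
    return np.exp(-0.5 * ((z1 - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def ipm_lambda(n_mesh=120, z_lo=0.0, z_hi=5.0, establishment=0.02):
    z = np.linspace(z_lo, z_hi, n_mesh)
    h = z[1] - z[0]
    Z1, Z = np.meshgrid(z, z, indexing="ij")               # rows: next size, cols: current size
    p_flower = flowering_prob(Z)
    # Survival/growth: only non-flowering plants survive to grow (monocarpy).
    P = survival(Z) * (1.0 - p_flower) * growth_kernel(Z1, Z)
    # Fecundity: flowering plants produce recruits drawn from the recruit size distribution.
    F = p_flower * fecundity(Z) * establishment * recruit_dist(Z1)
    K = h * (P + F)
    return np.max(np.abs(np.linalg.eigvals(K)))            # dominant eigenvalue = lambda

print("population growth rate lambda ~", round(float(ipm_lambda()), 3))
```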

  6. 'Desa SIAGA', the 'Alert Village': the evolution of an iconic brand in Indonesian public health strategies.

    Science.gov (United States)

    Hill, Peter S; Goeman, Lieve; Sofiarini, Rahmi; Djara, Maddi M

    2014-07-01

    In 1999, the Ministry of Women's Empowerment in Indonesia worked with advertisers in Jakarta and international technical advisors to develop the concept of 'Suami SIAGA', the 'Alert Husband', confronting Indonesian males with their responsibilities to be aware of their wives' needs and ensure early access if needed to trained obstetrics care. The model was rapidly expanded to apply to the 'Desa SIAGA', the 'Alert Village', with communities assuming the responsibility for awareness of the risks of pregnancy and childbirth, and supporting registered pregnant mothers with funding and transportation for emergency obstetric assistance, and identified blood donors. Based on the participant observation, interviews and documentary analysis, this article uses a systems perspective to trace the evolution of that iconic 'brand' as new national and international actors further developed the concept and its application in provincial and national programmes. In 2010, it underwent a further transformation to become 'Desa Siaga Aktif', a national programme with responsibilities expanded to include the provision of basic health services at village level, and the surveillance of communicable disease, monitoring of lifestyle activities and disaster preparedness, in addition to the management of obstetric emergencies. By tracking the use of this single 'brand', the study provides insights into the complex adaptive system of policy and programme development with its rich interactions between multiple international, national, provincial and sectoral stakeholders, the unpredictable responses to feedback from these actors and their activities and the resultant emergence of new policy elements, new programmes and new levels of operation within the system. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2013; all rights reserved.

  7. Evolution of transoral approaches, endoscopic endonasal approaches, and reduction strategies for treatment of craniovertebral junction pathology: a treatment algorithm update.

    Science.gov (United States)

    Dlouhy, Brian J; Dahdaleh, Nader S; Menezes, Arnold H

    2015-04-01

    The craniovertebral junction (CVJ), or the craniocervical junction (CCJ) as it is otherwise known, houses the crossroads of the CNS and is composed of the occipital bone that surrounds the foramen magnum, the atlas vertebrae, the axis vertebrae, and their associated ligaments and musculature. The musculoskeletal organization of the CVJ is unique and complex, resulting in a wide range of congenital, developmental, and acquired pathology. The refinements of the transoral approach to the CVJ by the senior author (A.H.M.) in the late 1970s revolutionized the treatment of CVJ pathology. At the same time, a physiological approach to CVJ management was adopted at the University of Iowa Hospitals and Clinics in 1977 based on the stability and motion dynamics of the CVJ and the site of encroachment, incorporating the transoral approach for irreducible ventral CVJ pathology. Since then, approaches and techniques to treat ventral CVJ lesions have evolved. In the last 40 years at University of Iowa Hospitals and Clinics, multiple approaches to the CVJ have evolved and a better understanding of CVJ pathology has been established. In addition, new reduction strategies that have diminished the need to perform ventral decompressive approaches have been developed and implemented. In this era of surgical subspecialization, to properly treat complex CVJ pathology, the CVJ specialist must be trained in skull base transoral and endoscopic endonasal approaches, pediatric and adult CVJ spine surgery, and must understand and be able to treat the complex CSF dynamics present in CVJ pathology to provide the appropriate, optimal, and tailored treatment strategy for each individual patient, both child and adult. This is a comprehensive review of the history and evolution of the transoral approaches, extended transoral approaches, endoscopic-assisted transoral approaches, endoscopic endonasal approaches, and CVJ reduction strategies. Incorporating these advancements, the authors update the

  8. Evolution of ITER tritium confinement strategy and adaptation to Cadarache site conditions and French regulatory requirements

    International Nuclear Information System (INIS)

    Murdoch, D.

    2007-01-01

    The ITER Nuclear Buildings include the Tokamak, Tritium and Diagnostic Buildings (Tokamak Complex) and the Hot Cell and Low Level Radioactive Waste Buildings. The Tritium Confinement Strategy of the Nuclear Buildings comprises key features of the Atmosphere and Vent Detritiation Systems (ADS/VDS) and the Heating, Ventilation and Air Conditioning (HVAC) Systems. The designs developed during the ITER EDA (Engineering Design Activities) for these systems need to be adapted to the specific conditions of the Cadarache site and modified to conform with the regulatory requirements applicable to Installations Nucleaires de Base (INB) - Basic Nuclear Installations - in France. The highest priority for such adaptation has been identified as the Tritium Confinement of the Tokamak Complex and the progress in development of a robust, coherent design concept compliant with French practice is described in the paper. The Tokamak Complex HVAC concept for generic conditions was developed for operational cost minimisation under more extreme climatic conditions (primarily temperature) than those valid for Cadarache, and incorporated recirculation of a large fraction of the air flow through the HVAC systems to achieve this objective. Due to the impracticality of precluding the spread of contamination from areas of higher activity to less contaminated areas, this concept has been abandoned in favour of a once-through configuration, which requires a complete redesign, with revised air change rates, module sizes, layout, redundancy provisions and other features. The ADS/VDS concept developed for the generic design of the ITER Tokamak Complex is undergoing a radical revision in which the system architecture, module sizing and basic process are being optimised for the Cadarache conditions. Investigation is being launched into the implementation of a wet stripper concept to replace the molecular sieve (MS) beds incorporated in the generic design, where concerns have been raised over low

  9. Removal of antibiotics in a parallel-plate thin-film-photocatalytic reactor: Process modeling and evolution of transformation by-products and toxicity.

    Science.gov (United States)

    Özkal, Can Burak; Frontistis, Zacharias; Antonopoulou, Maria; Konstantinou, Ioannis; Mantzavinos, Dionissios; Meriç, Süreyya

    2017-10-01

    Photocatalytic degradation of sulfamethoxazole (SMX) antibiotic has been studied under recycling batch and homogeneous flow conditions in a thin-film coated immobilized system, namely a parallel-plate (PPL) reactor. The experiments were designed and statistically evaluated with a factorial design (FD) approach, with the intent of providing a mathematical model that takes into account the parameters influencing process performance. Initial antibiotic concentration, UV energy level, irradiated surface area, water matrix (ultrapure and secondary treated wastewater) and time were defined as model parameters. A full 2^5 experimental design consisted of 32 randomized experiments. PPL reactor test experiments were carried out in order to set boundary levels for hydraulic, volumetric and defined process parameters. TTIP-based thin films with polyethylene glycol + TiO2 additives were fabricated according to the pre-described methodology. Antibiotic degradation was monitored by High Performance Liquid Chromatography analysis while the degradation products were identified by LC-TOF-MS analysis. Acute toxicity of untreated and treated SMX solutions was tested by the standard Daphnia magna method. Based on the obtained mathematical model, the response of the immobilized PC system is described with a polynomial equation. The statistically significant positive effects are initial SMX concentration, process time and the combined effect of both, while the combined effect of water matrix and irradiated surface area displays an adverse effect on the rate of antibiotic degradation by photocatalytic oxidation. Process efficiency and the validity of the acquired mathematical model were also verified for levofloxacin and cefaclor antibiotics. Immobilized PC degradation in the PPL reactor configuration was found capable of providing reduced effluent toxicity by simultaneous degradation of the SMX parent compound and TBPs. Copyright © 2017. Published by Elsevier B.V.
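    For readers unfamiliar with the design, a full 2^5 factorial layout and a first-order-plus-interactions polynomial fit can be sketched as follows; the factor labels and the synthetic response are illustrative stand-ins, not the measured degradation data.

```python
import numpy as np
from itertools import product, combinations

factors = ["C0", "UV", "area", "matrix", "time"]   # coded -1/+1 levels (illustrative labels)
runs = np.array(list(product([-1, 1], repeat=5)), dtype=float)   # full 2^5 design: 32 runs

def model_matrix(X):
    """Intercept, main effects and two-factor interactions."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

# Synthetic response standing in for measured degradation (%) -- not the paper's data.
rng = np.random.default_rng(0)
true_effects = rng.normal(0, 2, size=model_matrix(runs).shape[1])
y = model_matrix(runs) @ true_effects + rng.normal(0, 0.5, size=len(runs))

coef, *_ = np.linalg.lstsq(model_matrix(runs), y, rcond=None)
print("intercept and first three main effects:", np.round(coef[:4], 2))
```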

  10. Parallel kinematics: type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics: Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) as well as providing a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity model taking into account motion and force transmissibility, and others.   This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  11. Oil Exploration and Production in Africa since 2014. Evolution of the Key Players and their Strategies

    International Nuclear Information System (INIS)

    Auge, Benjamin

    2018-05-01

    The fall in oil prices, which began in fall 2014, had a significant influence on the strategies of the key players in the oil industry in Africa. The continent's oil sector has experienced deep-reaching changes, ranging from a drop in exploration budgets, the disappearance or sale of weakened companies, the reorganization or pullback of the traditional oil majors, the establishment of new companies and the arrival of state-owned companies attracted by the crisis-induced windfall effects. Yet the crisis has not redefined the geography of African production, whose principal giants are and will continue to be Nigeria, Angola, Algeria and Libya, at least in terms of reserves. Nevertheless, new zones have emerged, in particular due to the risks taken by junior players backed by powerful investment funds that have had faith in the potential of geologists and technical teams formerly employed by the big companies. This is first of all the case in East Africa: for oil in Uganda and Kenya, and for gas in Tanzania and Mozambique. Several very significant oil and gas discoveries have been made in a new basin located between Mauritania and Guinea, contributing to its development. However, no single African model has taken shape, as each company has followed its own path in making decisions about acquisition and exploration. Whereas traditional players, such as the Western majors ENI and Total, have continued to invest on this continent that plays a central role in their global production and strategy, some big companies, such as ConocoPhillips, have quite simply left the field, while others, such as BP and ExxonMobil, have made new high-risk acquisitions. As for the large Asian state-owned companies, China's investments in exploration and production have tended to stagnate (concerning CNPC and Sinopec in particular), while others, such as the Indonesian Pertamina or India's ONGC, have significantly bolstered their presence. The disengagement of the

  12. Evolution of risk assessment strategies for food and feed uses of stacked GM events.

    Science.gov (United States)

    Kramer, Catherine; Brune, Phil; McDonald, Justin; Nesbitt, Monique; Sauve, Alaina; Storck-Weyhermueller, Sabine

    2016-09-01

    Data requirements are not harmonized globally for the regulation of food and feed derived from stacked genetically modified (GM) events, produced by combining individual GM events through conventional breeding. The data required by some regulatory agencies have increased despite the absence of substantiated adverse effects to animals or humans from the consumption of GM crops. Data from studies conducted over a 15-year period for several stacked GM event maize (Zea mays L.) products (Bt11 × GA21, Bt11 × MIR604, MIR604 × GA21, Bt11 × MIR604 × GA21, Bt11 × MIR162 × GA21 and Bt11 × MIR604 × MIR162 × GA21), together with their component single events, are presented. These data provide evidence that no substantial changes in composition, protein expression or insert stability have occurred after combining the single events through conventional breeding. An alternative food and feed risk assessment strategy for stacked GM events is suggested based on a problem formulation approach that utilizes (i) the outcome of the single event risk assessments, and (ii) the potential for interactions in the stack, based on an understanding of the mode of action of the transgenes and their products. © 2016 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.

  13. The Rise and Fall of an Evolutionary Innovation: Contrasting Strategies of Venom Evolution in Ancient and Young Animals.

    Science.gov (United States)

    Sunagar, Kartik; Moran, Yehu

    2015-10-01

    Animal venoms are theorized to evolve under the significant influence of positive Darwinian selection in a chemical arms race scenario, where the evolution of venom resistance in prey and the invention of potent venom in the secreting animal exert reciprocal selection pressures. Venom research to date has mainly focused on evolutionarily younger lineages, such as snakes and cone snails, while mostly neglecting ancient clades (e.g., cnidarians, coleoids, spiders and centipedes). By examining genome, venom-gland transcriptome and sequences from the public repositories, we report the molecular evolutionary regimes of several centipede and spider toxin families, which surprisingly accumulated low-levels of sequence variations, despite their long evolutionary histories. Molecular evolutionary assessment of over 3500 nucleotide sequences from 85 toxin families spanning the breadth of the animal kingdom has unraveled a contrasting evolutionary strategy employed by ancient and evolutionarily young clades. We show that the venoms of ancient lineages remarkably evolve under the heavy constraints of negative selection, while toxin families in lineages that originated relatively recently rapidly diversify under the influence of positive selection. We propose that animal venoms mostly employ a 'two-speed' mode of evolution, where the major influence of diversifying selection accompanies the earlier stages of ecological specialization (e.g., diet and range expansion) in the evolutionary history of the species-the period of expansion, resulting in the rapid diversification of the venom arsenal, followed by longer periods of purifying selection that preserve the potent toxin pharmacopeia-the period of purification and fixation. However, species in the period of purification may re-enter the period of expansion upon experiencing a major shift in ecology or environment. Thus, we highlight for the first time the significant roles of purifying and episodic selections in shaping animal

  14. The Rise and Fall of an Evolutionary Innovation: Contrasting Strategies of Venom Evolution in Ancient and Young Animals.

    Directory of Open Access Journals (Sweden)

    Kartik Sunagar

    2015-10-01

    Full Text Available Animal venoms are theorized to evolve under the significant influence of positive Darwinian selection in a chemical arms race scenario, where the evolution of venom resistance in prey and the invention of potent venom in the secreting animal exert reciprocal selection pressures. Venom research to date has mainly focused on evolutionarily younger lineages, such as snakes and cone snails, while mostly neglecting ancient clades (e.g., cnidarians, coleoids, spiders and centipedes). By examining genome, venom-gland transcriptome and sequences from the public repositories, we report the molecular evolutionary regimes of several centipede and spider toxin families, which surprisingly accumulated low-levels of sequence variations, despite their long evolutionary histories. Molecular evolutionary assessment of over 3500 nucleotide sequences from 85 toxin families spanning the breadth of the animal kingdom has unraveled a contrasting evolutionary strategy employed by ancient and evolutionarily young clades. We show that the venoms of ancient lineages remarkably evolve under the heavy constraints of negative selection, while toxin families in lineages that originated relatively recently rapidly diversify under the influence of positive selection. We propose that animal venoms mostly employ a 'two-speed' mode of evolution, where the major influence of diversifying selection accompanies the earlier stages of ecological specialization (e.g., diet and range expansion) in the evolutionary history of the species-the period of expansion, resulting in the rapid diversification of the venom arsenal, followed by longer periods of purifying selection that preserve the potent toxin pharmacopeia-the period of purification and fixation. However, species in the period of purification may re-enter the period of expansion upon experiencing a major shift in ecology or environment. Thus, we highlight for the first time the significant roles of purifying and episodic selections

  15. The Evolution of the Internet Community and the"Yet-to-Evolve" Smart Grid Community: Parallels and Lessons-to-be-Learned

    Energy Technology Data Exchange (ETDEWEB)

    McParland, Charles

    2009-11-06

    The Smart Grid envisions a transformed US power distribution grid that enables communicating devices, under human supervision, to moderate loads and increase overall system stability and security. This vision explicitly promotes increased participation from a community that, in the past, has had little involvement in power grid operations - the consumer. The potential size of this new community and its members' extensive experience with the public Internet prompts an analysis of the evolution and current state of the Internet as a predictor for best practices in the architectural design of certain portions of the Smart Grid network. Although still evolving, the vision of the Smart Grid is that of a community of communicating and cooperating energy-related devices that can be directed to route power and modulate loads in pursuit of an integrated, efficient and secure electrical power grid. The remaking of the present power grid into the Smart Grid is considered to be as fundamentally transformative as previous developments such as modern computing technology and high bandwidth data communications. However, unlike these earlier developments, which relied on the discovery of critical new technologies (e.g. the transistor or optical fiber transmission lines), the technologies required for the Smart Grid currently exist and, in many cases, are already widely deployed. In contrast to other examples of technical transformations, the path (and success) of the Smart Grid will be determined not by its technology, but by its system architecture. Fortunately, we have a recent example of a transformative force of similar scope that shares a fundamental dependence on our existing communications infrastructure - namely, the Internet. We will explore several ways in which the scale of the Internet and expectations of its users have shaped the present Internet environment. As the presence of consumers within the Smart Grid increases, some experiences from the early growth of the

  16. Allogeneic Stem Cell Transplant for Acute Myeloid Leukemia: Evolution of an Effective Strategy in India

    Directory of Open Access Journals (Sweden)

    Abhijeet Ganapule

    2017-12-01

    Full Text Available Purpose: There are limited data from developing countries on the role and cost-effectiveness of allogeneic stem cell transplantation (allo-SCT) for patients with acute myeloid leukemia (AML). Patients and Methods: We undertook a retrospective descriptive study of all patients with AML who underwent allo-SCT from 1994 to 2013 at our center to evaluate the clinical outcomes and cost-effectiveness of this therapeutic modality. Results: Two hundred fifty-four consecutive patients, median age 34 years, who underwent allo-SCT at our center were included in this study. There were 161 males (63.4%). The 5-year overall survival (OS) and event-free survival for the entire cohort was 40.1 ± 3.5% and 38.7 ± 3.4%, respectively. The 5-year OS for patients in first (CR1), second, and third complete remission and with disease/refractory AML was 53.1 ± 5.2%, 48.2 ± 8.3%, 31.2 ± 17.8%, and 16.0 ± 4.4%, respectively (P < .001). From 2007, reduced intensity conditioning (RIC) with fludarabine and melphalan (Flu/Mel) was used in a majority of patients in CR1 (n = 67). Clinical outcomes were compared with historical conventional myeloablative conditioning regimens (n = 38). Use of Flu/Mel was associated with lower treatment-related mortality at 1 year, higher incidence of chronic graft-versus-host-disease, and comparable relapse rates. The 5-year OS and event-free survival for the Flu/Mel and myeloablative conditioning groups were 67.2 ± 6.6% versus 38.1 ± 8.1% (P = .003) and 63.8 ± 6.4% versus 32.3 ± 7.9% (P = .002), respectively. Preliminary cost analysis suggests that in our medical cost payment system, RIC allo-SCT in CR1 was likely the most cost-effective strategy in the management of AML. Conclusion: In a resource-constrained environment, Flu/Mel RIC allo-SCT for AML CR1 is likely the most efficacious and cost-effective approach in a subset of newly diagnosed young adult patients.

  17. Parallel computing in plasma physics: Nonlinear instabilities

    International Nuclear Information System (INIS)

    Pohn, E.; Kamelander, G.; Shoucri, M.

    2000-01-01

    A Vlasov-Poisson-system is used for studying the time evolution of the charge-separation at a spatial one- as well as a two-dimensional plasma-edge. Ions are advanced in time using the Vlasov-equation. The whole three-dimensional velocity-space is considered leading to very time-consuming four- resp. five-dimensional fully kinetic simulations. In the 1D simulations electrons are assumed to behave adiabatic, i.e. they are Boltzmann-distributed, leading to a nonlinear Poisson-equation. In the 2D simulations a gyro-kinetic approximation is used for the electrons. The plasma is assumed to be initially neutral. The simulations are performed at an equidistant grid. A constant time-step is used for advancing the density-distribution function in time. The time-evolution of the distribution function is performed using a splitting scheme. Each dimension (x, y, υx, υy, υz) of the phase-space is advanced in time separately. The value of the distribution function for the next time is calculated from the value of an - in general - interstitial point at the present time (fractional shift). One-dimensional cubic-spline interpolation is used for calculating the interstitial function values. After the fractional shifts are performed for each dimension of the phase-space, a whole time-step for advancing the distribution function is finished. Afterwards the charge density is calculated, the Poisson-equation is solved and the electric field is calculated before the next time-step is performed. The fractional shift method sketched above was parallelized for p processors as follows. Considering first the shifts in y-direction, a proper parallelization strategy is to split the grid into p disjoint υz-slices, which are sub-grids, each containing a different 1/p-th part of the υz range but the whole range of all other dimensions. Each processor is responsible for performing the y-shifts on a different slice, which can be done in parallel without any communication between
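    The fractional-shift step described above can be sketched in one dimension: the distribution function is evaluated at the interstitial upstream points with cubic-spline interpolation. The periodic boundary, grid and shift size below are illustrative assumptions, not the original code.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fractional_shift(f, x, shift):
    """Advance a 1D slice of the distribution function by a displacement that is generally a
    non-integer multiple of dx, evaluating the value at the interstitial upstream point with
    cubic-spline interpolation. A periodic boundary is assumed here for brevity."""
    cs = CubicSpline(x, f, bc_type="periodic")
    L = x[-1] - x[0]
    x_upstream = x[0] + np.mod(x - shift - x[0], L)   # departure points, wrapped into the domain
    return cs(x_upstream)

# Minimal demonstration on one phase-space slice: a Gaussian shifted by 2.3 cells.
x = np.linspace(0.0, 1.0, 129)
f = np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)
f[-1] = f[0]                                          # enforce periodicity for the spline
dx = x[1] - x[0]
f_new = fractional_shift(f, x, shift=2.3 * dx)
print("mass before/after shift:",
      round(float(f[:-1].sum() * dx), 5), round(float(f_new[:-1].sum() * dx), 5))
```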

  18. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...
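    The space argument can be made concrete with a small sketch: evaluating a naive matrix product in bounded row chunks keeps the working set small instead of materializing all n^3 scalar products at once. This is plain Python/NumPy for illustration, not NESL or Accelerate code.

```python
import numpy as np

def matmul_streamed(a, b, chunk_rows=64):
    """Compute a @ b one block of rows at a time, so partial results for only one chunk
    are live at any point instead of the full set of intermediate products."""
    n = a.shape[0]
    out = np.empty((n, b.shape[1]), dtype=a.dtype)
    for start in range(0, n, chunk_rows):
        stop = min(start + chunk_rows, n)
        out[start:stop] = a[start:stop] @ b   # bounded working set per chunk
    return out

a = np.random.rand(512, 512)
b = np.random.rand(512, 512)
assert np.allclose(matmul_streamed(a, b), a @ b)
```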

  19. a Predator-Prey Model Based on the Fully Parallel Cellular Automata

    Science.gov (United States)

    He, Mingfeng; Ruan, Hongbo; Yu, Changliang

    We present a predator-prey lattice model containing moveable wolves and sheep, which are characterized by Penna double bit strings. Sexual reproduction and child-care strategies are considered. To implement this model in an efficient way, we build a fully parallel cellular automaton based on a new definition of the neighborhood. We show the roles played by the initial densities of the populations, the mutation rate and the linear size of the lattice in the evolution of this model.
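    A much-reduced sketch of a predator-prey lattice update is given below. It uses a simple sequential (asynchronous) sweep, omits the Penna bit strings, sexual reproduction and child care, and all rates are illustrative; the paper's model is a strictly synchronous, fully parallel cellular automaton.

```python
import random

EMPTY, SHEEP, WOLF = 0, 1, 2
N = 50                                        # linear lattice size (toy scale)

def neighbors(i, j):
    """Von Neumann neighborhood with periodic boundaries."""
    return [((i - 1) % N, j), ((i + 1) % N, j), (i, (j - 1) % N), (i, (j + 1) % N)]

def step(grid, p_sheep_birth=0.25, p_wolf_birth=0.15, p_wolf_death=0.05):
    new = [row[:] for row in grid]
    cells = [(i, j) for i in range(N) for j in range(N)]
    random.shuffle(cells)                     # asynchronous toy sweep, not the parallel CA
    for i, j in cells:
        cell = new[i][j]
        if cell == SHEEP:
            empties = [(a, b) for a, b in neighbors(i, j) if new[a][b] == EMPTY]
            if empties:
                a, b = random.choice(empties)
                if random.random() < p_sheep_birth:
                    new[a][b] = SHEEP                       # reproduce into a free neighbor
                else:
                    new[a][b], new[i][j] = SHEEP, EMPTY     # graze elsewhere
        elif cell == WOLF:
            prey = [(a, b) for a, b in neighbors(i, j) if new[a][b] == SHEEP]
            empties = [(a, b) for a, b in neighbors(i, j) if new[a][b] == EMPTY]
            if prey:                                        # eat a sheep, maybe reproduce there
                a, b = random.choice(prey)
                new[a][b] = WOLF if random.random() < p_wolf_birth else EMPTY
            elif random.random() < p_wolf_death:            # starvation
                new[i][j] = EMPTY
            elif empties:                                   # otherwise wander to a free neighbor
                a, b = random.choice(empties)
                new[a][b], new[i][j] = WOLF, EMPTY
    return new

grid = [[random.choice([EMPTY, EMPTY, SHEEP, WOLF]) for _ in range(N)] for _ in range(N)]
for _ in range(50):
    grid = step(grid)
flat = [c for row in grid for c in row]
print("sheep:", flat.count(SHEEP), "wolves:", flat.count(WOLF))
```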

  20. Phylogeny and evolution of life-history strategies in the Sycophaginae non-pollinating fig wasps (Hymenoptera, Chalcidoidea

    Directory of Open Access Journals (Sweden)

    Farache Fernando HA

    2011-06-01

    Full Text Available Abstract Background Non-pollinating Sycophaginae (Hymenoptera, Chalcidoidea form small communities within Urostigma and Sycomorus fig trees. The species show differences in galling habits and exhibit apterous, winged or dimorphic males. The large gall inducers oviposit early in syconium development and lay few eggs; the small gall inducers lay more eggs soon after pollination; the ostiolar gall-inducers enter the syconium to oviposit and the cleptoparasites oviposit in galls induced by other fig wasps. The systematics of the group remains unclear and only one phylogeny based on limited sampling has been published to date. Here we present an expanded phylogeny for sycophagine fig wasps including about 1.5 times the number of described species. We sequenced mitochondrial and nuclear markers (4.2 kb on 73 species and 145 individuals and conducted maximum likelihood and Bayesian phylogenetic analyses. We then used this phylogeny to reconstruct the evolution of Sycophaginae life-history strategies and test if the presence of winged males and small brood size may be correlated. Results The resulting trees are well resolved and strongly supported. With the exception of Apocrytophagus, which is paraphyletic with respect to Sycophaga, all genera are monophyletic. The Sycophaginae are divided into three clades: (i Eukoebelea; (ii Pseudidarnes, Anidarnes and Conidarnes and (iii Apocryptophagus, Sycophaga and Idarnes. The ancestral states for galling habits and male morphology remain ambiguous and our reconstructions show that the two traits are evolutionary labile. Conclusions The three main clades could be considered as tribes and we list some morphological characters that define them. The same biologies re-evolved several times independently, which make Sycophaginae an interesting model to test predictions on what factors will canalize the evolution of a particular biology. The ostiolar gall-inducers are the only monophyletic group. In 15 Myr, they

  1. Comparison analysis of superconducting solenoid magnet systems for ECR ion source based on the evolution strategy optimization

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Shao Qing; Lee, Sang Jin [Uiduk University, Gyeongju (Korea, Republic of)

    2015-06-15

    Electron cyclotron resonance (ECR) ion source is an essential component of a heavy-ion accelerator. For a given design, the intensities of the highly charged ion beams extracted from the source can be increased by enlarging the physical volume of the ECR zone. Several models for the ECR ion source were and will be constructed depending on their operating conditions. In this paper three simulation models with 3, 4 and 6 solenoid systems were built, differing only in the number of coils. Two groups of optimization analysis are presented, and the evolution strategy (ES) is adopted as the optimization tool, a technique based on the ideas of mutation, adaptation and annealing. In this research, the volume of the ECR zone was calculated approximately, and optimized designs for the ECR solenoid magnet system were presented. Firstly, it is better to make the volume of the ECR zone large to increase the intensity of the ion beam under the specific confinement field conditions. At the same time the total volume of the superconducting solenoids must be decreased to save material. Considering the volume of the ECR zone and the total length of solenoids in each model with a different number of coils, the 6 solenoid system represented the highest coil performance. In certain cases, however, the ECR zone volume itself can be more essential than the cost, so the maximum ECR zone volume for each solenoid magnet system was also calculated with the same size of plasma chamber and the same total magnet space. By comparing the volume of the ECR zone, the 6 solenoid system can also be made with the maximum ECR zone volume.
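    Since the record does not give the ES implementation, the sketch below shows a generic (mu, lambda) evolution strategy with a crude 1/5-success-rule step-size update applied to a stand-in figure of merit; the objective, bounds and coil parameterization are illustrative assumptions, not the magnet field model of the paper.

```python
import numpy as np

def objective(x):
    """Stand-in figure of merit: reward a large 'ECR zone volume' proxy while penalizing
    total coil volume (both terms are illustrative, not field-solver results)."""
    ecr_zone_proxy = -np.sum((x - 1.5) ** 2)        # peaks when every coil parameter is 1.5
    coil_volume_penalty = 0.1 * np.sum(np.abs(x))
    return ecr_zone_proxy - coil_volume_penalty

def evolution_strategy(f, dim, mu=5, lam=20, sigma=0.5, generations=200, seed=0):
    """Basic (mu, lambda)-ES with global step-size adaptation via a 1/5-success rule."""
    rng = np.random.default_rng(seed)
    parent = rng.uniform(0.5, 3.0, size=dim)        # e.g. coil radii/lengths in arbitrary units
    best, best_f = parent.copy(), f(parent)
    for _ in range(generations):
        offspring = parent + sigma * rng.standard_normal((lam, dim))
        scores = np.array([f(x) for x in offspring])
        elite = offspring[np.argsort(scores)[-mu:]]
        success = np.mean(scores > best_f)
        sigma *= 1.22 if success > 0.2 else 0.82    # crude 1/5-rule step-size update
        parent = elite.mean(axis=0)                 # intermediate recombination
        if scores.max() > best_f:
            best_f, best = scores.max(), offspring[np.argmax(scores)].copy()
    return best, best_f

x_best, f_best = evolution_strategy(objective, dim=6)   # 6 coils, one parameter each
print("best figure of merit:", round(float(f_best), 3))
```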

  2. Global Optimal Energy Management Strategy Research for a Plug-In Series-Parallel Hybrid Electric Bus by Using Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Hongwen He

    2013-01-01

    Full Text Available Energy management strategy greatly influences the power performance and fuel economy of plug-in hybrid electric vehicles. To explore the fuel-saving potential of a plug-in hybrid electric bus (PHEB), this paper searches for the global optimal energy management strategy using a dynamic programming (DP) algorithm. Firstly, the simplified backward model of the PHEB, which is necessary for the DP algorithm, was built. Then the torque and speed of the engine and the torque of the motor were selected as the control variables, and the battery state of charge (SOC) was selected as the state variable. The DP solution procedure is listed, and the way to find all possible control variables at every state of each stage is presented in detail. Finally, an appropriate SOC increment is determined after quantizing the state variable, and the optimal control over the long driving distance of a specific driving cycle is replaced with the optimal control of one driving cycle, which reduces the computational time significantly while keeping the precision. The simulation results show that the fuel economy of the PHEB with the optimal energy management strategy is improved by 53.7% compared with that of the conventional bus, which can serve as a benchmark for the assessment of other control strategies.
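    The backward DP recursion over a quantized SOC grid can be sketched as follows; the drive-cycle demand, fuel-rate model, battery size and grid resolution are illustrative toy values, not the PHEB model of the paper.

```python
import numpy as np

# Toy drive-cycle power demand (kW) and a coarse SOC grid -- illustrative values only.
demand = np.array([20, 35, 50, 40, 25, 15, 30, 45, 35, 20], dtype=float)
soc_grid = np.linspace(0.3, 0.9, 61)                  # battery state of charge
batt_kwh, dt_h = 10.0, 1.0 / 60.0                     # pack size and step length

def fuel_cost(p_engine_kw):
    """Crude fuel-rate model: idle term plus a power-proportional term (L per step)."""
    return 0.0 if p_engine_kw <= 0 else (0.3 + 0.08 * p_engine_kw) * dt_h

engine_options = np.linspace(0.0, 60.0, 31)           # candidate engine power levels (kW)

n_t, n_s = len(demand), len(soc_grid)
cost_to_go = np.full((n_t + 1, n_s), np.inf)
cost_to_go[n_t, soc_grid >= 0.5] = 0.0                # terminal constraint: finish with SOC >= 0.5
policy = np.zeros((n_t, n_s))

for t in range(n_t - 1, -1, -1):                      # backward recursion over stages
    for s, soc in enumerate(soc_grid):
        for p_eng in engine_options:
            p_batt = demand[t] - p_eng                # battery covers the remainder of the demand
            soc_next = soc - p_batt * dt_h / batt_kwh
            # Quantize the successor state to the nearest grid point (discretization error accepted).
            k = int(round((soc_next - soc_grid[0]) / (soc_grid[1] - soc_grid[0])))
            if 0 <= k < n_s:
                total = fuel_cost(p_eng) + cost_to_go[t + 1, k]
                if total < cost_to_go[t, s]:
                    cost_to_go[t, s], policy[t, s] = total, p_eng

start = np.argmin(np.abs(soc_grid - 0.8))             # start from SOC = 0.8
print("minimal fuel over the cycle (L):", round(float(cost_to_go[0, start]), 3))
print("first-stage optimal engine power (kW):", float(policy[0, start]))
```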

  3. Accelerated cardiovascular magnetic resonance of the mouse heart using self-gated parallel imaging strategies does not compromise accuracy of structural and functional measures

    Directory of Open Access Journals (Sweden)

    Dörries Carola

    2010-07-01

    Full Text Available Abstract Background Self-gated dynamic cardiovascular magnetic resonance (CMR) enables non-invasive visualization of the heart and accurate assessment of cardiac function in mouse models of human disease. However, self-gated CMR requires the acquisition of large datasets to ensure accurate and artifact-free reconstruction of cardiac cines and is therefore hampered by long acquisition times putting high demands on the physiological stability of the animal. For this reason, we evaluated the feasibility of accelerating the data collection using the parallel imaging technique SENSE with respect to both anatomical definition and cardiac function quantification. Results Findings obtained from accelerated data sets were compared to fully sampled reference data. Our results revealed only minor differences in image quality of short- and long-axis cardiac cines: small anatomical structures (papillary muscles and the aortic valve) and left-ventricular (LV) remodeling after myocardial infarction (MI) were accurately detected even for 3-fold accelerated data acquisition using a four-element phased array coil. Quantitative analysis of LV cardiac function (end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction (EF)) and LV mass in healthy and infarcted animals revealed no substantial deviations from reference (fully sampled) data for all investigated acceleration factors, with deviations ranging from 2% to 6% in healthy animals and from 2% to 8% in infarcted mice for the highest acceleration factor of 3.0. CNR calculations performed between the LV myocardial wall and the LV cavity revealed a maximum CNR decrease of 50% for the 3-fold accelerated data acquisition when compared to the fully-sampled acquisition. Conclusions We have demonstrated the feasibility of accelerated self-gated retrospective CMR in mice using the parallel imaging technique SENSE. The proposed method led to considerably reduced acquisition times, while preserving high

  4. Fundamental Dimensions of Environmental Risk: The Impact of Harsh versus Unpredictable Environments on the Evolution and Development of Life History Strategies.

    Science.gov (United States)

    Ellis, Bruce J; Figueredo, Aurelio José; Brumbach, Barbara H; Schlomer, Gabriel L

    2009-06-01

    The current paper synthesizes theory and data from the field of life history (LH) evolution to advance a new developmental theory of variation in human LH strategies. The theory posits that clusters of correlated LH traits (e.g., timing of puberty, age at sexual debut and first birth, parental investment strategies) lie on a slow-to-fast continuum; that harshness (externally caused levels of morbidity-mortality) and unpredictability (spatial-temporal variation in harshness) are the most fundamental environmental influences on the evolution and development of LH strategies; and that these influences depend on population densities and related levels of intraspecific competition and resource scarcity, on age schedules of mortality, on the sensitivity of morbidity-mortality to the organism's resource-allocation decisions, and on the extent to which environmental fluctuations affect individuals versus populations over short versus long timescales. These interrelated factors operate at evolutionary and developmental levels and should be distinguished because they exert distinctive effects on LH traits and are hierarchically operative in terms of primacy of influence. Although converging lines of evidence support core assumptions of the theory, many questions remain unanswered. This review demonstrates the value of applying a multilevel evolutionary-developmental approach to the analysis of a central feature of human phenotypic variation: LH strategy.

  5. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  6. Engineering-Based Thermal CFD Simulations on Massive Parallel Systems

    KAUST Repository

    Frisch, Jérôme; Mundani, Ralf-Peter; Rank, Ernst; van Treeck, Christoph

    2015-01-01

    The development of parallel Computational Fluid Dynamics (CFD) codes is a challenging task that entails efficient parallelization concepts and strategies in order to achieve good scalability values when running those codes on modern supercomputers

  7. Experimental evolution in biofilm populations

    Science.gov (United States)

    Steenackers, Hans P.; Parijs, Ilse; Foster, Kevin R.; Vanderleyden, Jozef

    2016-01-01

    Biofilms are a major form of microbial life in which cells form dense, surface-associated communities that can persist for many generations. The long life of biofilm communities means that they can be strongly shaped by evolutionary processes. Here, we review the experimental study of evolution in biofilm communities. We first provide an overview of the different experimental models used to study biofilm evolution and their associated advantages and disadvantages. We then illustrate the vast amount of diversification observed during biofilm evolution, and we discuss (i) potential ecological and evolutionary processes behind the observed diversification, (ii) recent insights into the genetics of adaptive diversification, (iii) the striking degree of parallelism between evolution experiments and real-life biofilms and (iv) potential consequences of diversification. In the second part, we discuss the insights provided by evolution experiments in how biofilm growth and structure can promote cooperative phenotypes. Overall, our analysis points to an important role of biofilm diversification and cooperation in bacterial survival and productivity. Deeper understanding of both processes is of key importance to design improved antimicrobial strategies and diagnostic techniques. PMID:26895713

  8. New treatment strategy against advanced rectal cancer. Enzyme-targeting and radio-sensitization treatment under parallel use of TS-1

    International Nuclear Information System (INIS)

    Obata, Shiro; Yamanishi, Mikio; Katsumi, Shingo

    2015-01-01

    Preoperative chemoradiotherapy was applied to two cases of advanced rectal cancer. In addition, radiation sensitizers were injected into the lesion endoscopically twice a week in order to enhance therapeutic effects (so-called enzyme-targeting and radio-sensitization treatment: KORTUC [Kochi Oxydol Radio-sensitization Treatment for Unresectable Carcinomas]). Flattening of the lesion was observed in both cases within a short period of time; then, Miles' operation and lateral lymph node dissection were performed. No residual lesion was found in the postoperative pathological specimens of either case, and the histological response after the treatment was ranked as Grade 3. In light of these better-than-expected results, this hospital is preparing for clinical trials and planning to carefully accumulate cases. As one of the curative treatment strategies against advanced rectal cancer, the authors aim to make KORTUC more objectively reliable as a safe and minimally invasive therapy. (A.O.)

  9. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  10. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another setting where these algorithms can be applied is shared-memory SIMD (single instruction stream, multiple data stream) computers, in which the whole sequence to be sorted can fit in the
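
    As a concrete illustration of sorting on a linear array of processors, the following Python sketch simulates odd-even transposition sort; it is an illustrative example rather than code from the book, and the comparison loop of each phase is what would run concurrently on real hardware.

        def odd_even_transposition_sort(a):
            """Odd-even transposition sort: the classic sorting scheme for a linear
            array of processors, simulated here sequentially. In each of the n phases
            every 'processor' compares-and-swaps with one neighbour, so all swaps in a
            phase could run in parallel on real hardware."""
            a = list(a)
            n = len(a)
            for phase in range(n):
                start = phase % 2                # even phases compare (0,1),(2,3),...
                for i in range(start, n - 1, 2): # odd phases compare (1,2),(3,4),...
                    if a[i] > a[i + 1]:
                        a[i], a[i + 1] = a[i + 1], a[i]
            return a

        # print(odd_even_transposition_sort([5, 1, 4, 2, 3]))  # -> [1, 2, 3, 4, 5]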

  11. Parallel Sn iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (Sn) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional Sn transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial Sn algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial

  12. Massively parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Krasheninnikov, S.I.; Craddock, G.G.; Djordjevic, V.

    1996-01-01

    The Fokker-Planck code ALLA, recently developed for workstations, simulates the temporal evolution of 1V, 2V and 1D2V collisional edge plasmas. In this work we present the results of parallelizing the code on the CRI T3D massively parallel platform (the ALLAp version). Simultaneously we benchmark the 1D2V parallel version against an analytic self-similar solution of the collisional kinetic equation. This test is not trivial as it demands a very strong spatial temperature and density variation within the simulation domain. (orig.)

  13. Parallel programming with Easy Java Simulations

    Science.gov (United States)

    Esquembre, F.; Christian, W.; Belloni, M.

    2018-01-01

    Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.

  14. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  15. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  16. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Full Text Available Dimensionality reduction refers to a set of mathematical techniques used to reduce complexity of the original high-dimensional data, while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
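
    A minimal sketch of a parallel spectral (classical MDS) embedding in this spirit is shown below: the pairwise-distance computation is split across worker processes and the eigendecomposition is done serially. The chunking scheme, worker count and function names are illustrative assumptions, not the framework described in the paper.

        import numpy as np
        from multiprocessing import Pool

        def _distance_block(args):
            """Compute one row-block of the squared Euclidean distance matrix."""
            block, data = args
            return ((block[:, None, :] - data[None, :, :]) ** 2).sum(axis=2)

        def parallel_spectral_embedding(data, n_components=2, n_workers=4):
            """Toy parallel MDS-style embedding: distances in parallel, eigensolve serially."""
            chunks = np.array_split(data, n_workers)
            with Pool(n_workers) as pool:
                blocks = pool.map(_distance_block, [(c, data) for c in chunks])
            d2 = np.vstack(blocks)
            # classical MDS: double-centre the squared distances, then eigendecompose
            n = len(data)
            j = np.eye(n) - np.ones((n, n)) / n
            b = -0.5 * j @ d2 @ j
            vals, vecs = np.linalg.eigh(b)
            top = np.argsort(vals)[::-1][:n_components]
            return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

        # embedding = parallel_spectral_embedding(np.random.rand(500, 50))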

  18. Evolution of Various Library Instruction Strategies: Using Student Feedback to Create and Enhance Online Active Learning Assignments

    Directory of Open Access Journals (Sweden)

    Marcie Lynne Jacklin

    2013-06-01

    Full Text Available This case study traces the evolution of library assignments for biological science students from paper-based workbooks in a blended (hands-on) workshop, to blended learning workshops using online assignments, to stand-alone online active learning modules without any face-to-face instruction. As the assignments evolved to adapt to online learning, supporting materials in the form of PDFs (portable document format), screen captures and screencasting were embedded into the questions as teaching moments to replace face-to-face instruction. Many aspects of the evolution of the assignment were based on student feedback from evaluations, input from senior lab demonstrators and teaching assistants, and statistical analysis of the students' performance on the assignment. Advantages and disadvantages of paper-based and online assignments are discussed. An important factor for successful online learning may be the ability to get assistance.

  19. Genetic algorithm-based optimization of testing and maintenance under uncertain unavailability and cost estimation: A survey of strategies for harmonizing evolution and accuracy

    International Nuclear Information System (INIS)

    Villanueva, J.F.; Sanchez, A.I.; Carlos, S.; Martorell, S.

    2008-01-01

    This paper presents the results of a survey showing the applicability of an approach based on a combination of distribution-free tolerance intervals and genetic algorithms for testing and maintenance optimization of safety-related systems, with unavailability and cost estimates acting as uncertain decision criteria. Several strategies have been checked using a combination of Monte Carlo simulation and genetic-algorithm search-evolution. Tolerance intervals for the unavailability and cost estimates are obtained to be used by the genetic algorithms. Both single- and multiple-objective genetic algorithms are used. In general, it is shown that the approach is a robust, fast and powerful tool that performs very favorably in the face of noise in the output (i.e. uncertainty) and is able to find the optimum over a complicated, high-dimensional nonlinear space in a tiny fraction of the time required for enumeration of the decision space. This approach reduces the computational effort by providing an appropriate balance between simulation accuracy and evolution; however, negative effects are also shown when a poorly balanced accuracy-evolution pair is used, which can be avoided or mitigated by using a single-objective genetic algorithm or a multiple-objective genetic algorithm with additional statistical information
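
    The sketch below illustrates, under simplifying assumptions, the coupling described here: a genetic algorithm whose uncertain objective is summarized by a distribution-free tolerance bound computed from repeated Monte Carlo simulations. The operators, the truncation selection and the quantile level are placeholders, not the strategies surveyed in the paper.

        import random

        def tolerance_upper_bound(samples, k=0.95):
            """Distribution-free upper bound: the k-th empirical quantile of the samples."""
            s = sorted(samples)
            return s[min(int(k * len(s)), len(s) - 1)]

        def noisy_ga(simulate, init, mutate, crossover, pop_size=30, n_sim=50,
                     generations=100):
            """Genetic algorithm minimizing an uncertain cost estimated by Monte Carlo.

            simulate(x) -> one random cost sample (e.g. unavailability plus cost)
            """
            pop = [init() for _ in range(pop_size)]
            def fitness(x):
                return tolerance_upper_bound([simulate(x) for _ in range(n_sim)])
            for _ in range(generations):
                scored = sorted(pop, key=fitness)
                elite = scored[: pop_size // 2]          # truncation selection
                children = []
                while len(elite) + len(children) < pop_size:
                    a, b = random.sample(elite, 2)
                    children.append(mutate(crossover(a, b)))
                pop = elite + children
            return min(pop, key=fitness)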

  20. Car firms and low-emission vehicles: The evolution of incumbents’ strategies in relation to policy developments

    NARCIS (Netherlands)

    Bohnsack, R.

    2013-01-01

    This dissertation explores the developments in the international car industry from 1997 to 2010 in relation to low-emission vehicles, with specific attention to electric vehicles. More specifically, the study seeks to better understand strategies of car manufacturers and the interplay of

  1. The Cook, the Thief, his Wife and her Lover: on the evolution of the human reproductive strategy

    NARCIS (Netherlands)

    Schuiling, GA

    2003-01-01

    Human reproductive strategy differs from that of most other mammals, including Apes such as the closely related chimpanzee (Pan troglodytes) and the bonobo (Pan paniscus). For example, humans, although basically polygamic, exhibit a strong tendency to (serial) monogamy and-very rare for a

  2. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  3. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  4. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of ... in the optimal O(psort(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and psort(N) is the parallel I/O complexity of sorting N elements using P processors.

  5. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
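
    To make the SENSE-type reconstruction concrete, here is a minimal sketch of Cartesian SENSE unfolding for an integer acceleration factor R: each set of R pixel locations that alias onto one folded position is recovered by a least-squares solve against the coil sensitivities. Array shapes and variable names are illustrative assumptions.

        import numpy as np

        def sense_unfold(aliased, sens, R):
            """Cartesian SENSE reconstruction.

            aliased: (n_coils, ny//R, nx) aliased coil images from R-fold undersampling
            sens:    (n_coils, ny, nx) complex coil sensitivity maps
            R:       integer acceleration factor (ny must be divisible by R)
            """
            n_coils, ny, nx = sens.shape
            recon = np.zeros((ny, nx), dtype=complex)
            fold = ny // R
            for y in range(fold):
                for x in range(nx):
                    # the R true pixel rows that alias onto folded row y
                    rows = [y + r * fold for r in range(R)]
                    E = sens[:, rows, x]                  # (n_coils, R) encoding matrix
                    m = aliased[:, y, x]                  # (n_coils,) measured values
                    v, *_ = np.linalg.lstsq(E, m, rcond=None)
                    recon[rows, x] = v
            return recon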

  6. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
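
    To make one of the named patterns concrete, the following is a minimal work-efficient (Blelloch) exclusive prefix-scan sketch; the inner loops are written sequentially here, but each of their iterations is independent and would run concurrently on a parallel machine.

        def blelloch_exclusive_scan(values):
            """Work-efficient exclusive prefix sum (Blelloch scan).

            The up-sweep and down-sweep inner loops are data-independent, so on a
            parallel machine every iteration of each inner loop can run concurrently.
            """
            a = list(values)
            n = len(a)
            assert n and (n & (n - 1)) == 0, "length must be a power of two"
            # up-sweep (reduce) phase
            step = 1
            while step < n:
                for i in range(2 * step - 1, n, 2 * step):   # parallel for
                    a[i] += a[i - step]
                step *= 2
            # down-sweep phase
            a[n - 1] = 0
            step = n // 2
            while step >= 1:
                for i in range(2 * step - 1, n, 2 * step):   # parallel for
                    a[i - step], a[i] = a[i], a[i] + a[i - step]
                step //= 2
            return a

        # blelloch_exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]) -> [0, 3, 4, 11, 11, 15, 16, 22]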

  7. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also include heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  8. The Development of Low-Cost Airlines and Tourism as a Competitiveness Complementor: Effects, Evolution and Strategies

    Directory of Open Access Journals (Sweden)

    Luis Moreno

    2015-12-01

    Full Text Available This paper addresses the relationship between the development of the airline industry and tourism. On the one hand, air transport has triggered the growth of tourism throughout the world, while, on the other hand, tourism has acted as a complementary product for developing new flight routes. This process has intensified with the emergence of low-cost carriers. A profound change has been observed in companies' strategies to adapt to the demands of this type of market. To conduct this study, a review of the existing literature related to tourism and low-cost carriers was carried out. To conclude, an analysis of the positioning and price-fixing strategies of low-cost airlines operating on some of the most important tourist routes in Europe was performed. The results indicate different levels of fares among the five companies in the sample, especially between Ryanair and easyJet, but similar pricing behaviour on the routes studied.

  9. Cellular scanning strategy for selective laser melting: Evolution of optimal grid-based scanning path & parametric approach to thermal homogeneity

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Tutum, Cem Celal; Hattel, Jesper Henri

    2013-01-01

    Selective laser melting, as a rapid manufacturing technology, is uniquely poised to enforce a paradigm shift in the manufacturing industry by eliminating the gap between job- and batch-production techniques. Products from this process, however, tend to show an increased amount of defects such as ...... strategy has been developed for processing the standard sample, one unit cell at a time, using genetic algorithms, with an objective of reducing thermal asymmetries. © 2013 SPIE....

  10. Evolution of Rural Livelihood Strategies in a Remote Sino-Mongolian Border Area: A Cross-Country Analysis

    Directory of Open Access Journals (Sweden)

    Munkhnasan Tsvegemed

    2018-03-01

    Full Text Available Ecologically sound natural resources management is still the backbone of rural livelihoods in many regions of the world. The Altai-Dzungarian region between China and Mongolia constitutes an ideal site to study how political, economic, infrastructural, and cultural differences affect rural livelihoods. Structured semi-quantitative interviews were conducted with 483 households on both sides to characterise their current livelihood strategies and assess the importance of the various activities for the households' current socio-economic situation by means of the categorical principal component and two-step cluster analysis. In total, four livelihood clusters were identified across both regions, whereby one cluster was only present in Mongolia. In general, all clusters mirrored the transition from almost pure pastoralist to agro-pastoralist livelihood strategies. While animal husbandry was more common in Mongolia and crop farming more common in China, most households in both countries pursued a rather mixed approach. The composition of the herds, as well as the richness and diversity of the livestock species, differed significantly between the countries and was generally higher in Mongolia. Supplementary feedstuff and pesticide and fertiliser use were higher in China, along with diversification of produces. Our analysis indicates that until very recently the livelihood strategies on both sides of the border were the same, manifesting in the fact that we can define three identical clusters across countries (environment factor) even though there are slight differences in land, livestock and asset endowment.

  11. Estrutura e estratégia: evolução de paradigmas Structure and strategy: evolution of paradigms

    Directory of Open Access Journals (Sweden)

    Fernando Carvalho de Almeida

    2006-06-01

    Full Text Available The interdependent relations that exist between the structure of an organization and the organizational strategies to be implemented were analyzed. A review was made of the literature on strategy, strategic management and methods of implementing strategies appropriate for organization objectives. Traditional models of organization structures were then analyzed, as well as the inherent strategic advantages and disadvantages. Then the approach of various authors to these interdependent relations was studied. It was concluded that although organizational strategy can be established by analyzing organization structure, as well as the strong and weak aspects involved, this is not always feasible. With fast-paced technologies and sharpening global

  12. Synchronization Of Parallel Discrete Event Simulations

    Science.gov (United States)

    Steinman, Jeffrey S.

    1992-01-01

    Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Algorithm processes events optimistically in time cycles adapting while simulation in progress. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.

  13. Language constructs for modular parallel programs

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.

    1996-03-01

    We describe programming language constructs that facilitate the application of modular design techniques in parallel programming. These constructs allow us to isolate resource management and processor scheduling decisions from the specification of individual modules, which can themselves encapsulate design decisions concerned with concurrence, communication, process mapping, and data distribution. This approach permits development of libraries of reusable parallel program components and the reuse of these components in different contexts. In particular, alternative mapping strategies can be explored without modifying other aspects of program logic. We describe how these constructs are incorporated in two practical parallel programming languages, PCN and Fortran M. Compilers have been developed for both languages, allowing experimentation in substantial applications.

  14. Argentina's experience with parallel exchange markets: 1981-1990

    OpenAIRE

    Steven B. Kamin

    1991-01-01

    This paper surveys the development and operation of the parallel exchange market in Argentina during the 1980s, and evaluates its impact upon macroeconomic performance and policy. The historical evolution of Argentina's exchange market policies is reviewed in order to understand the government's motives for imposing exchange controls. The parallel exchange market engendered by these controls is then analyzed, and econometric methods are used to evaluate the behavior of the parallel exchange r...

  15. Development of a model for optimisation of a power plant mix by means of evolution strategy; Modellentwicklung zur Kraftwerksparkoptimierung mit Hilfe von Evolutionsstrategien

    Energy Technology Data Exchange (ETDEWEB)

    Roth, Hans

    2008-09-17

    Within the scope of this thesis a model based on evolution strategy is presented, which optimises the upgrade of an existing power plant mix. The optimisation problem is divided into two parts: the building of new power plants and their ideal usage within the existing power plant mix. The building of new power plants is optimised by means of mutations, while their ideal usage is determined by a heuristic classification according to the merit order of the power plant mix. By applying a residual yearly load curve, the consumer load can be modelled, incorporating the impact of fluctuating power generation and its probability of occurrence. Power plant failures and the duration of revisions are adequately considered by means of a power reduction factor. The optimisation furthermore accommodates a limiting threshold for yearly carbon dioxide emissions as well as premature decommissioning of power plants. (orig.)
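
    A minimal sketch of the merit-order dispatch heuristic mentioned here is given below: plants are sorted by marginal cost and dispatched hour by hour against a residual load curve. The plant fields and numbers are illustrative assumptions, not data from the thesis.

        def merit_order_dispatch(plants, residual_load):
            """Dispatch plants in merit order (ascending marginal cost) for each hour
            of a residual load curve; returns generation per plant and unserved load.

            plants: list of dicts with 'name', 'capacity_mw', 'marginal_cost'
            residual_load: iterable of hourly residual load values in MW
            """
            order = sorted(plants, key=lambda p: p["marginal_cost"])
            generation = {p["name"]: 0.0 for p in plants}
            unserved = 0.0
            for load in residual_load:
                remaining = load
                for p in order:
                    dispatched = min(p["capacity_mw"], max(remaining, 0.0))
                    generation[p["name"]] += dispatched
                    remaining -= dispatched
                unserved += max(remaining, 0.0)
            return generation, unserved

        # plants = [{"name": "lignite", "capacity_mw": 800, "marginal_cost": 20},
        #           {"name": "gas", "capacity_mw": 400, "marginal_cost": 60}]
        # gen, deficit = merit_order_dispatch(plants, [900, 1100, 700])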

  16. Five-year evolution of reperfusion strategies and early mortality in patients with ST-segment elevation myocardial infarction in France.

    Science.gov (United States)

    El Khoury, Carlos; Bochaton, Thomas; Flocard, Elodie; Serre, Patrice; Tomasevic, Danka; Mewton, Nathan; Bonnefoy-Cudraz, Eric

    2017-10-01

    To assess 5-year evolutions in reperfusion strategies and early mortality in patients with ST-segment elevation myocardial infarction. Using data from the French RESCUe network, we studied patients with ST-segment elevation myocardial infarction treated in mobile intensive care units between 2009 and 2013. Among 2418 patients (median age 62 years; 78.5% male), 2119 (87.6%) underwent primary percutaneous coronary intervention and 299 (12.4%) pre-hospital thrombolysis (94.0% of whom went on to undergo percutaneous coronary intervention). Use of primary percutaneous coronary intervention increased from 78.4% in 2009 to 95.9% in 2013 ( P trend 90 minutes delay group (83.0% in 2009 to 97.7% in 2013; P trend <0.001 versus 34.1% in 2009 to 79.2% in 2013; P trend <0.001). In-hospital (4-6%) and 30-day (6-8%) mortalities remained stable from 2009 to 2013. In the RESCUe network, the use of primary percutaneous coronary intervention increased from 2009 to 2013, in line with guidelines, but there was no evolution in early mortality.

  17. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  18. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and relative to scalar calculations it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively on the CRAY C-90

  19. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  20. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  1. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
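
    One common way to parallelize the sieve by decomposing the range into blocks is sketched below: the small primes up to sqrt(N) are found serially, then each worker marks composites in its own block. The process pool and block size are illustrative assumptions and differ from the hypercube implementation described above.

        import math
        from concurrent.futures import ProcessPoolExecutor

        def _sieve_block(args):
            """Mark composites in [lo, hi) using the pre-computed small primes."""
            lo, hi, small_primes = args
            is_prime = bytearray([1]) * (hi - lo)
            for p in small_primes:
                start = max(p * p, ((lo + p - 1) // p) * p)
                for m in range(start, hi, p):
                    is_prime[m - lo] = 0
            return [lo + i for i, flag in enumerate(is_prime) if flag and lo + i >= 2]

        def parallel_sieve(n, n_workers=4):
            """All primes below n, found by sieving independent blocks in parallel."""
            limit = int(math.isqrt(n)) + 1
            base = [True] * limit
            small_primes = []
            for p in range(2, limit):
                if base[p]:
                    small_primes.append(p)
                    for m in range(p * p, limit, p):
                        base[m] = False
            blocks = [(lo, min(lo + 10_000, n), small_primes) for lo in range(2, n, 10_000)]
            with ProcessPoolExecutor(n_workers) as ex:
                results = ex.map(_sieve_block, blocks)
            return [p for block in results for p in block]

        # primes = parallel_sieve(100_000)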

  2. Extreme longevity in a deep-sea vestimentiferan tubeworm and its implications for the evolution of life history strategies

    Science.gov (United States)

    Durkin, Alanna; Fisher, Charles R.; Cordes, Erik E.

    2017-08-01

    The deep sea is home to many species that have longer life spans than their shallow-water counterparts. This trend is primarily related to the decline in metabolic rates with temperature as depth increases. However, at bathyal depths, the cold-seep vestimentiferan tubeworm species Lamellibrachia luymesi and Seepiophila jonesi reach extremely old ages beyond what is predicted by the simple scaling of life span with body size and temperature. Here, we use individual-based models based on in situ growth rates to show that another species of cold-seep tubeworm found in the Gulf of Mexico, Escarpia laminata, also has an extraordinarily long life span, regularly achieving ages of 100-200 years with some individuals older than 300 years. The distribution of results from individual simulations as well as whole population simulations involving mortality and recruitment rates support these age estimates. The low mortality rate of 0.67% measured in collected populations of E. laminata is similar to the mortality rates of L. luymesi and S. jonesi and plays a role in the evolution of the long life span of cold-seep tubeworms. These results support longevity theory, which states that in the absence of extrinsic mortality threats, natural selection will select for individuals that senesce slower and reproduce continually into their old age.

  3. Aerodynamic Shape Optimization Using Hybridized Differential Evolution

    Science.gov (United States)

    Madavan, Nateri K.

    2003-01-01

    An aerodynamic shape optimization method that uses an evolutionary algorithm known as Differential Evolution (DE) in conjunction with various hybridization strategies is described. DE is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Various hybridization strategies for DE are explored, including the use of neural networks as well as traditional local search methods. A Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the hybrid DE optimizer. The method is implemented on distributed parallel computers so that new designs can be obtained within reasonable turnaround times. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. (The final paper will include at least one other aerodynamic design application). The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated.
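
    For reference, a minimal sketch of the classic DE/rand/1/bin scheme that such a hybrid optimizer builds on is given below; the control parameters and objective are placeholders, and the hybridization components (surrogate models, local search) are only indicated by a comment.

        import numpy as np

        def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                                   generations=200, rng=None):
            """Classic DE/rand/1/bin minimizer over box-constrained parameters."""
            rng = np.random.default_rng(rng)
            lo, hi = np.asarray(bounds, dtype=float).T          # bounds: [(lo, hi), ...]
            dim = len(lo)
            pop = lo + rng.random((pop_size, dim)) * (hi - lo)
            cost = np.array([objective(x) for x in pop])
            for _ in range(generations):
                for i in range(pop_size):
                    a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                             size=3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
                    cross = rng.random(dim) < CR                # binomial crossover
                    cross[rng.integers(dim)] = True             # keep at least one mutant gene
                    trial = np.where(cross, mutant, pop[i])
                    trial_cost = objective(trial)
                    if trial_cost <= cost[i]:                   # greedy selection
                        pop[i], cost[i] = trial, trial_cost
                # a hybrid DE would periodically refine the best member with a local
                # search or score candidates with a surrogate model before evaluation
            return pop[np.argmin(cost)], cost.min()

        # best_x, best_f = differential_evolution(lambda x: np.sum(x**2), [(-5, 5)] * 4)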

  4. Adaptive social learning strategies in temporally and spatially varying environments: how temporal vs. spatial variation, number of cultural traits, and costs of learning influence the evolution of conformist-biased transmission, payoff-biased transmission, and individual learning.

    Science.gov (United States)

    Nakahashi, Wataru; Wakano, Joe Yuichiro; Henrich, Joseph

    2012-12-01

    Long before the origins of agriculture human ancestors had expanded across the globe into an immense variety of environments, from Australian deserts to Siberian tundra. Survival in these environments did not principally depend on genetic adaptations, but instead on evolved learning strategies that permitted the assembly of locally adaptive behavioral repertoires. To develop hypotheses about these learning strategies, we have modeled the evolution of learning strategies to assess what conditions and constraints favor which kinds of strategies. To build on prior work, we focus on clarifying how spatial variability, temporal variability, and the number of cultural traits influence the evolution of four types of strategies: (1) individual learning, (2) unbiased social learning, (3) payoff-biased social learning, and (4) conformist transmission. Using a combination of analytic and simulation methods, we show that spatial-but not temporal-variation strongly favors the emergence of conformist transmission. This effect intensifies when migration rates are relatively high and individual learning is costly. We also show that increasing the number of cultural traits above two favors the evolution of conformist transmission, which suggests that the assumption of only two traits in many models has been conservative. We close by discussing how (1) spatial variability represents only one way of introducing the low-level, nonadaptive phenotypic trait variation that so favors conformist transmission, the other obvious way being learning errors, and (2) our findings apply to the evolution of conformist transmission in social interactions. Throughout we emphasize how our models generate empirical predictions suitable for laboratory testing.
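
    As a small illustration of the conformist-biased transmission analyzed here, the following sketch iterates the standard two-trait conformist recursion, in which the commoner trait is adopted at a rate boosted by a conformity parameter D; it is a textbook-style toy model, not the spatially structured model of the paper.

        import numpy as np

        def conformist_transmission(p, D=0.3, generations=25):
            """Deterministic recursion for conformist-biased transmission of one of two
            cultural traits: naive learners adopt the commoner trait with a probability
            boosted by the conformity parameter D.

            p: initial frequency of trait A in the population (0..1)
            D: strength of conformist bias; D = 0 gives unbiased frequency-dependent copying
            """
            traj = [p]
            for _ in range(generations):
                p = p + D * p * (1.0 - p) * (2.0 * p - 1.0)
                traj.append(p)
            return np.array(traj)

        # starting above one half, trait A is driven towards fixation:
        # conformist_transmission(0.6)[-1]  -> close to 1.0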

  5. Evolution of a strategy for preparing bioactive small molecules by sequential multicomponent assembly processes, cyclizations, and diversification.

    Science.gov (United States)

    Sahn, James J; Granger, Brett A; Martin, Stephen F

    2014-10-21

    A strategy for generating diverse collections of small molecules has been developed that features a multicomponent assembly process (MCAP) to efficiently construct a variety of intermediates possessing an aryl aminomethyl subunit. These key compounds are then transformed via selective ring-forming reactions into heterocyclic scaffolds, each of which possesses suitable functional handles for further derivatizations and palladium-catalyzed cross coupling reactions. The modular nature of this approach enables the facile construction of libraries of polycyclic compounds bearing a broad range of substituents and substitution patterns for biological evaluation. Screening of several compound libraries thus produced has revealed a large subset of compounds that exhibit a broad spectrum of medicinally-relevant activities.

  6. The strategies of European energy operators. Which strategic and capitalistic evolutions for the sector on a medium term?

    International Nuclear Information System (INIS)

    2012-11-01

    This article presents the content of a market study which aimed at describing the current regulatory context of energy markets in Europe and the degree of openness to competition, at analysing all figures concerning electricity and natural gas in different countries (production, consumption, balance of trade), at analysing development strategies of electricity providers and gas operators and at assessing their strengths and weaknesses, at comparing financial performance of leader groups and at assessing their financial flexibility, and at anticipating the reconfiguration of the sector on a medium term. Fifteen energy companies or operators have been analysed: Centrica, CEZ, E.ON, EDF, EDP, Enel, ENI, Fortum, Gas Natural, GDF Suez, Iberdrola, RWE, SSE, Vattenfall, Verbund

  7. Discrepant longitudinal volumetric and metabolic evolution of diffuse intrinsic Pontine gliomas during treatment: implications for current response assessment strategies

    Energy Technology Data Exchange (ETDEWEB)

    Loebel, U. [University Medical Center Hamburg-Eppendorf, Department of Diagnostic and Interventional Neuroradiology, Hamburg (Germany); St. Jude Children's Research Hospital, Department of Diagnostic Imaging, Memphis, TN (United States)]; Hwang, S.; Edwards, A.; Patay, Z. [St. Jude Children's Research Hospital, Department of Diagnostic Imaging, Memphis, TN (United States)]; Li, Y.; Li, X. [St. Jude Children's Research Hospital, Department of Biostatistics, Memphis, TN (United States)]; Broniscer, A. [St. Jude Children's Research Hospital, Department of Oncology, Memphis, TN (United States); University of Tennessee Health Science Center, Department of Pediatrics, Memphis, TN (United States)]

    2016-10-15

    Based on clinical observations, we hypothesized that in infiltrative high-grade brainstem neoplasms, such as diffuse intrinsic pontine glioma (DIPG), longitudinal metabolic evaluation of the tumor by magnetic resonance spectroscopy (MRS) may be more accurate than volumetric data for monitoring the tumor's biological evolution during standard treatment. We evaluated longitudinal MRS data and corresponding tumor volumes of 31 children with DIPG. We statistically analyzed correlations between tumor volume and ratios of Cho/NAA, Cho/Cr, and NAA/Cr at key time points during the course of the disease through the end of the progression-free survival period. By the end of RT, tumor volume had significantly decreased from the baseline (P <.0001) and remained decreased through the last available follow-up magnetic resonance imaging study (P =.007632). However, the metabolic profile of the tumor tissue (Cho/Cr, NAA/Cr, and Cho/NAA ratios) did not change significantly over time. Our data show that longitudinal tumor volume and metabolic profile changes are dissociated in patients with DIPG during progression-free survival. Volume changes, therefore, may not accurately reflect treatment-related changes in tumor burden. This study adds to the existing body of evidence that the value of conventional MRI metrics, including volumetric data, needs to be reevaluated critically and, in infiltrative tumors in particular, may not be useful as study end-points in clinical trials. We submit that advanced quantitative MRI data, including robust, MRS-based metabolic ratios and diffusion and perfusion metrics, may be better surrogate markers of key end-points in clinical trials. (orig.)

  8. The cook, the thief, his wife and her lover: on the evolution of the human reproductive strategy.

    Science.gov (United States)

    Schuiling, G A

    2003-12-01

    Human reproductive strategy differs from that of most other mammals, including Apes such as the closely related chimpanzee (Pan troglodytes) and the bonobo (Pan paniscus). For example, humans, although basically polygamic, exhibit a strong tendency to (serial) monogamy and--very rare for a mammal--provide biparental care. Moreover, humans are (almost) permanently willing to mate but, in contrast to other species, do so only in private. Unlike chimpanzees and bonobos, the human female exhibits no external signs of ovulation; rather a number of bodily features, e.g. permanently swollen milk glands and the quality of skin and hair, indicate fitness to breed. Human males also exhibit qualities that are rare among mammals: fertile males can be in the company of fertile females without sex being an imperative--although the awareness of sexuality is generally omnipresent. Moreover, unlike most other Apes, human males can cooperate in large groups, in spite of their polygynous inclination and their tendency to compete with each other for access to females. This capacity probably evolved in response to the necessity to acquire food, in particular meat, which was difficult to obtain by a single man. But life in large, complex, multi-male, multi-female groups places great demands on the members' social skills and, to be able to meet these demands, a large, sophisticated brain (neocortex) is needed. Food (and in its wake, cooking) probably forced man to live in ever-larger groups and to evolve the capacity to cooperate. This, in turn, drove man's present-day psychosocial (emotional and intellectual) make-up. But for this to evolve, an adaptation of reproductive strategy was a conditio sine qua non.

  9. Linguistics: evolution and language change.

    Science.gov (United States)

    Bowern, Claire

    2015-01-05

    Linguists have long identified sound changes that occur in parallel. Now novel research shows how Bayesian modeling can capture complex concerted changes, revealing how evolution of sounds proceeds. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Studies of parallel algorithms for the solution of a Fokker-Planck equation

    International Nuclear Information System (INIS)

    Deck, D.; Samba, G.

    1995-11-01

    The study of laser-created plasmas often requires the use of a kinetic model rather than a hydrodynamic one. This model change occurs, for example, in the hot spot formation in an ICF experiment or during the relaxation of colliding plasmas. When the gradient scale lengths or the size of a given system are not small compared to the characteristic mean-free-path, we have to deal with non-equilibrium situations, which can be described by the distribution functions of every species in the system. We present here a numerical method in plane or spherical 1-D geometry, for the solution of a Fokker-Planck equation that describes the evolution of such functions in phase space. The size and the time scale of kinetic simulations require the use of Massively Parallel Computers (MPP). We have adopted a message-passing strategy using Parallel Virtual Machine (PVM)

  11. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  12. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  13. Badlands: A parallel basin and landscape dynamics model

    Directory of Open Access Journals (Sweden)

    T. Salles

    2016-01-01

    Full Text Available Over more than three decades, a number of numerical landscape evolution models (LEMs) have been developed to study the combined effects of climate, sea-level, tectonics and sediments on Earth surface dynamics. Most of them are written in efficient programming languages, but often cannot be used on parallel architectures. Here, I present a LEM which ports a common core of accepted physical principles governing landscape evolution into a distributed-memory parallel environment. Badlands (an acronym for BAsin anD LANdscape DynamicS) is an open-source, flexible, TIN-based landscape evolution model, built to simulate topography development at various space and time scales.

  14. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts is shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
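
    To make the SENSE idea concrete, the toy 1-D example below unfolds a two-fold undersampled acquisition: each aliased pixel is a sensitivity-weighted sum of two true pixels, and the known coil sensitivities let a small least-squares solve separate them. This is a noise-free illustration with made-up Gaussian sensitivities, not code from the review.

        import numpy as np

        N, ncoils, R = 128, 4, 2
        obj = np.zeros(N)
        obj[40:90] = 1.0                                   # simple 1-D "object"
        x = np.arange(N)
        # smooth, made-up coil sensitivity profiles
        sens = np.array([np.exp(-((x - c * (N - 1) / (ncoils - 1)) / 60.0) ** 2)
                         for c in range(ncoils)])

        # R-fold undersampling folds pixel y together with pixel y + N/R
        folded = np.array([(sens[c] * obj)[:N // R] + (sens[c] * obj)[N // R:]
                           for c in range(ncoils)])

        # SENSE unfolding: per folded pixel, solve a small least-squares system
        recon = np.zeros(N)
        for y in range(N // R):
            S = sens[:, [y, y + N // R]]                   # ncoils x R encoding matrix
            m, *_ = np.linalg.lstsq(S, folded[:, y], rcond=None)
            recon[[y, y + N // R]] = m

        print("max reconstruction error:", np.max(np.abs(recon - obj)))

    In real acquisitions the same solve is regularized and weighted by the noise covariance; the conditioning of each small system is what the g-factor quantifies.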

  15. Teaching and cultural education in the knowledge society. Evolutive analysis of a strategy of collaborative learning in Higher Education

    Directory of Open Access Journals (Sweden)

    Manuela FABBRI

    2011-12-01

    Full Text Available 0 0 1 214 1180 Instituto Universitario de Ciencias de la Educación 9 2 1392 14.0 Normal 0 21 false false false ES JA X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Tabla normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin-top:0cm; mso-para-margin-right:0cm; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0cm; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:Calibri; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-ansi-language:ES; mso-fareast-language:EN-US;} This paper discuss in a pedagogical level an experience on the use of Forum as a telematic device. The part of e-learning comprised, together with contents and different kind of exercises, a forum. It was prepared for udergraduated students (third year in Social and Cultural Education (Faculty of Education, University of Bologna. From a brief analysis of the context for the Forum on collaborative learning, authors present a description of the quantitative data from experience, some reflections about the research for techno-social goals, and extract some conclusions from positive elements and limits when using TICs in Higher Education system. From assessment and analysis of the educational process and experience of social formation that develops in the Forum, the authors present an instructional design proposal from the critical and reflective paradigm, after evaluating various comments on the results, related with the strengths and limitations of the instrument in the university context. The conclusions guide the work in the subject of educational technology to not only a reflection of the disciplinary nature focused on the use of ICT, but also an approach to collaborative learning strategies and throughout lifelong learning

  16. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  17. Evidence of Parallel Processing During Translation

    DEFF Research Database (Denmark)

    Balling, Laura Winther; Hvelplund, Kristian Tangsgaard; Sjørup, Annette Camilla

    2014-01-01

    conclude that translation is a parallel process and that literal translation is likely to be a universal initial default strategy in translation. This conclusion is strengthened by the fact that all three experiments were relatively naturalistic, due to the combination of remote eye tracking and mixed...

  18. A Model for Speedup of Parallel Programs

    Science.gov (United States)

    1997-01-01

    Sanjeev K. Setia. The interaction between memory allocation and adaptive partitioning in message-passing multicomputers. In IPPS '95 Workshop on Job Scheduling Strategies for Parallel Processing, pages 89-99, 1995. [15] Sanjeev K. Setia and Satish K. Tripathi. A comparative analysis of static

  19. Ecology and Evolution as Targets: the Need for Novel Eco-Evo Drugs and Strategies To Fight Antibiotic Resistance▿†

    Science.gov (United States)

    Baquero, Fernando; Coque, Teresa M.; de la Cruz, Fernando

    2011-01-01

    In recent years, the explosive spread of antibiotic resistance determinants among pathogenic, commensal, and environmental bacteria has reached a global dimension. Classical measures trying to contain or slow locally the progress of antibiotic resistance in patients on the basis of better antibiotic prescribing policies have clearly become insufficient at the global level. Urgent measures are needed to directly confront the processes influencing antibiotic resistance pollution in the microbiosphere. Recent interdisciplinary research indicates that new eco-evo drugs and strategies, which take ecology and evolution into account, have a promising role in resistance prevention, decontamination, and the eventual restoration of antibiotic susceptibility. This minireview summarizes what is known and what should be further investigated to find drugs and strategies aiming to counteract the “four P's,” penetration, promiscuity, plasticity, and persistence of rapidly spreading bacterial clones, mobile genetic elements, or resistance genes. The term “drug” is used in this eco-evo perspective as a tool to fight resistance that is able to prevent, cure, or decrease potential damage caused by antibiotic resistance, not necessarily only at the individual level (the patient) but also at the ecological and evolutionary levels. This view offers a wealth of research opportunities for science and technology and also represents a large adaptive challenge for regulatory agencies and public health officers. Eco-evo drugs and interventions constitute a new avenue for research that might influence not only antibiotic resistance but the maintenance of a healthy interaction between humans and microbial systems in a rapidly changing biosphere. PMID:21576439

  20. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10⁴ parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
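
    The parallel pattern described here, independent walkers sampling with the current multicanonical weights and feeding a combined histogram back into a weight update, can be sketched compactly. The toy below samples a 1-D double-well energy with a Python process pool; the potential, binning and simplified update rule are stand-ins of mine, not the paper's GPU implementation.

        import numpy as np
        from multiprocessing import Pool

        NBINS, E_MAX = 50, 4.0

        def energy(x):
            return (x * x - 1.0) ** 2          # toy double-well "energy"

        def ebin(e):
            return min(int(e / E_MAX * NBINS), NBINS - 1)

        def run_walker(args):
            seed, log_w, nsteps = args
            rng = np.random.default_rng(seed)
            x = rng.uniform(-1.0, 1.0)
            e = energy(x)
            hist = np.zeros(NBINS)
            for _ in range(nsteps):
                xn = x + rng.normal(0.0, 0.3)
                en = energy(xn)
                # Metropolis step with multicanonical weight ratio w(E')/w(E)
                if en < E_MAX and np.log(rng.random() + 1e-300) < log_w[ebin(en)] - log_w[ebin(e)]:
                    x, e = xn, en
                hist[ebin(e)] += 1
            return hist

        if __name__ == "__main__":
            log_w = np.zeros(NBINS)            # start from flat weights
            nwalkers, nsteps = 8, 20000
            with Pool(nwalkers) as pool:
                for it in range(10):           # weight-update iterations
                    args = [(1000 * it + k, log_w, nsteps) for k in range(nwalkers)]
                    hist = sum(pool.map(run_walker, args))
                    visited = hist > 0
                    # simplified multicanonical recursion: penalise over-visited bins
                    log_w[visited] -= np.log(hist[visited])
                    log_w -= log_w.max()
            print("energy bins visited in final iteration:", int(visited.sum()), "of", NBINS)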

  1. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  2. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop Parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most of this book.
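
    In the spirit of the book's topic, here is a minimal, self-contained example of the most common Python pattern for data-parallel work, farming a CPU-bound function out to a process pool; it is my own illustration, not taken from the book.

        from multiprocessing import Pool

        def slow_square(x):
            # stand-in for a CPU-bound task
            total = 0
            for _ in range(100_000):
                total += x * x
            return total

        if __name__ == "__main__":
            with Pool() as pool:               # one worker process per available core
                results = pool.map(slow_square, range(16))
            print(results[:4])

    Because of the global interpreter lock, process-based pools rather than threads are the usual route to CPU parallelism in pure Python code.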

  3. Parallel simulated annealing algorithms for cell placement on hypercube multiprocessors

    Science.gov (United States)

    Banerjee, Prithviraj; Jones, Mark Howard; Sargent, Jeff S.

    1990-01-01

    Two parallel algorithms for standard cell placement using simulated annealing are developed to run on distributed-memory message-passing hypercube multiprocessors. The cells can be mapped in a two-dimensional area of a chip onto processors in an n-dimensional hypercube in two ways, such that both small and large cell exchange and displacement moves can be applied. The computation of the cost function in parallel among all the processors in the hypercube is described, along with a distributed data structure that needs to be stored in the hypercube to support the parallel cost evaluation. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. A dynamic parallel annealing schedule estimates the errors due to interacting parallel moves and adapts the rate of synchronization automatically. Two novel approaches in controlling error in parallel algorithms are described: heuristic cell coloring and adaptive sequence control.

  4. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  5. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
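
    As a small, hedged taste of the implicit parallelism discussed in these proceedings, the snippet below enables ROOT's implicit multi-threading from Python and runs a simple RDataFrame event loop. It assumes a recent ROOT build in which EnableImplicitMT and RDataFrame are available, and it is an illustration rather than code from the paper.

        import ROOT

        ROOT.EnableImplicitMT()                # let ROOT parallelise its internal event loops

        # toy data set of one million entries; the loop filling the histogram
        # is executed on multiple threads transparently
        df = ROOT.RDataFrame(1000000)
        df = df.Define("x", "std::sin((double)rdfentry_ * 0.001)")
        h = df.Histo1D(("h", "toy", 100, -1.0, 1.0), "x")
        print("entries:", h.GetEntries())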

  6. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    1998-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were

  7. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  8. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  9. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  10. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
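
    A sketch of how the seeding step parallelizes, with a Python process pool standing in for the record's GPU, OpenMP and Cray XMT implementations: each new seed is drawn with probability proportional to the squared distance to the nearest already-chosen seed, and those distances are computed in parallel over chunks of the data. All names and sizes are illustrative.

        import numpy as np
        from multiprocessing import Pool

        def min_dist_chunk(args):
            chunk, centers = args
            # squared distance from each point in the chunk to its nearest chosen seed
            d = ((chunk[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return d.min(axis=1)

        def kmeanspp_seeds(data, k, nproc=4, seed=0):
            rng = np.random.default_rng(seed)
            centers = [data[rng.integers(len(data))]]       # first seed: uniform
            chunks = np.array_split(data, nproc)
            with Pool(nproc) as pool:
                for _ in range(k - 1):
                    c = np.array(centers)
                    # D^2 weights, computed in parallel over data chunks
                    d2 = np.concatenate(pool.map(min_dist_chunk, [(ch, c) for ch in chunks]))
                    centers.append(data[rng.choice(len(data), p=d2 / d2.sum())])
            return np.array(centers)

        if __name__ == "__main__":
            pts = np.random.default_rng(1).normal(size=(10000, 2))
            print(kmeanspp_seeds(pts, k=5))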

  11. Parallel plate detectors

    International Nuclear Information System (INIS)

    Gardes, D.; Volkov, P.

    1981-01-01

    Two parallel plate avalanche counters (PPACs) are considered: a 5x3 cm² counter (timing only) and a 15x5 cm² counter (timing and position). The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters [fr

  12. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  13. Temporal fringe pattern analysis with parallel computing

    International Nuclear Information System (INIS)

    Tuck Wah Ng; Kar Tien Ang; Argentini, Gianluca

    2005-01-01

    Temporal fringe pattern analysis is invaluable in transient phenomena studies but necessitates long processing times. Here we describe a parallel computing strategy based on the single-program multiple-data model and hyperthreading processor technology to reduce the execution time. In a two-node cluster workstation configuration we found that execution times were reduced by a factor of 1.6 when four virtual processors were used. To allow even lower execution times with an increasing number of processors, the time allocated for data transfer, data read, and waiting should be minimized. Parallel computing is found here to present a feasible approach to reducing execution times in temporal fringe pattern analysis.

  14. Evolution strategies for robust optimization

    NARCIS (Netherlands)

    Kruisselbrink, Johannes Willem

    2012-01-01

    Real-world (black-box) optimization problems often involve various types of uncertainties and noise emerging in different parts of the optimization problem. When this is not accounted for, optimization may fail or may yield solutions that are optimal in the classical strict notion of optimality, but

  15. Dataflow Query Execution in a Parallel, Main-memory Environment

    NARCIS (Netherlands)

    Wilschut, A.N.; Apers, Peter M.G.

    In this paper, the performance and characteristics of the execution of various join-trees on a parallel DBMS are studied. The results of this study are a step in the direction of the design of a query optimization strategy that is fit for parallel execution of complex queries. Among others,

  16. Dataflow Query Execution in a Parallel Main-Memory Environment

    NARCIS (Netherlands)

    Wilschut, A.N.; Apers, Peter M.G.

    1991-01-01

    The performance and characteristics of the execution of various join-trees on a parallel DBMS are studied. The results are a step in the direction of the design of a query optimization strategy that is fit for parallel execution of complex queries. Among others, synchronization issues are identified

  17. Prototyping and Simulating Parallel, Distributed Computations with VISA

    National Research Council Canada - National Science Library

    Demeure, Isabelle M; Nutt, Gary J

    1989-01-01

    ...] to support the design, prototyping, and simulation of parallel, distributed computations. In particular, VISA is meant to guide the choice of partitioning and communication strategies for such computations, based on their performance...

  18. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
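
    A rough sketch of the two-phase scheme this record describes, with a Python process pool as the "plurality of processors": phase one classifies each processor's distinct set of objects by the grid portion that bounds it, and phase two lets each processor populate its own portion. Point objects, a vertical-slab partition and all sizes are simplifying assumptions of mine.

        import numpy as np
        from multiprocessing import Pool

        NPROC, GRID = 4, 16                    # 4 "processors", a 16x16 grid on the unit square

        def classify(objs):
            # phase 1: route each (point) object to the slab that bounds it
            out = []
            for x, y in objs:
                cx = min(int(x * GRID), GRID - 1)
                portion = min(cx * NPROC // GRID, NPROC - 1)
                out.append((portion, (float(x), float(y))))
            return out

        def populate(args):
            # phase 2: one processor fills its own slab with the objects routed to it
            portion, objs = args
            cells = {}
            for x, y in objs:
                cell = (min(int(x * GRID), GRID - 1), min(int(y * GRID), GRID - 1))
                cells.setdefault(cell, []).append((x, y))
            return portion, cells

        if __name__ == "__main__":
            objects = np.random.default_rng(1).random((1000, 2))
            chunks = np.array_split(objects, NPROC)          # distinct sets of objects
            with Pool(NPROC) as pool:
                routed = pool.map(classify, chunks)
                per_portion = {p: [] for p in range(NPROC)}
                for part in routed:
                    for p, obj in part:
                        per_portion[p].append(obj)
                filled = dict(pool.map(populate, list(per_portion.items())))
            print({p: sum(len(v) for v in c.values()) for p, c in filled.items()})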

  19. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  20. More parallel please

    DEFF Research Database (Denmark)

    Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert

    Abstract [en] More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give...

  1. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Full Text Available Moving mechanical systems with parallel structures are solid, fast, and accurate. Among parallel systems, Stewart platforms stand out as the oldest: fast, solid and precise. The work outlines a few main elements of Stewart platforms, beginning with the platform geometry and its kinematic elements, and then presenting a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile platform is then described using rotation matrices. If a structural motor element consists of two moving elements that translate relative to each other, it is more convenient, for the drive train and especially for the dynamics, to represent the motor element as a single moving component. We thus have seven moving parts (the six motor elements, or feet, plus the mobile platform as part 7) and one fixed part.

  2. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  3. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  4. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  5. Research on Parallel Three Phase PWM Converters base on RTDS

    Science.gov (United States)

    Xia, Yan; Zou, Jianxiao; Li, Kai; Liu, Jingbo; Tian, Jun

    2018-01-01

    Parallel operation of converters can increase the capacity of a system, but it may lead to a potential zero-sequence circulating current, so control of the circulating current is an important goal in the design of parallel inverters. In this paper, the Real Time Digital Simulator (RTDS) is used to model the parallel converter system in real time and to study suppression of the circulating current. The equivalent model of two parallel converters and the zero-sequence circulating current (ZSCC) were established and analyzed, and a strategy using variable zero-vector control was then proposed to suppress the circulating current. For two parallel modular converters, a hardware-in-the-loop (HIL) study based on RTDS and a practical experiment were implemented; the results prove that the proposed control strategy is feasible and effective.

  6. Implementation of a parallel version of a regional climate model

    Energy Technology Data Exchange (ETDEWEB)

    Gerstengarbe, F.W. [ed.; Kuecken, M. [Potsdam-Institut fuer Klimafolgenforschung (PIK), Potsdam (Germany); Schaettler, U. [Deutscher Wetterdienst, Offenbach am Main (Germany). Geschaeftsbereich Forschung und Entwicklung

    1997-10-01

    A regional climate model developed by the Max Planck Institute for Meteorology and the German Climate Computing Centre in Hamburg, based on the 'Europa' and 'Deutschland' models of the German Weather Service, has been parallelized and implemented on the IBM RS/6000 SP computer system of the Potsdam Institute for Climate Impact Research, including parallel input/output processing, the explicit Eulerian time-step, the semi-implicit corrections, the normal-mode initialization and the physical parameterizations of the German Weather Service. The implementation utilizes Fortran 90 and the Message Passing Interface. The parallelization strategy used is a 2D domain decomposition. This report describes the parallelization strategy, the parallel I/O organization, the influence of different domain decomposition approaches for static and dynamic load imbalances and first numerical results. (orig.)

  7. Parallel Jacobi EVD Methods on Integrated Circuits

    Directory of Open Access Journals (Sweden)

    Chi-Chia Sun

    2014-01-01

    Full Text Available Design strategies for parallel iterative algorithms are presented. In order to further study different tradeoff strategies in design criteria for integrated circuits, a 10 × 10 Jacobi Brent-Luk-EVD array with the simplified μ-CORDIC processor is used as an example. The experimental results show that using the μ-CORDIC processor is beneficial for the design criteria, as it yields a smaller area, faster overall computation time, and less energy consumption than the regular CORDIC processor. It is worth noting that the proposed parallel EVD method can be applied to real-time and low-power array signal processing algorithms performing beamforming or DOA estimation.
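
    For reference, the numerical core that such hardware arrays implement is the Jacobi rotation that zeroes one off-diagonal element at a time; the Brent-Luk ordering lets many of these rotations run concurrently. The plain serial sketch below uses floating-point sine/cosine where the μ-CORDIC processors would use shift-and-add rotations, so it illustrates only the algorithm, not the circuit.

        import numpy as np

        def jacobi_evd(A, sweeps=10, tol=1e-12):
            """Cyclic Jacobi eigenvalue iteration for a real symmetric matrix A."""
            A = A.astype(float).copy()
            n = A.shape[0]
            V = np.eye(n)
            for _ in range(sweeps):
                if np.sum(A**2) - np.sum(np.diag(A)**2) < tol:
                    break                      # off-diagonal mass is negligible
                for p in range(n - 1):
                    for q in range(p + 1, n):
                        if abs(A[p, q]) < tol:
                            continue
                        # rotation angle that zeroes A[p, q]
                        theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
                        c, s = np.cos(theta), np.sin(theta)
                        J = np.eye(n)
                        J[p, p] = J[q, q] = c
                        J[p, q], J[q, p] = s, -s
                        A = J.T @ A @ J        # two-sided update
                        V = V @ J              # accumulate eigenvectors
            return np.diag(A), V

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            M = rng.normal(size=(6, 6))
            M = (M + M.T) / 2.0
            w, V = jacobi_evd(M)
            print(np.allclose(np.sort(w), np.linalg.eigvalsh(M)))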

  8. Parallel pic plasma simulation through particle decomposition techniques

    International Nuclear Information System (INIS)

    Briguglio, S.; Vlad, G.; Di Martino, B.; Naples, Univ. 'Federico II'

    1998-02-01

    Particle-in-cell (PIC) codes are among the major candidates to yield a satisfactory description of the detail of kinetic effects, such as the resonant wave-particle interaction, relevant in determining the transport mechanism in magnetically confined plasmas. A significant improvement of the simulation performance of such codes can be expected from parallelization, e.g., by distributing the particle population among several parallel processors. Parallelization of a hybrid magnetohydrodynamic-gyrokinetic code has been accomplished within the High Performance Fortran (HPF) framework, and tested on the IBM SP2 parallel system, using a 'particle decomposition' technique. The adopted technique requires a moderate effort in porting the code to parallel form and results in intrinsic load balancing and modest inter-processor communication. The performance tests obtained confirm the hypothesis of the high effectiveness of the strategy, if targeted towards moderately parallel architectures. Optimal use of resources is also discussed with reference to a specific physics problem [it

  9. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
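
    In the same spirit as the article's tables, the short script below (my own illustration; the values are not taken from the article) lists resistor pairs up to 100 ohms whose parallel combination, given by 1/R = 1/R1 + 1/R2, works out to a whole number.

        def parallel(r1, r2):
            # equivalent resistance of two resistors in parallel
            return r1 * r2 / (r1 + r2)

        pairs = [(r1, r2) for r1 in range(1, 101) for r2 in range(r1, 101)
                 if parallel(r1, r2).is_integer()]
        for r1, r2 in pairs[:10]:
            print(f"{r1} ohm || {r2} ohm = {int(parallel(r1, r2))} ohm")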

  10. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of the research is tools to support the development of parallel programs in C/C++. Methods and software which automate the process of designing parallel applications are proposed.

  11. Increasing phylogenetic resolution at low taxonomic levels using massively parallel sequencing of chloroplast genomes

    Directory of Open Access Journals (Sweden)

    Cronn Richard

    2009-12-01

    Full Text Available Abstract Background Molecular evolutionary studies share the common goal of elucidating historical relationships, and the common challenge of adequately sampling taxa and characters. Particularly at low taxonomic levels, recent divergence, rapid radiations, and conservative genome evolution yield limited sequence variation, and dense taxon sampling is often desirable. Recent advances in massively parallel sequencing make it possible to rapidly obtain large amounts of sequence data, and multiplexing makes extensive sampling of megabase sequences feasible. Is it possible to efficiently apply massively parallel sequencing to increase phylogenetic resolution at low taxonomic levels? Results We reconstruct the infrageneric phylogeny of Pinus from 37 nearly-complete chloroplast genomes (average 109 kilobases each of an approximately 120 kilobase genome) generated using multiplexed massively parallel sequencing. 30/33 ingroup nodes resolved with ≥ 95% bootstrap support; this is a substantial improvement relative to prior studies, and shows massively parallel sequencing-based strategies can produce sufficient high quality sequence to reach support levels originally proposed for the phylogenetic bootstrap. Resampling simulations show that at least the entire plastome is necessary to fully resolve Pinus, particularly in rapidly radiating clades. Meta-analysis of 99 published infrageneric phylogenies shows that whole plastome analysis should provide similar gains across a range of plant genera. A disproportionate amount of phylogenetic information resides in two loci (ycf1, ycf2), highlighting their unusual evolutionary properties. Conclusion Plastome sequencing is now an efficient option for increasing phylogenetic resolution at lower taxonomic levels in plant phylogenetic and population genetic analyses. With continuing improvements in sequencing capacity, the strategies herein should revolutionize efforts requiring dense taxon and character sampling.

  12. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest...... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

  13. The Songbird Neurogenomics (SoNG) Initiative: Community-based tools and strategies for study of brain gene function and evolution

    Directory of Open Access Journals (Sweden)

    Lewin Harris A

    2008-03-01

    Full Text Available Abstract Background Songbirds hold great promise for biomedical, environmental and evolutionary research. A complete draft sequence of the zebra finch genome is imminent, yet a need remains for application of genomic resources within a research community traditionally focused on ethology and neurobiological methods. In response, we developed a core set of genomic tools and a novel collaborative strategy to probe gene expression in diverse songbird species and natural contexts. Results We end-sequenced cDNAs from zebra finch brain and incorporated additional sequences from community sources into a database of 86,784 high quality reads. These assembled into 31,658 non-redundant contigs and singletons, which we annotated via BLAST search of chicken and human databases. The results are publicly available in the ESTIMA:Songbird database. We produced a spotted cDNA microarray with 20,160 addresses representing 17,214 non-redundant products of an estimated 11,500–15,000 genes, validating it by analysis of immediate-early gene (zenk) gene activation following song exposure and by demonstrating effective cross hybridization to genomic DNAs of other songbird species in the Passerida Parvorder. Our assembly was also used in the design of the "Lund-zfa" Affymetrix array representing ~22,000 non-redundant sequences. When the two arrays were hybridized to cDNAs from the same set of male and female zebra finch brain samples, both arrays detected a common set of regulated transcripts with a Pearson correlation coefficient of 0.895. To stimulate use of these resources by the songbird research community and to maintain consistent technical standards, we devised a "Community Collaboration" mechanism whereby individual birdsong researchers develop experiments and provide tissues, but a single individual in the community is responsible for all RNA extractions, labelling and microarray hybridizations. Conclusion Immediately, these results set the foundation for a

  14. Two-dimensional porous architecture of protonated GCN and reduced graphene oxide via electrostatic self-assembly strategy for high photocatalytic hydrogen evolution under visible light

    Energy Technology Data Exchange (ETDEWEB)

    Pu, Chenchen; Wan, Jun; Liu, Enzhou; Yin, Yunchao; Li, Juan; Ma, Yongning [School of Chemical Engineering, Northwest University, Xi’an 710069 (China); Fan, Jun, E-mail: fanjun@nwu.edu.cn [School of Chemical Engineering, Northwest University, Xi’an 710069 (China); Hu, Xiaoyun, E-mail: hxy3275@nwu.edu.cn [School of Physics, Northwest University, Xi’an 710069 (China)

    2017-03-31

    Highlights: • The protonated GCN (pGCN) is prepared by acidic cutting and a hydrothermal process. • The pGCN coupled with rGO is synthesized via an electrostatic self-assembly strategy. • The pGCN-5 wt% rGO is obtained with a high specific surface area of 115.64 m² g⁻¹. • The pGCN-5 wt% rGO photocatalysts exhibit superb photocatalytic reduction capacity. - Abstract: Herein, porous protonated graphitic carbon nitride (pGCN) is prepared from bulk g-C₃N₄ (GCN) directly by acidic cutting and a hydrothermal process. The holey structure not only accelerates photo-induced charge transfer and thus reduces aggregation, but also endows the GCN with more exposed active sites. The pGCN is obtained with an increased band gap of 2.91 eV together with a higher specific surface area of 82.76 m² g⁻¹. Meanwhile, the positively charged GCN resulting from the protonation pretreatment is beneficial for improving the interaction with negatively charged GO sheets. Compared with GCN, pGCN-rGO displays a significant decrease of PL intensities and an apparent enhancement of visible-light absorption, resulting in a lower charge recombination rate and better light absorption. Besides, the enhanced charge separation is demonstrated by photoluminescence emission spectroscopy and transient photocurrent measurements. The photocatalytic performance studies for the degradation of MB indicate that pGCN-rGO exhibits the highest adsorption ability towards dye molecules. In addition, the pGCN-5 wt% rGO composite shows the optimal photocatalytic activity: the photodegradation rate of MB is 99.4% after 80 min of irradiation and the H₂ evolution performance is up to 557 μmol g⁻¹ h⁻¹ under visible light, which is much higher than that of the other control samples.

  15. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Parallel channel interactions are examined. Results of the phenomenon analysis and of experimental investigations of nonstationary flow regimes in three parallel vertical channels are shown, covering the mechanisms of parallel channel interaction under adiabatic conditions for single-phase fluid and two-phase mixture flow. (author)

  16. Massively Parallel QCD

    International Nuclear Information System (INIS)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

  17. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  18. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  19. Fast parallel event reconstruction

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items in one register and operating on all of them, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased 120,000 times, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...

  20. Parallel Evolutionary Optimization of Multibody Systems with Application to Railway Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Eberhard, Peter [University of Erlangen-Nuremberg, Institute of Applied Mechanics (Germany)], E-mail: eberhard@ltm.uni-erlangen.de; Dignath, Florian [University of Stuttgart, Institute B of Mechanics (Germany)], E-mail: fd@mechb.uni-stuttgart.de; Kuebler, Lars [University of Erlangen-Nuremberg, Institute of Applied Mechanics (Germany)], E-mail: kuebler@ltm.uni-erlangen.de

    2003-03-15

    The optimization of multibody systems usually requires many costly criteria computations since the equations of motion must be evaluated by numerical time integration for each considered design. For actively controlled or flexible multibody systems additional difficulties arise as the criteria may contain non-differentiable points or many local minima. Therefore, in this paper a stochastic evolution strategy is used in combination with parallel computing in order to reduce the computation times whilst keeping the inherent robustness. For the parallelization a master-slave approach is used in a heterogeneous workstation/PC cluster. The pool-of-tasks concept is applied in order to deal with the frequently changing workloads of different machines in the cluster. In order to analyze the performance of the parallel optimization method, the suspension of an ICE passenger coach, modeled as an elastic multibody system, is optimized simultaneously with regard to several criteria including vibration damping and a criterion related to safety against derailment. The iterative and interactive nature of a typical optimization process for technical systems is emphasized.

  1. Parallel Evolutionary Optimization of Multibody Systems with Application to Railway Dynamics

    International Nuclear Information System (INIS)

    Eberhard, Peter; Dignath, Florian; Kuebler, Lars

    2003-01-01

    The optimization of multibody systems usually requires many costly criteria computations since the equations of motion must be evaluated by numerical time integration for each considered design. For actively controlled or flexible multibody systems additional difficulties arise as the criteria may contain non-differentiable points or many local minima. Therefore, in this paper a stochastic evolution strategy is used in combination with parallel computing in order to reduce the computation times whilst keeping the inherent robustness. For the parallelization a master-slave approach is used in a heterogeneous workstation/PC cluster. The pool-of-tasks concept is applied in order to deal with the frequently changing workloads of different machines in the cluster. In order to analyze the performance of the parallel optimization method, the suspension of an ICE passenger coach, modeled as an elastic multibody system, is optimized simultaneously with regard to several criteria including vibration damping and a criterion related to safety against derailment. The iterative and interactive nature of a typical optimization process for technical systems is emphasized
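
    A minimal sketch of the master-slave pattern described above, with a Python process pool standing in for the workstation/PC cluster and a cheap analytic objective standing in for the costly multibody time integration. The (mu + lambda) selection, population sizes and mutation step are illustrative choices of mine, not the authors' settings.

        import numpy as np
        from multiprocessing import Pool

        def objective(x):
            # stand-in for the expensive criteria computation (numerical time integration)
            return float(np.sum(x**2) + 0.1 * np.sum(np.sin(5.0 * x)**2))

        def evolve(mu=5, lam=20, dim=8, sigma=0.3, generations=50, seed=0):
            rng = np.random.default_rng(seed)
            parents = rng.normal(size=(mu, dim))
            with Pool() as pool:                       # the "slaves" evaluating designs
                fitness = np.array(pool.map(objective, parents))
                for _ in range(generations):
                    # offspring: recombine two random parents, then mutate
                    idx = rng.integers(mu, size=(lam, 2))
                    offspring = 0.5 * (parents[idx[:, 0]] + parents[idx[:, 1]])
                    offspring += sigma * rng.normal(size=offspring.shape)
                    # distribute the expensive evaluations over the worker pool
                    off_fit = np.array(pool.map(objective, offspring))
                    # (mu + lambda) selection keeps the best of parents and offspring
                    pop = np.vstack([parents, offspring])
                    fit = np.concatenate([fitness, off_fit])
                    best = np.argsort(fit)[:mu]
                    parents, fitness = pop[best], fit[best]
            return parents[0], fitness[0]

        if __name__ == "__main__":
            x_best, f_best = evolve()
            print("best objective value:", f_best)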

  2. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  3. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by using a digital micromirror device to modulate spatially separated polarization components of a laser, which are subsequently beam-combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  4. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    Science.gov (United States)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)

    2000-01-01

    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.

  5. Parallel processing for nonlinear dynamics simulations of structures including rotating bladed-disk assemblies

    Science.gov (United States)

    Hsieh, Shang-Hsien

    1993-01-01

    The principal objective of this research is to develop, test, and implement coarse-grained, parallel-processing strategies for nonlinear dynamic simulations of practical structural problems. There are contributions to four main areas: finite element modeling and analysis of rotational dynamics, numerical algorithms for parallel nonlinear solutions, automatic partitioning techniques to effect load-balancing among processors, and an integrated parallel analysis system.

  6. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unified framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with the effort invested. This paper aims to make a small contribution to these efforts. We propose an overview of parallel programming, parallel execution, and collaborative systems.

  7. A parallel adaptive finite element simplified spherical harmonics approximation solver for frequency domain fluorescence molecular imaging

    International Nuclear Information System (INIS)

    Lu Yujie; Zhu Banghe; Rasmussen, John C; Sevick-Muraca, Eva M; Shen Haiou; Wang Ge

    2010-01-01

    Fluorescence molecular imaging/tomography may play an important future role in preclinical research and clinical diagnostics. Time- and frequency-domain fluorescence imaging can acquire more measurement information than the continuous wave (CW) counterpart, improving the image quality of fluorescence molecular tomography. Although diffusion approximation (DA) theory has been extensively applied in optical molecular imaging, high-order photon migration models need to be further investigated to match quantitation provided by nuclear imaging. In this paper, a frequency-domain parallel adaptive finite element solver is developed with simplified spherical harmonics (SP_N) approximations. To fully evaluate the performance of the SP_N approximations, a fast time-resolved tetrahedron-based Monte Carlo fluorescence simulator suitable for complex heterogeneous geometries is developed using a convolution strategy to realize the simulation of the fluorescence excitation and emission. The validation results show that high-order SP_N can effectively correct the modeling errors of the diffusion equation, especially when the tissues have high absorption characteristics or when high modulation frequency measurements are used. Furthermore, the parallel adaptive mesh evolution strategy improves the modeling precision and the simulation speed significantly on a realistic digital mouse phantom. This solver is a promising platform for fluorescence molecular tomography using high-order approximations to the radiative transfer equation.

  8. Evolution of morphological and climatic adaptations in Veronica L. (Plantaginaceae)

    Directory of Open Access Journals (Sweden)

    Jian-Cheng Wang

    2016-08-01

    Full Text Available Perennials and annuals apply different strategies to adapt to adverse environments, based on ‘tolerance’ and ‘avoidance’, respectively. To understand lifespan evolution and its impact on plant adaptability, we carried out a comparative study of perennials and annuals in the genus Veronica from a phylogenetic perspective. The results showed that the ancestors of the genus Veronica were likely to be perennial plants. The annual life history of Veronica has evolved multiple times, and subtrees with more annual species have a higher substitution rate. Annuals can adapt to more xeric habitats than perennials, which indicates that annuals are more drought-resistant than their perennial relatives. Due to adaptation to similar selective pressures, parallel evolution occurs in morphological characters among annual species of Veronica.

  9. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and the work should be possible to split between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
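
    The cycle-based master-slave organization described above can be sketched with mpi4py; this is a simplified stand-in for the framework, with a placeholder per-cycle work function rather than the ACO or SNF kernels.

    ```python
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    def process(chunk):
        # Placeholder for one cycle of slave work (e.g. ACO tour construction
        # or one SNF filtering pass over an image strip).
        return [x * x for x in chunk]

    N_CYCLES = 5
    data = list(range(100)) if rank == 0 else None

    for cycle in range(N_CYCLES):
        # Master splits the work into one chunk per process and scatters it.
        chunks = None
        if rank == 0:
            chunks = [data[i::size] for i in range(size)]
        chunk = comm.scatter(chunks, root=0)

        partial = process(chunk)            # every process works on its chunk

        # Master gathers the partial results and prepares the next cycle.
        gathered = comm.gather(partial, root=0)
        if rank == 0:
            data = [x for part in gathered for x in part]

    if rank == 0:
        print("final size:", len(data))
    ```

    Saved as, say, framework_sketch.py (a hypothetical name), the sketch would be launched with mpiexec -n 4 python framework_sketch.py.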

  10. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When the anti... ...about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in case of nucleobases Y and Z a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one of the phosphate groups in the Watson-Crick strand. Also, it was shown that the nucleobase Y made a good stacking and binding with the other nucleobases in the TFO and Watson-Crick duplex, respectively. In contrast, the nucleobase Z with the LNA moiety was forced to twist out of the plane of the Watson-Crick base pair, which...

  11. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
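
    The consensus step, weighting and combining the stage-network outputs, can be illustrated as follows; the stage outputs are synthetic, and the optimized weights of the paper are replaced by a simple accuracy-proportional weighting for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_samples, n_classes, n_stages = 6, 3, 4

    # Synthetic per-stage class-probability outputs (one row per sample).
    stage_outputs = rng.dirichlet(np.ones(n_classes), size=(n_stages, n_samples))

    # Stand-in stage weights, e.g. proportional to each stage's validation
    # accuracy; the paper obtains these weights by optimization instead.
    stage_accuracy = np.array([0.7, 0.8, 0.6, 0.9])
    weights = stage_accuracy / stage_accuracy.sum()

    # Consensual decision: weighted combination of stage outputs, then argmax.
    consensus = np.tensordot(weights, stage_outputs, axes=1)   # (n_samples, n_classes)
    labels = consensus.argmax(axis=1)
    print(labels)
    ```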

  12. Parallel science and engineering applications the Charm++ approach

    CERN Document Server

    Kale, Laxmikant V

    2016-01-01

    Developed in the context of science and engineering applications, with each abstraction motivated by and further honed by specific application needs, Charm++ is a production-quality system that runs on almost all parallel computers available. Parallel Science and Engineering Applications: The Charm++ Approach surveys a diverse and scalable collection of science and engineering applications, most of which are used regularly on supercomputers by scientists to further their research. After a brief introduction to Charm++, the book presents several parallel CSE codes written in the Charm++ model, along with their underlying scientific and numerical formulations, explaining their parallelization strategies and parallel performance. These chapters demonstrate the versatility of Charm++ and its utility for a wide variety of applications, including molecular dynamics, cosmology, quantum chemistry, fracture simulations, agent-based simulations, and weather modeling. The book is intended for a wide audience of people i...

  13. A Parallel Particle Swarm Optimizer

    National Research Council Canada - National Science Library

    Schutte, J. F; Fregly, B .J; Haftka, R. T; George, A. D

    2003-01-01

    .... Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based global optimizer, the Particle Swarm...
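
    A synchronous parallel PSO, in which the costly fitness evaluations are farmed out to worker processes at every iteration, can be sketched as below; the sphere objective is a stand-in for the biomechanical identification problem, and the inertia and acceleration coefficients are generic choices, not those of the paper.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def objective(x):
        # Stand-in for an expensive simulation-based fitness (sphere function).
        return float(np.sum(x ** 2))

    def pso(dim=5, n_particles=20, iters=50, workers=4, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, size=(n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), np.full(n_particles, np.inf)
        gbest, gbest_val = None, np.inf

        with Pool(workers) as pool:
            for _ in range(iters):
                vals = np.array(pool.map(objective, list(x)))  # parallel evaluations
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                if pbest_val.min() < gbest_val:
                    gbest_val = pbest_val.min()
                    gbest = pbest[pbest_val.argmin()].copy()
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
                x = x + v
        return gbest, gbest_val

    if __name__ == "__main__":
        best, val = pso()
        print(val)
    ```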

  14. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming. Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  15. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    ...adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...

  16. Vacuum Large Current Parallel Transfer Numerical Analysis

    Directory of Open Access Journals (Sweden)

    Enyuan Dong

    2014-01-01

    Full Text Available The stable operation and reliable breaking of large generator currents is a difficult problem in power systems. It can be solved successfully by parallel interrupters and a proper timing sequence with phase-control technology, in which the breaker control strategy is decided by the times of both the first-opening phase and the second-opening phase. A precise transfer-current model can provide the proper timing sequence for breaking the generator circuit breaker. By analysis of transfer-current experiments and data, the real vacuum arc resistance and a precise corrected model of the large transfer-current process are obtained in this paper. The transfer time calculated by the corrected model is very close to the actual transfer time. This can provide guidance for planning the proper timing sequence and breaking the vacuum generator circuit breaker with parallel interrupters.

  17. Flexibility and Performance of Parallel File Systems

    Science.gov (United States)

    Kotz, David; Nieuwejaar, Nils

    1996-01-01

    As we gain experience with parallel file systems, it becomes increasingly clear that a single solution does not suit all applications. For example, it appears to be impossible to find a single appropriate interface, caching policy, file structure, or disk-management strategy. Furthermore, the proliferation of file-system interfaces and abstractions makes applications difficult to port. We propose that the traditional functionality of parallel file systems be separated into two components: a fixed core that is standard on all platforms, encapsulating only primitive abstractions and interfaces, and a set of high-level libraries to provide a variety of abstractions and application-programmer interfaces (APIs). We present our current and next-generation file systems as examples of this structure. Their features, such as a three-dimensional file structure, strided read and write interfaces, and I/O-node programs, are specifically designed with the flexibility and performance necessary to support a wide range of applications.

  18. Parallel algorithms for boundary value problems

    Science.gov (United States)

    Lin, Avi

    1991-01-01

    A general approach to solve boundary value problems numerically in a parallel environment is discussed. The basic algorithm consists of two steps: the local step where all the P available processors work in parallel, and the global step where one processor solves a tridiagonal linear system of the order P. The main advantages of this approach are twofold. First, this suggested approach is very flexible, especially in the local step and thus the algorithm can be used with any number of processors and with any of the SIMD or MIMD machines. Secondly, the communication complexity is very small and thus can be used as easily with shared memory machines. Several examples for using this strategy are discussed.
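
    The two-step structure can be made concrete on the 1-D model problem -u'' = f with homogeneous Dirichlet conditions: each of the P "processors" performs independent local tridiagonal solves, and a single small tridiagonal interface system (of order P-1 in this sketch) is then solved globally before a fully parallel back substitution. The loop over subdomains below emulates the P processors; the decomposition details are an illustration, not the paper's general formulation.

    ```python
    import numpy as np
    from scipy.linalg import solve_banded

    def tridiag_solve(rhs):
        # Solve the tridiagonal system tridiag(-1, 2, -1) x = rhs.
        m = len(rhs)
        ab = np.zeros((3, m))
        ab[0, 1:] = -1.0   # super-diagonal
        ab[1, :] = 2.0     # main diagonal
        ab[2, :-1] = -1.0  # sub-diagonal
        return solve_banded((1, 1), ab, rhs)

    def bvp_two_step(f, P=4, m=5):
        # -u'' = f on (0,1), u(0)=u(1)=0; P subdomains of m interior points each,
        # separated by P-1 interface points (n interior points in total).
        n = P * m + (P - 1)
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1.0 - h, n)
        b = h * h * f(x)

        # --- Local step: each "processor" k solves three small systems. ---
        e1, em = np.eye(m)[0], np.eye(m)[-1]
        up, gL, gR, slabs = [], [], [], []
        for k in range(P):
            s = slice(k * (m + 1), k * (m + 1) + m)
            slabs.append(s)
            up.append(tridiag_solve(b[s]))   # local solve, zero interface values
            gL.append(tridiag_solve(e1))     # response to a unit left-interface value
            gR.append(tridiag_solve(em))     # response to a unit right-interface value

        # --- Global step: one tridiagonal system of order P-1 for the interfaces. ---
        iface = [k * (m + 1) + m for k in range(P - 1)]
        diag = np.array([2.0 - gR[k][-1] - gL[k + 1][0] for k in range(P - 1)])
        lower = np.array([-gL[k][-1] for k in range(1, P - 1)])
        upper = np.array([-gR[k + 1][0] for k in range(P - 2)])
        rhs = np.array([b[j] + up[k][-1] + up[k + 1][0] for k, j in enumerate(iface)])
        ab = np.zeros((3, P - 1))
        ab[1] = diag
        if P > 2:
            ab[0, 1:] = upper
            ab[2, :-1] = lower
        uI = solve_banded((1, 1), ab, rhs)

        # --- Back substitution on each subdomain (again fully parallel). ---
        u = np.zeros(n)
        u[iface] = uI
        for k in range(P):
            uL = uI[k - 1] if k > 0 else 0.0
            uR = uI[k] if k < P - 1 else 0.0
            u[slabs[k]] = up[k] + uL * gL[k] + uR * gR[k]
        return x, u

    if __name__ == "__main__":
        x, u = bvp_two_step(lambda t: np.pi ** 2 * np.sin(np.pi * t))
        print(np.max(np.abs(u - np.sin(np.pi * x))))   # small discretization error
    ```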

  19. Neural nets for massively parallel optimization

    Science.gov (United States)

    Dixon, Laurence C. W.; Mills, David

    1992-07-01

    To apply massively parallel processing systems to the solution of large scale optimization problems it is desirable to be able to evaluate any function f(z), z (epsilon) Rn in a parallel manner. The theorem of Cybenko, Hecht Nielsen, Hornik, Stinchcombe and White, and Funahasi shows that this can be achieved by a neural network with one hidden layer. In this paper we address the problem of the number of nodes required in the layer to achieve a given accuracy in the function and gradient values at all points within a given n dimensional interval. The type of activation function needed to obtain nonsingular Hessian matrices is described and a strategy for obtaining accurate minimal networks presented.

  20. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    International Nuclear Information System (INIS)

    Lu Liuyan; Lantz, Steven R.; Ren Zhuyin; Pope, Stephen B.

    2009-01-01

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel
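
    The three fixed distribution strategies can be caricatured with a toy task-redistribution helper; this is not the x2f_mpi implementation, and in particular the 'PREF' rule here (tasks shared within a small group of ranks) is only a simplified stand-in for the paper's preferential strategy.

    ```python
    import random

    def distribute(local_tasks, strategy, n_ranks, group_size=2, seed=0):
        """Toy redistribution of chemistry tasks among ranks.

        local_tasks: dict {rank: [task, ...]} of locally generated particle tasks.
        strategy: 'PLP' (purely local), 'URAN' (uniformly random) or 'PREF'
                  (preferential: tasks stay within a small group of ranks).
        """
        rng = random.Random(seed)
        new_map = {r: [] for r in range(n_ranks)}

        for rank, tasks in local_tasks.items():
            for t in tasks:
                if strategy == "PLP":
                    target = rank                      # no message passing at all
                elif strategy == "URAN":
                    target = rng.randrange(n_ranks)    # spread over all ranks
                elif strategy == "PREF":
                    group = [(rank // group_size) * group_size + i
                             for i in range(group_size)]   # ranks sharing a group
                    target = rng.choice([g for g in group if g < n_ranks])
                else:
                    raise ValueError(strategy)
                new_map[target].append(t)
        return new_map

    if __name__ == "__main__":
        tasks = {0: ["p0", "p1", "p2"], 1: ["p3"], 2: ["p4", "p5"], 3: []}
        for s in ("PLP", "URAN", "PREF"):
            print(s, distribute(tasks, s, n_ranks=4))
    ```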

  1. Parallel and distributed processing in two SGBDS: A case study

    OpenAIRE

    Francisco Javier Moreno; Nataly Castrillón Charari; Camilo Taborda Zuluaga

    2017-01-01

    Context: One of the strategies for managing large volumes of data is distributed and parallel computing. Among the tools that allow applying these characteristics are some Data Base Management Systems (DBMS), such as Oracle, DB2, and SQL Server. Method: In this paper we present a case study where we evaluate the performance of an SQL query in two of these DBMS. The evaluation is done through various forms of data distribution in a computer network with different degrees of parallelism. ...

  2. Fluorous Parallel Synthesis of A Hydantoin/Thiohydantoin Library

    OpenAIRE

    Lu, Yimin; Zhang, Wei

    2005-01-01

    Fluorous tagging strategy is applied to solution-phase parallel synthesis of a library containing hydantoin and thiohydantoin analogs. Two perfluoroalkyl (Rf)-tagged α-amino esters each react with 6 aromatic aldehydes under reductive amination conditions. Twelve amino esters then each react with 10 isocyanates and isothiocyanates in parallel. The resulting 120 ureas and thioureas undergo spontaneous cyclization to form the corresponding hydantoins and thiohydantoins. The intermediate and fina...

  3. Chromosomal Evolution in Chiroptera.

    Science.gov (United States)

    Sotero-Caio, Cibele G; Baker, Robert J; Volleth, Marianne

    2017-10-13

    Chiroptera is the second largest order among mammals, with over 1300 species in 21 extant families. The group is extremely diverse in several aspects of its natural history, including dietary strategies, ecology, behavior and morphology. Bat genomes show ample chromosome diversity (from 2n = 14 to 62). As with other mammalian orders, Chiroptera is characterized by clades with low, moderate and extreme chromosomal change. In this article, we will discuss trends of karyotypic evolution within distinct bat lineages (especially Phyllostomidae, Hipposideridae and Rhinolophidae), focusing on two perspectives: evolution of genome architecture, modes of chromosomal evolution, and the use of chromosome data to resolve taxonomic problems.

  4. Chromosomal Evolution in Chiroptera

    Directory of Open Access Journals (Sweden)

    Cibele G. Sotero-Caio

    2017-10-01

    Full Text Available Chiroptera is the second largest order among mammals, with over 1300 species in 21 extant families. The group is extremely diverse in several aspects of its natural history, including dietary strategies, ecology, behavior and morphology. Bat genomes show ample chromosome diversity (from 2n = 14 to 62). As with other mammalian orders, Chiroptera is characterized by clades with low, moderate and extreme chromosomal change. In this article, we will discuss trends of karyotypic evolution within distinct bat lineages (especially Phyllostomidae, Hipposideridae and Rhinolophidae), focusing on two perspectives: evolution of genome architecture, modes of chromosomal evolution, and the use of chromosome data to resolve taxonomic problems.

  5. Identification of Novel Betaherpesviruses in Iberian Bats Reveals Parallel Evolution

    OpenAIRE

    Pozo, Francisco; Juste, Javier; Vázquez-Morón, Sonia; Anar-López, Carolina; Ibáñez, Carlos; Garin, Inazio; Aihartza, Joxerra; Casas, Inmaculada; Tenorio, Antonio; Echevarría, Juan E.

    2016-01-01

    A thorough search for bat herpesviruses was carried out in oropharyngeal samples taken from most of the bat species present in the Iberian Peninsula from the Vespertilionidae, Miniopteridae, Molossidae and Rhinolophidae families, in addition to a colony of captive fruit bats from the Pteropodidae family. By using two degenerate consensus PCR methods targeting two conserved genes, distinct and previously unrecognized bat-hosted herpesviruses were identified for most of the tested species. ...

  6. Parallel evolution of storage roots in Morning Glories (Convolvulaceae)

    Science.gov (United States)

    Storage roots are an ecologically and agriculturally important plant trait. In morning glories, storage roots are well characterized in the crop species sweetpotato. Storage roots have evolved numerous times across the morning glory family. This study aims to understand whether this was through para...

  7. Parallel evolution of tumor subclones mimics diversity between tumors

    DEFF Research Database (Denmark)

    Martinez, Pierre; Birkbak, Nicolai Juul; Gerlinger, Marco

    2013-01-01

    Intratumor heterogeneity (ITH) may foster tumor adaptation and compromise the efficacy of personalized medicines approaches. The scale of heterogeneity within a tumor (intratumor heterogeneity) relative to genetic differences between tumors (intertumor heterogeneity) is unknown. To address this, ...

  8. Species specificity in major urinary proteins by parallel evolution.

    Directory of Open Access Journals (Sweden)

    Darren W Logan

    Full Text Available Species-specific chemosignals, pheromones, regulate social behaviors such as aggression, mating, pup-suckling, territory establishment, and dominance. The identity of these cues remains mostly undetermined and few mammalian pheromones have been identified. Genetically-encoded pheromones are expected to exhibit several different mechanisms for coding: (1) diversity, to enable the signaling of multiple behaviors, (2) dynamic regulation, to indicate age and dominance, and (3) species-specificity. Recently, the major urinary proteins (Mups) have been shown to function themselves as genetically-encoded pheromones to regulate species-specific behavior. Mups are multiple highly related proteins expressed in combinatorial patterns that differ between individuals, gender, and age, which are sufficient to fulfill the first two criteria. We have now characterized and fully annotated the mouse Mup gene content in detail. This has enabled us to further analyze the extent of Mup coding diversity and determine their potential to encode species-specific cues. Our results show that the mouse Mup gene cluster is composed of two subgroups: an older, more divergent class of genes and pseudogenes, and a second class with high sequence identity formed by recent sequential duplications of a single gene/pseudogene pair. Previous work suggests that truncated Mup pseudogenes may encode a family of functional hexapeptides with the potential for pheromone activity. Sequence comparison, however, reveals that they have limited coding potential. Similar analyses of nine other completed genomes find Mup gene expansions in divergent lineages, including those of rat, horse and grey mouse lemur, occurring independently from a single ancestral Mup present in other placental mammals. Our findings illustrate that increasing genomic complexity of the Mup gene family is not evolutionarily isolated, but is instead a recurring mechanism of generating coding diversity consistent with a species-specific function in mammals.

  9. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    Full Text Available The problem of parallel import is an urgent question today. Legalization of parallel import in Russia is expedient; this statement is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of this decision and to apply remedies to minimize them.

  10. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  11. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  12. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  13. Parallel community climate model: Description and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Drake, J.B.; Flanery, R.E.; Semeraro, B.D.; Worley, P.H. [and others

    1996-07-15

    This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.
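
    The patch decomposition can be illustrated with a small helper that splits a latitude-longitude grid into rectangular patches and assigns each patch to a processor; the patch shape and ordering are illustrative, not PCCM2's actual mapping.

    ```python
    import numpy as np

    def assign_patches(n_lat, n_lon, p_lat, p_lon):
        """Map each (lat, lon) grid point to the processor that owns its patch."""
        owner = np.empty((n_lat, n_lon), dtype=int)
        lat_edges = np.linspace(0, n_lat, p_lat + 1, dtype=int)
        lon_edges = np.linspace(0, n_lon, p_lon + 1, dtype=int)
        for i in range(p_lat):
            for j in range(p_lon):
                owner[lat_edges[i]:lat_edges[i + 1],
                      lon_edges[j]:lon_edges[j + 1]] = i * p_lon + j
        return owner

    # An 8 x 16 grid split among 2 x 4 = 8 processors; the column physics at a
    # grid point uses only data inside its patch, so each processor can run its
    # physics calculations independently.
    print(assign_patches(8, 16, 2, 4))
    ```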

  14. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    Science.gov (United States)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, particularly, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that consists of the minimization of a cost function, a minimization achieved by means of a serial Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, it is of a parallel Jacobi class with alternating minimizations. This strategy is known as the chessboard type, where red pixels can be updated in parallel in the same iteration since they are independent. Similarly, black pixels can be updated in parallel in an alternating iteration. We present parallel implementations of our algorithm for different parallel architectures such as multicore CPU, the Xeon Phi coprocessor, and NVIDIA graphics processing units. In all cases, our parallel algorithm outperforms the original serial version. In addition, we present a detailed comparative performance analysis of the developed parallel versions.
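
    The chessboard (red-black) schedule itself is easy to illustrate on a generic smoothing iteration; the sketch below applies it to a Laplace-type update, using NumPy slicing as a stand-in for the parallel hardware, and does not reproduce the accumulation-of-residual-maps cost function of the paper.

    ```python
    import numpy as np

    def red_black_smooth(u, iterations=100):
        """Gauss-Seidel-style smoothing with a chessboard (red-black) update order.

        All 'red' cells (i+j even) are mutually independent, so they can be
        updated in parallel; the same holds for the 'black' cells (i+j odd).
        Here NumPy's vectorized indexing plays the role of the parallel update.
        """
        u = u.copy()
        ny, nx = u.shape
        i, j = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
        red = ((i + j) % 2 == 0)
        interior = np.zeros_like(red)
        interior[1:-1, 1:-1] = True

        for _ in range(iterations):
            for color in (red, ~red):
                mask = color & interior
                avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                              + np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u[mask] = avg[mask]
        return u

    if __name__ == "__main__":
        u = np.zeros((32, 32))
        u[0, :] = 1.0                      # fixed boundary values
        print(red_black_smooth(u).mean())
    ```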

  15. Local and Nonlocal Parallel Heat Transport in General Magnetic Fields

    International Nuclear Information System (INIS)

    Castillo-Negrete, D. del; Chacon, L.

    2011-01-01

    A novel approach for the study of parallel transport in magnetized plasmas is presented. The method avoids numerical pollution issues of grid-based formulations and applies to integrable and chaotic magnetic fields with local or nonlocal parallel closures. In weakly chaotic fields, the method gives the fractal structure of the devil's staircase radial temperature profile. In fully chaotic fields, the temperature exhibits self-similar spatiotemporal evolution with a stretched-exponential scaling function for local closures and an algebraically decaying one for nonlocal closures. It is shown that, for both closures, the effective radial heat transport is incompatible with the quasilinear diffusion model.

  16. Parallel implementations of 2D explicit Euler solvers

    International Nuclear Information System (INIS)

    Giraud, L.; Manzini, G.

    1996-01-01

    In this work we present a subdomain partitioning strategy applied to an explicit high-resolution Euler solver. We describe the design of a portable parallel multi-domain code suitable for parallel environments. We present several implementations on a representative range of MIMD computers that include shared memory multiprocessors, distributed virtual shared memory computers, as well as networks of workstations. Computational results are given to illustrate the efficiency, the scalability, and the limitations of the different approaches. We also discuss the effect of the communication protocol on the optimal domain partitioning strategy for the distributed memory computers

  17. Parallel computing solution of Boltzmann neutron transport equation

    International Nuclear Information System (INIS)

    Ansah-Narh, T.

    2010-01-01

    The focus of the research was on developing a parallel computing algorithm for solving eigenvalues of the Boltzmann Neutron Transport Equation (BNTE) in a slab geometry using a multigrid approach. In response to the slow execution of serial computing when solving large problems such as the BNTE, the study focused on the design of parallel computing systems, an evolution of serial computing that uses multiple processing elements simultaneously to solve complex physical and mathematical problems. The finite element method (FEM) was used for the spatial discretization scheme, while angular discretization was accomplished by expanding the angular dependence in terms of Legendre polynomials. The eigenvalues representing the multiplication factors in the BNTE were determined by the power method. MATLAB Compiler Version 4.1 (R2009a) was used to compile the MATLAB codes of the BNTE. The implemented parallel algorithms were enabled with matlabpool, a Parallel Computing Toolbox function. The option UseParallel was set to 'always' (the default value of the option is 'never'); under those conditions, the solvers computed estimated gradients in parallel. The parallel computing system was used to handle all the bottlenecks in the matrix generated from the finite element scheme and in each domain generated by the power method. The parallel algorithm was implemented on a Symmetric Multi Processor (SMP) cluster machine with Intel 32-bit quad-core x86 processors. Convergence rates and timings for the algorithm on the SMP cluster machine were obtained. Numerical experiments indicated that the designed parallel algorithm could reach perfect speedup and had good stability and scalability. (au)
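
    The power-method outer iteration parallelizes naturally because its dominant cost is the matrix-vector product; the sketch below splits the operator's rows across worker processes (a Python stand-in for the MATLAB matlabpool setup described above), with a random symmetric matrix standing in for the discretized transport operator.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def _block_matvec(args):
        block, v = args
        return block @ v

    def parallel_power_method(A, workers=4, iters=100):
        """Dominant eigenvalue of A via the power method with a row-partitioned,
        process-parallel matrix-vector product.  For simplicity the blocks are
        re-sent every iteration; a real code would keep them resident per process."""
        blocks = np.array_split(A, workers, axis=0)
        v = np.ones(A.shape[1]) / np.sqrt(A.shape[1])
        lam = 0.0
        with Pool(workers) as pool:
            for _ in range(iters):
                parts = pool.map(_block_matvec, [(b, v) for b in blocks])
                y = np.concatenate(parts)
                lam = v @ y                      # Rayleigh-quotient estimate
                v = y / np.linalg.norm(y)
        return lam, v

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.random((400, 400))
        A = A + A.T                              # symmetric stand-in operator
        lam, _ = parallel_power_method(A)
        print(lam, np.linalg.eigvalsh(A)[-1])    # compare with a direct solve
    ```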

  18. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  19. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  20. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.

  1. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
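
    The core idea, comparing local checkpoint blocks against checksums of a previously stored template and keeping only the blocks that differ, can be sketched as follows; the block size, hash choice, and in-memory template are illustrative assumptions, not details of the patented system.

    ```python
    import hashlib

    BLOCK = 4096  # bytes per checkpoint block (illustrative)

    def block_checksums(data: bytes):
        return [hashlib.sha1(data[i:i + BLOCK]).hexdigest()
                for i in range(0, len(data), BLOCK)]

    def delta_checkpoint(node_state: bytes, template_sums):
        """Return only the blocks whose checksum differs from the template."""
        delta = {}
        for idx in range(0, len(node_state), BLOCK):
            block = node_state[idx:idx + BLOCK]
            digest = hashlib.sha1(block).hexdigest()
            b = idx // BLOCK
            if b >= len(template_sums) or template_sums[b] != digest:
                delta[b] = block      # only these blocks need to be stored or sent
        return delta

    if __name__ == "__main__":
        template = bytes(64 * 1024)                   # previously saved checkpoint
        state = bytearray(template)
        state[10_000:10_004] = b"\x01\x02\x03\x04"    # a small change on this node
        sums = block_checksums(template)
        print("changed blocks:", sorted(delta_checkpoint(bytes(state), sums)))
    ```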

  2. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  3. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests

  4. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    Full Text Available As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, a small self-weight/load ratio, good dynamic behavior, and easy control; hence its range of application domains is being extended. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematic solution and the limitation of link lengths has been introduced. This paper analyses the position workspace and orientation workspace of a parallel robot with six degrees of freedom. The results show that changing the length of the branches of the parallel mechanism is the main means of increasing or decreasing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but will change its position.

  5. "Feeling" Series and Parallel Resistances.

    Science.gov (United States)

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
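
    The combination rules being "felt" with the straws are R_series = R1 + R2 + ... and 1/R_parallel = 1/R1 + 1/R2 + ...; a two-function helper makes the arithmetic concrete.

    ```python
    def series(*resistances):
        # Resistors in series simply add.
        return sum(resistances)

    def parallel(*resistances):
        # Reciprocals add for resistors in parallel.
        return 1.0 / sum(1.0 / r for r in resistances)

    # Two 100-ohm resistors: 200 ohms in series, 50 ohms in parallel --
    # the wider "parallel" straw bundle lets more current (air) flow.
    print(series(100, 100), parallel(100, 100))
    ```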

  6. Parallel encoders for pixel detectors

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1991-01-01

    A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs.

  7. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  8. Event monitoring of parallel computations

    Directory of Open Access Journals (Sweden)

    Gruzlikov Alexander M.

    2015-06-01

    Full Text Available The paper considers the monitoring of parallel computations for detection of abnormal events. It is assumed that computations are organized according to an event model, and monitoring is based on specific test sequences

  9. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  10. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  11. The advertising strategies

    Institute of Scientific and Technical Information of China (English)

    YAKOUBI Mohamed Lamine

    2013-01-01

    We will try to demonstrate, by presenting the various advertising creation strategies and their evolution, how advertising communication has passed from a vision or strategy focused on the product to a vision focused on the brand. The first advertising strategy applied by advertising agencies was the "Unique Selling Proposition"; it focused only on the product's advantages, and its philosophy dominated the advertising world, through its various evolutions, until the nineties, although this leaves aside the introduction of the new advertising strategies that brought a more brand-oriented philosophy to the field.

  12. Curious parallels and curious connections--phylogenetic thinking in biology and historical linguistics.

    Science.gov (United States)

    Atkinson, Quentin D; Gray, Russell D

    2005-08-01

    In The Descent of Man (1871), Darwin observed "curious parallels" between the processes of biological and linguistic evolution. These parallels mean that evolutionary biologists and historical linguists seek answers to similar questions and face similar problems. As a result, the theory and methodology of the two disciplines have evolved in remarkably similar ways. In addition to Darwin's curious parallels of process, there are a number of equally curious parallels and connections between the development of methods in biology and historical linguistics. Here we briefly review the parallels between biological and linguistic evolution and contrast the historical development of phylogenetic methods in the two disciplines. We then look at a number of recent studies that have applied phylogenetic methods to language data and outline some current problems shared by the two fields.

  13. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  14. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  15. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  16. An Efficient Parallel Multi-Scale Segmentation Method for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Haiyan Gu

    2018-04-01

    Full Text Available Remote sensing (RS) image segmentation is an essential step in geographic object-based image analysis (GEOBIA) to ultimately derive “meaningful objects”. While many segmentation methods exist, most of them are not efficient for large data sets. Thus, the goal of this research is to develop an efficient parallel multi-scale segmentation method for RS imagery by combining graph theory and the fractal net evolution approach (FNEA). Specifically, a minimum spanning tree (MST) algorithm in graph theory is proposed to be combined with a minimum heterogeneity rule (MHR) algorithm that is used in FNEA. The MST algorithm is used for the initial segmentation while the MHR algorithm is used for object merging. An efficient implementation of the segmentation strategy is presented using data partition and the “reverse searching-forward processing” chain based on message passing interface (MPI) parallel technology. Segmentation results of the proposed method using images from multiple sensors (airborne, SPECIM AISA EAGLE II, WorldView-2, RADARSAT-2) and different selected landscapes (residential/industrial, residential/agriculture) covering four test sites indicated its efficiency in accuracy and speed. We conclude that the proposed method is applicable and efficient for the segmentation of a variety of RS imagery (airborne optical, satellite optical, SAR, high-spectral), while the accuracy is comparable with that of the FNEA method.
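
    The first building block, an MST-based initial segmentation, can be sketched on a toy grey-level image; the edge weights and cut threshold below are simplified stand-ins, and the MHR merging stage and MPI data partition are omitted for brevity.

    ```python
    import numpy as np
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

    def mst_segment(img, cut=0.5):
        """Initial segmentation: 4-neighbour graph, MST, cut heavy MST edges."""
        h, w = img.shape
        idx = np.arange(h * w).reshape(h, w)
        rows, cols, weights = [], [], []
        for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
            rows.append(a.ravel())
            cols.append(b.ravel())
            # Small epsilon keeps zero-difference edges explicit in the sparse graph.
            weights.append(np.abs(img.ravel()[a.ravel()]
                                  - img.ravel()[b.ravel()]) + 1e-3)
        g = coo_matrix((np.concatenate(weights),
                        (np.concatenate(rows), np.concatenate(cols))),
                       shape=(h * w, h * w))
        mst = minimum_spanning_tree(g).tocoo()
        keep = mst.data <= cut                 # cut MST edges above the threshold
        pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                            shape=mst.shape)
        _, labels = connected_components(pruned, directed=False)
        return labels.reshape(h, w)

    if __name__ == "__main__":
        img = np.zeros((8, 8))
        img[:, 4:] = 1.0                       # two flat regions -> two segments
        print(mst_segment(img))
    ```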

  17. An object-oriented programming paradigm for parallelization of computational fluid dynamics

    International Nuclear Information System (INIS)

    Ohta, Takashi.

    1997-03-01

    We propose an object-oriented programming paradigm for the parallelization of scientific computing programs, and show that the approach can be a very useful strategy. Generally, parallelization of scientific programs tends to be complicated and unportable due to the specific requirements of each parallel computer or compiler. In this paper, we show that an object-oriented programming design, which separates the parallel processing parts from the solver of the application, can achieve a large improvement in the maintainability of the codes, as well as high portability. We design a program for the two-dimensional Euler equations according to this paradigm, and evaluate the parallel performance on an IBM SP2. (author)

  18. A Facile and Waste-Free Strategy to Fabricate Pt-C/TiO2 Microspheres: Enhanced Photocatalytic Performance for Hydrogen Evolution

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-01-01

    Full Text Available A facile and waste-free flame thermal synthesis method was developed for preparing Pt-modified C/TiO2 microspheres (Pt-C/TiO2). The photocatalysts were characterized with X-ray diffraction, field emission scanning electron microscopy, transmission electron microscopy, ultraviolet-visible (UV-vis) diffuse reflectance spectra, X-ray photoelectron spectroscopy, and thermogravimetry analysis. The photocatalytic activity was evaluated by hydrogen evolution from water splitting under UV-vis light illumination. Benefiting from the electron-hole separation behavior and the reduced overpotential of H+/H2, remarkably enhanced hydrogen production was demonstrated, and the photocatalytic hydrogen generation from 0.4 wt% Pt-C/TiO2 increased by 22 times. This study also demonstrates that this novel and facile method is highly attractive, due to its easy operation, requirement of no post-treatment, and energy-saving features.

  19. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  20. Comparative eye-tracking evaluation of scatterplots and parallel coordinates

    Directory of Open Access Journals (Sweden)

    Rudolf Netzel

    2017-06-01

    Full Text Available We investigate task performance and reading characteristics for scatterplots (Cartesian coordinates) and parallel coordinates. In a controlled eye-tracking study, we asked 24 participants to assess the relative distance of points in multidimensional space, depending on the diagram type (parallel coordinates or a horizontal collection of scatterplots), the number of data dimensions (2, 4, 6, or 8), and the relative distance between points (15%, 20%, or 25%). For a given reference point and two target points, we instructed participants to choose the target point that was closer to the reference point in multidimensional space. We present a visual scanning model that describes different strategies to solve this retrieval task for both diagram types, and propose corresponding hypotheses that we test using task completion time, accuracy, and gaze positions as dependent variables. Our results show that scatterplots outperform parallel coordinates significantly in 2 dimensions; however, the task was solved more quickly and more accurately with parallel coordinates in 8 dimensions. The eye-tracking data further show significant differences between Cartesian and parallel coordinates, as well as between different numbers of dimensions. For parallel coordinates, there is a clear trend toward shorter fixations and longer saccades with increasing number of dimensions. Using an area-of-interest (AOI) based approach, we identify different reading strategies for each diagram type: for parallel coordinates, the participants’ gaze frequently jumped back and forth between pairs of axes, while axes were rarely focused on when viewing Cartesian coordinates. We further found that participants’ attention is biased toward the center of the whole plot for parallel coordinates and skewed toward the center/left side for Cartesian coordinates. We anticipate that these results may support the design of more effective visualizations for multidimensional data.

  1. Parallelization of MCNP 4, a Monte Carlo neutron and photon transport code system, in highly parallel distributed memory type computer

    International Nuclear Information System (INIS)

    Masukawa, Fumihiro; Takano, Makoto; Naito, Yoshitaka; Yamazaki, Takao; Fujisaki, Masahide; Suzuki, Koichiro; Okuda, Motoi.

    1993-11-01

    In order to improve the accuracy and calculation speed of shielding analyses, MCNP 4, a Monte Carlo neutron and photon transport code system, has been parallelized and its efficiency measured on the highly parallel distributed-memory computer AP1000. The code was analyzed statically and dynamically, and a suitable parallelization algorithm was determined for the shielding analysis functions of MCNP 4. This includes a strategy in which a new history is assigned dynamically to an idling processor element during execution. Furthermore, to avoid congestion in communication processing, the batch concept, in which multiple histories are processed as a unit, has been introduced. By analyzing a sample cask problem with 2,000,000 histories on the AP1000 with 512 processor elements, a parallelization efficiency of 82% was achieved, and the calculation speed was estimated to be around 50 times as fast as that of the FACOM M-780. (author)
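
    A toy sketch of the batching idea follows, assuming Python's multiprocessing as a stand-in for the AP1000 message passing: histories are grouped into batches and each batch is handed to the next idle worker, which balances load while limiting communication overhead. Nothing in the sketch comes from MCNP itself; the "transport" kernel is a placeholder.

```python
# Toy sketch of the batching idea: histories are grouped into batches, and each
# batch is handed to the next idle worker, balancing load while limiting
# scheduling overhead. multiprocessing stands in for the AP1000 message passing;
# the "transport" kernel is a placeholder, not MCNP physics.
import random
from multiprocessing import Pool

def run_batch(args):
    seed, n_histories = args
    rng = random.Random(seed)
    # placeholder tally: count histories whose random path length exceeds 1.0
    return sum(1 for _ in range(n_histories) if rng.expovariate(1.0) > 1.0)

if __name__ == "__main__":
    total, batch = 200_000, 5_000
    batches = [(seed, batch) for seed in range(total // batch)]
    with Pool(processes=4) as pool:
        # imap_unordered hands a new batch to whichever worker finishes first
        tallies = list(pool.imap_unordered(run_batch, batches))
    print("estimated fraction:", sum(tallies) / total)
```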

  2. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.
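
    As a hedged illustration of the Newton-Krylov part only, the sketch below uses SciPy's newton_krylov, which solves F(u) = 0 with an inner Krylov iteration that needs only residual evaluations (Jacobian-vector products by finite differences). The Schwarz preconditioning and the PETSc-based parallel implementation described above are not shown; the 1D test problem is an arbitrary example.

```python
# Illustration of the Newton-Krylov part only, using SciPy's newton_krylov on a
# small 1D nonlinear Poisson-type problem u'' = exp(u), u(0) = u(1) = 0.
# The Krylov inner iteration needs only residual evaluations; the Schwarz
# preconditioning and parallel implementation are not shown here.
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    n = u.size
    h = 1.0 / (n + 1)
    lap = np.empty_like(u)
    lap[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
    lap[0] = (-2.0 * u[0] + u[1]) / h**2          # boundary value u(0) = 0
    lap[-1] = (u[-2] - 2.0 * u[-1]) / h**2        # boundary value u(1) = 0
    return lap - np.exp(u)

u0 = np.zeros(100)
sol = newton_krylov(residual, u0, method="lgmres")
print("max |residual| =", float(np.abs(residual(sol)).max()))
```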

  3. Second derivative parallel block backward differentiation type ...

    African Journals Online (AJOL)

    Second derivative parallel block backward differentiation type formulas for Stiff ODEs. ... and the methods are inherently parallel and can be distributed over parallel processors. They are ...

  4. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms in terms of both the achieved coding and decoding times and the effectiveness of parallelization.

  5. Boltzmann machines as a model for parallel annealing

    NARCIS (Netherlands)

    Aarts, E.H.L.; Korst, J.H.M.

    1991-01-01

    The potential of Boltzmann machines to cope with difficult combinatorial optimization problems is investigated. A discussion of various (parallel) models of Boltzmann machines is given based on the theory of Markov chains. A general strategy is presented for solving (approximately) combinatorial

  6. Parallel fabrication of macroporous scaffolds.

    Science.gov (United States)

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, in higher throughput and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  7. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  8. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
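
    For readers unfamiliar with the method, the sketch below shows the Davidon-Fletcher-Powell update on a small quadratic test problem, where an exact line search has a closed form. The matrix-vector and outer products marked in the comments are the pieces that map naturally onto parallel hardware; the transputer decomposition from the paper is not reproduced, and the example is not taken from it.

```python
# The Davidon-Fletcher-Powell update on a small quadratic test problem, where an
# exact line search has a closed form. The matrix-vector and outer products are
# the pieces that parallelize naturally; this example is not from the paper.
import numpy as np

def dfp_quadratic(A, b, x0, iters=None):
    # minimize 0.5 * x^T A x - b^T x, gradient g = A x - b
    n = b.size
    x = x0.astype(float)
    H = np.eye(n)                       # inverse-Hessian approximation
    g = A @ x - b
    for _ in range(iters or n):
        d = -H @ g                      # search direction (mat-vec: parallelizable)
        step = -(g @ d) / (d @ A @ d)   # exact line search for a quadratic
        s = step * d
        x = x + s
        g_new = A @ x - b
        y = g_new - g
        # DFP rank-two update (outer products: parallelizable)
        H += np.outer(s, s) / (s @ y) - np.outer(H @ y, H @ y) / (y @ H @ y)
        g = g_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
print(dfp_quadratic(A, b, np.zeros(2)), np.linalg.solve(A, b))  # should agree
```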

  9. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  10. Parallelization Issues and Particle-In-Cell Codes.

    Science.gov (United States)

    Elster, Anne Cathrine

    1994-01-01

    "Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field, show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks, have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load-balance with respect to particles. However, our results demonstrate it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies, becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid-points within the same cache -line by reordering the grid indexing. This alignment produces a 25% savings in cache-hits for a 4-by-4 cache. A consideration of the input data's effect on

  11. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports the development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  12. Evolution 2.0

    DEFF Research Database (Denmark)

    Andersen, Casper; Bek-Thomsen, Jakob; Clasen, Mathias

    2013-01-01

    Studies in the history of science and education have documented that the reception and understanding of evolutionary theory is highly contingent on local factors such as school systems, cultural traditions, religious beliefs, and language. This has important implications for teaching evolution...... audiences readily available. As more and more schools require teachers to use low cost or free web-based materials, in the research community we need to take seriously how to facilitate that demand in communication strategies on evolution. This article addresses this challenge by presenting the learning...

  13. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.

  14. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  15. Parallel heat transport in integrable and chaotic magnetic fields

    Energy Technology Data Exchange (ETDEWEB)

    Castillo-Negrete, D. del; Chacon, L. [Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-8071 (United States)

    2012-05-15

    The study of transport in magnetized plasmas is a problem of fundamental interest in controlled fusion, space plasmas, and astrophysics research. Three issues make this problem particularly challenging: (i) the extreme anisotropy between the parallel (i.e., along the magnetic field), χ∥, and the perpendicular, χ⊥, conductivities (χ∥/χ⊥ may exceed 10¹⁰ in fusion plasmas); (ii) nonlocal parallel transport in the limit of small collisionality; and (iii) magnetic field line chaos, which in general complicates (and may preclude) the construction of magnetic field line coordinates. Motivated by these issues, we present a Lagrangian Green's function method to solve the local and non-local parallel transport equation applicable to integrable and chaotic magnetic fields in arbitrary geometry. The method avoids by construction the numerical pollution issues of grid-based algorithms. The potential of the approach is demonstrated with nontrivial applications to integrable (magnetic island), weakly chaotic (Devil's staircase), and fully chaotic magnetic field configurations. For the latter, numerical solutions of the parallel heat transport equation show that the effective radial transport, with local and non-local parallel closures, is non-diffusive, thus casting doubts on the applicability of quasilinear diffusion descriptions. General conditions for the existence of non-diffusive, multivalued flux-gradient relations in the temperature evolution are derived.

  16. Modelling and parallel calculation of a kinetic boundary layer

    International Nuclear Information System (INIS)

    Perlat, Jean Philippe

    1998-01-01

    This research thesis aims at addressing reliability and cost issues in the numerical simulation of flows in the transition regime. The first step has been to reduce the calculation cost and memory footprint of the Monte Carlo method, which is known to provide performance and reliability for rarefied regimes. Vector and parallel computers allow this objective to be reached. Here, a MIMD (multiple instruction, multiple data) machine has been used, which implements parallel calculation at different levels of parallelization. Parallelization procedures have been adapted, and results showed that parallelization by calculation-domain decomposition was far more efficient. Due to reliability issues related to the statistical nature of Monte Carlo methods, a new deterministic model was necessary to simulate gas molecules in the transition regime. New models and hyperbolic systems have therefore been studied. One is chosen which allows the thermodynamic values (density, average velocity, temperature, deformation tensor, heat flow) present in the Navier-Stokes equations to be determined, and the equations of evolution of the thermodynamic values are described for the mono-atomic case. Their numerical resolution is reported. A kinetic scheme is developed which complies with the structure of all such systems and which naturally expresses the boundary conditions. The validation of the obtained 14-moment-based model is performed on shock problems and on Couette flows.

  17. Parallel adaptation of a vectorised quantumchemical program system

    International Nuclear Information System (INIS)

    Van Corler, L.C.H.; Van Lenthe, J.H.

    1987-01-01

    Supercomputers, like the CRAY 1 or the Cyber 205, have had, and still have, a marked influence on Quantum Chemistry. Vectorization has led to a considerable increase in the performance of Quantum Chemistry programs. However, clock-cycle times more than a factor of 10 smaller than those of the present supercomputers are not to be expected. Therefore future supercomputers will have to depend on parallel structures. Recently, the first examples of such supercomputers have been installed. To be prepared for this new generation of (parallel) supercomputers, one should consider the concepts one wants to use and the kind of problems one will encounter during implementation of existing vectorized programs on those parallel systems. The authors implemented four important parts of a large quantumchemical program system (ATMOL), i.e. integrals, SCF, 4-index and Direct-CI, in the parallel environment at ECSEC (Rome, Italy). This system offers simulated parallelism on the host computer (IBM 4381) and real parallelism on at most 10 attached processors (FPS-164). Quantumchemical programs usually handle large amounts of data and very large, often sparse matrices. The transfer of that many data can cause problems concerning communication and overhead, in view of which shared memory and shared disks must be considered. The strategy and the tools that were used to parallelize the programs are shown. Also, some examples are presented to illustrate the effectiveness and performance of the system in Rome for these types of calculations.

  18. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed and the report shows how these classes can be used...
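
    A minimal sketch of the kernel in question follows, assuming CSR storage and a row-block split, which is the usual starting point for both the pure-MPI (one block per rank) and hybrid MPI-OpenMP (one block per thread) versions mentioned above. Pure NumPy/SciPy is used here; this is not the report's C++ code.

```python
# CSR sparse matrix-vector product with a row-block split, the usual starting
# point for one-block-per-rank (MPI) or one-block-per-thread (OpenMP) versions.
# Pure NumPy/SciPy here; not the report's C++ classes.
import numpy as np
from scipy.sparse import random as sparse_random

def csr_spmv_rows(indptr, indices, data, x, row_start, row_end):
    y = np.zeros(row_end - row_start)
    for i in range(row_start, row_end):           # each row block is independent
        lo, hi = indptr[i], indptr[i + 1]
        y[i - row_start] = data[lo:hi] @ x[indices[lo:hi]]
    return y

A = sparse_random(1000, 1000, density=0.01, format="csr", random_state=0)
x = np.ones(1000)
mid = 500                                         # two "workers": rows [0,500) and [500,1000)
y = np.concatenate([
    csr_spmv_rows(A.indptr, A.indices, A.data, x, 0, mid),
    csr_spmv_rows(A.indptr, A.indices, A.data, x, mid, 1000),
])
print("matches A @ x:", np.allclose(y, A @ x))
```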

  19. [Falsified medicines in parallel trade].

    Science.gov (United States)

    Muckenfuß, Heide

    2017-11-01

    The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. European-wide regulations, e.g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

  20. The parallel adult education system

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne

    2015-01-01

    for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...

  1. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  2. Parallel imaging with phase scrambling.

    Science.gov (United States)

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x² − y² shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.
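
    The toy example below illustrates only the first ingredient, phase scrambling itself: a quadratic phase applied in image space spreads the signal energy across k-space. The windowed-chirp reconstruction and coil-sensitivity estimation described above are not reproduced; the phantom and the chirp coefficient are arbitrary.

```python
# Toy illustration of phase scrambling only: a quadratic phase in image space
# spreads signal energy across k-space. The phantom and chirp coefficient are
# arbitrary; the windowed-chirp reconstruction from the paper is not shown.
import numpy as np

n = 128
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
image = np.zeros((n, n))
image[40:90, 30:100] = 1.0                        # simple rectangular phantom
alpha = 0.02                                      # chirp (quadratic phase) coefficient
scrambled = image * np.exp(1j * alpha * (x**2 + y**2))

c = slice(n // 2 - 16, n // 2 + 16)               # central 32 x 32 block of k-space
for name, img in [("plain", image), ("scrambled", scrambled)]:
    k = np.fft.fftshift(np.fft.fft2(img))
    frac = np.sum(np.abs(k[c, c])**2) / np.sum(np.abs(k)**2)
    # the scrambled image should show a lower central-energy fraction
    print(name, "central-energy fraction:", round(float(frac), 3))
```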

  3. Default Parallels Plesk Panel Page

    Science.gov (United States)


  4. Parallel plate transmission line transformer

    NARCIS (Netherlands)

    Voeten, S.J.; Brussaard, G.J.H.; Pemen, A.J.M.

    2011-01-01

    A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the

  5. Matpar: Parallel Extensions for MATLAB

    Science.gov (United States)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  6. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray

  7. Schumpeter's Evolution

    DEFF Research Database (Denmark)

    Andersen, Esben Sloth

    reworking of his basic theory of economic evolution in Development from 1934, and this reworking was continued in Cycles from 1939. Here Schumpeter also tried to handle the statistical and historical evidence on the waveform evolution of the capitalist economy. Capitalism from 1942 modified the model...

  8. Galactic evolution

    International Nuclear Information System (INIS)

    Pagel, B.

    1979-01-01

    Ideas are considered concerning the evolution of galaxies which are closely related to those of stellar evolution and the origin of elements. Using information obtained from stellar spectra, astronomers are now able to consider an underlying process to explain the distribution of various elements in the stars, gas and dust clouds of the galaxies. (U.K.)

  9. Darwinian evolution

    NARCIS (Netherlands)

    Jagers op Akkerhuis, Gerard A.J.M.; Spijkerboer, Hendrik Pieter; Koelewijn, Hans Peter

    2016-01-01

    Darwinian evolution is a central tenet in biology. Conventionally, the definition of Darwinian evolution is linked to a population-based process that can be measured by focusing on changes in DNA/allele frequencies. However, in some publications it has been suggested that selection represents a

  10. Evolution of Ore Deposits and Technology Transfer Project: Isotope and Chemical Methods in Support of the U.S. Geological Survey Science Strategy, 2003-2008

    Science.gov (United States)

    Rye, Robert O.; Johnson, Craig A.; Landis, Gary P.; Hofstra, Albert H.; Emsbo, Poul; Stricker, Craig A.; Hunt, Andrew G.; Rusk, Brian G.

    2010-01-01

    Principal functions of the U.S. Geological Survey (USGS) Mineral Resources Program are providing assessments of the location, quantity, and quality of undiscovered mineral deposits, and predicting the environmental impacts of exploration and mine development. The mineral and environmental assessments of domestic deposits are used by planners and decisionmakers to improve the stewardship of public lands and public resources. Assessments of undiscovered mineral deposits on a global scale reveal the potential availability of minerals to the United States and other countries that manufacture goods imported to the United States. These resources are of fundamental relevance to national and international economic and security policy in our globalized world economy. Performing mineral and environmental assessments requires that predictions be made of the likelihood of undiscovered deposits. The predictions are based on geologic and geoenvironmental models that are constructed for the diverse types of mineral deposits from detailed descriptions of actual deposits and detailed understanding of the processes that formed them. Over the past three decades the understanding of ore-forming processes has benefited greatly from the integration of laboratory-based geochemical tools with field observations and other data sources. Under the aegis of the Evolution of Ore Deposits and Technology Transfer Project (referred to hereinafter as the Project), a 5-year effort that terminated in 2008, the Mineral Resources Program provided state-of-the-art analytical capabilities to support applications of several related geochemical tools to ore-deposit-related studies. The analytical capabilities and scientific approaches developed within the Project have wide applicability within Earth-system science. For this reason the Project Laboratories represent a valuable catalyst for interdisciplinary collaborations of the type that should be formed in the coming years for the United States to meet

  11. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  12. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  13. The Watershed Transform : Definitions, Algorithms and Parallelization Strategies

    NARCIS (Netherlands)

    Roerdink, Jos B.T.M.; Meijster, Arnold

    2000-01-01

    The watershed transform is the method of choice for image segmentation in the field of mathematical morphology. We present a critical review of several definitions of the watershed transform and the associated sequential algorithms, and discuss various issues which often cause confusion in the

  14. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high performance multicomputers, that have emerged over the last decade which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers to be widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  15. Decomposition based parallel processing technique for efficient collaborative optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon

    2000-01-01

    In practical design studies, most designers solve multidisciplinary problems with a complex design structure. These multidisciplinary problems have hundreds of analyses and thousands of variables. The sequence of processes used to solve these problems affects the speed of the total design cycle. Thus it is very important for designers to reorder the original design processes to minimize total cost and time. This is accomplished by decomposing the large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems to raise design efficiency by using a genetic algorithm, and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology.

  16. Parallel simulation of radio-frequency plasma discharges

    International Nuclear Information System (INIS)

    Fivaz, M.; Howling, A.; Ruegsegger, L.; Schwarzenbach, W.; Baeumle, B.

    1994-01-01

    The 1D Particle-In-Cell and Monte Carlo collision code XPDP1 is used to model radio-frequency argon plasma discharges. The code runs faster on a single-user parallel system called MUSIC than on a CRAY Y-MP. The low cost of the MUSIC system allows 24-hours-per-day use, and the simulation results are available one to two orders of magnitude quicker than with a supercomputer shared with other users. The parallelization strategy and its implementation are discussed. Very good agreement is found between simulation results and measurements made in an experimental argon discharge. (author) 2 figs., 3 refs

  17. An Introduction to Parallelism, Concurrency and Acceleration (1/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Concurrency and parallelism are firm elements of any modern computing infrastructure, made even more prominent by the emergence of accelerators. These lectures offer an introduction to these important concepts. We will begin with a brief refresher of recent hardware offerings to modern-day programmers. We will then open the main discussion with an overview of the laws and practical aspects of scalability. Key parallelism data structures, patterns and algorithms will be shown. The main threats to scalability and mitigation strategies will be discussed in the context of real-life optimization problems.

  18. Carbon-encapsulated nickel-cobalt alloys nanoparticles fabricated via new post-treatment strategy for hydrogen evolution in alkaline media

    Science.gov (United States)

    Guo, Hailing; Youliwasi, Nuerguli; Zhao, Lei; Chai, Yongming; Liu, Chenguang

    2018-03-01

    This paper addresses a new post-treatment strategy for the formation of carbon-encapsulated nickel-cobalt alloy nanoparticles, in which the performance of the target products is easily controlled by changing the precursor composition, the calcination conditions (e.g., temperature and atmosphere) and the post-treatment conditions. Glassy carbon electrodes (GCE) modified by the as-obtained carbon-encapsulated mono- and bi-transition-metal nanoparticles exhibit excellent electro-catalytic activity for hydrogen production in alkaline water electrolysis. In particular, the Ni0.4Co0.6@N-Cs800-b catalyst prepared at 800 °C under an argon flow exhibited the best electrocatalytic performance towards the HER. The high HER activity of the Ni0.4Co0.6@N-Cs800-b modified electrode is related to the appropriate nickel-cobalt ratio with high crystallinity, the complete, homogeneous and highly conductive carbon layers outside of the nickel-cobalt, and the synergistic effect of the nickel-cobalt alloys, which also accelerates the electron transfer process.

  19. Shape evolution of new-phased lepidocrocite VOOH from single-shelled to double-shelled hollow nanospheres on the basis of programmed reaction-temperature strategy.

    Science.gov (United States)

    Wu, Changzheng; Zhang, Xiaodong; Ning, Bo; Yang, Jinlong; Xie, Yi

    2009-07-06

    Solid templates have long been regarded as one of the most promising ways to achieve single-shelled hollow nanostructures; however, few effective methods for the construction of multishelled hollow objects from their solid template counterparts have been developed. We report here, for the first time, a novel and convenient route to synthesizing double-shelled hollow spheres from the solid templates via programming the reaction-temperature procedures. The programmed-temperature strategy developed in this work then provides an essential and general access to multishelled hollow nanostructures based on the designed extension of single-shelled hollow objects, independent of their outside contours, such as tubes, hollow spheres, and cubes. Starting from the V(OH)₂NH₂ solid templates, we show that the relationship between the hollowing rate and the reaction temperature obeys the van't Hoff rule and the Arrhenius activation-energy equation, revealing that it is the chemical reaction rather than the diffusion process that guided the whole hollowing process, despite the fact that a coupled reaction/diffusion process is involved. Using the double-shelled hollow spheres as the PCM (CaCl₂·6H₂O) matrix grants much better thermal-storage stability than the nanoparticle counterpart, revealing that the designed nanostructures can give rise to significant improvements in energy-saving performance in future "smart house" systems.

  20. Evolution of CMS Workload Management Towards Multicore Job Support

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Calero Yzquierdo, A. [Madrid, CIEMAT; Hernández, J. M. [Madrid, CIEMAT; Khan, F. A. [Quaid-i-Azam U.; Letts, J. [UC, San Diego; Majewski, K. [Fermilab; Rodrigues, A. M. [Fermilab; McCrea, A. [UC, San Diego; Vaandering, E. [Fermilab

    2015-12-23

    The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of LHC Run 2. High-pileup complex-collision events represent a challenge for traditional sequential programming in terms of memory and processing-time budget. The CMS data production and processing framework is introducing the parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting single-core processing for other tasks difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single-core and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, responsible during 2015 for the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems in order to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.

  1. Optimization Algorithms for Calculation of the Joint Design Point in Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1992-01-01

    In large structures it is often necessary to estimate the reliability of the system by use of parallel systems. Optimality criteria-based algorithms for calculation of the joint design point in a parallel system are described and efficient active set strategies are developed. Three possible...

  2. Progress in strategies for sequence diversity library creation for ...

    African Journals Online (AJOL)

    As the simplest technique of protein engineering, directed evolution has been ... An experiment of directed evolution comprises mutant libraries creation and ... evolution, sequence diversity creation, novel strategy, computational design, ...

  3. Distributed Memory Parallel Computing with SEAWAT

    Science.gov (United States)

    Verkaik, J.; Huizer, S.; van Engelen, J.; Oude Essink, G.; Ram, R.; Vuik, K.

    2017-12-01

    Fresh groundwater reserves in coastal aquifers are threatened by sea-level rise, extreme weather conditions, increasing urbanization and associated groundwater extraction rates. To counteract these threats, accurate high-resolution numerical models are required to optimize the management of these precious reserves. The major model drawbacks are long run times and large memory requirements, limiting the predictive power of these models. Distributed memory parallel computing is an efficient technique for reducing run times and memory requirements, where the problem is divided over multiple processor cores. A new Parallel Krylov Solver (PKS) for SEAWAT is presented. PKS has recently been applied to MODFLOW and includes Conjugate Gradient (CG) and Biconjugate Gradient Stabilized (BiCGSTAB) linear accelerators. Both accelerators are preconditioned by an overlapping additive Schwarz preconditioner in a way that: a) subdomains are partitioned using Recursive Coordinate Bisection (RCB) load balancing, b) each subdomain uses local memory only and communicates with other subdomains by Message Passing Interface (MPI) within the linear accelerator, c) it is fully integrated in SEAWAT. Within SEAWAT, the PKS-CG solver replaces the Preconditioned Conjugate Gradient (PCG) solver for solving the variable-density groundwater flow equation and the PKS-BiCGSTAB solver replaces the Generalized Conjugate Gradient (GCG) solver for solving the advection-diffusion equation. PKS supports the third-order Total Variation Diminishing (TVD) scheme for computing advection. Benchmarks were performed on the Dutch national supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 128 cores, for a synthetic 3D Henry model (100 million cells) and the real-life Sand Engine model (~10 million cells). The Sand Engine model was used to investigate the potential effect of the long-term morphological evolution of a large sand replenishment and climate change on fresh groundwater resources.

  4. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with graphical processing units, have enabled parallelism to be exploited broadly. Compilers are being updated to address the resulting synchronization and threading challenges. Appropriate program and algorithm classification can greatly help software engineers identify opportunities for effective parallelization. In the present work we investigate current species for the classification of algorithms; related work on classification is discussed along with a comparison of the issues that challenge classification. A set of algorithms is chosen whose structures match different issues and which perform a given task. We tested these algorithms utilizing existing automatic species-extraction tools along with the Bones compiler. We added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant data which is not captured by the original species of algorithms. We implemented these new ideas into the tool, enabling automatic characterization of program code.

  5. Spectral analysis of parallel incomplete factorizations with implicit pseudo­-overlap

    NARCIS (Netherlands)

    Magolu monga Made, Mardochée; Vorst, H.A. van der

    2000-01-01

    Two general parallel incomplete factorization strategies are investigated. The techniques may be interpreted as generalized domain decomposition methods. In contrast to classical domain decomposition methods, adjacent subdomains exchange data during the construction of the incomplete

  6. A Scalable Parallel PWTD-Accelerated SIE Solver for Analyzing Transient Scattering from Electrically Large Objects

    KAUST Repository

    Liu, Yang; Yucel, Abdulkadir; Bagci, Hakan; Michielssen, Eric

    2015-01-01

    of processors by leveraging two mechanisms: (i) a hierarchical parallelization strategy to evenly distribute the computation and memory loads at all levels of the PWTD tree among processors, and (ii) a novel asynchronous communication scheme to reduce the cost

  7. Chromosomal Evolution in Chiroptera

    OpenAIRE

    Sotero-Caio, Cibele G.; Baker, Robert J.; Volleth, Marianne

    2017-01-01

    Chiroptera is the second largest order among mammals, with over 1300 species in 21 extant families. The group is extremely diverse in several aspects of its natural history, including dietary strategies, ecology, behavior and morphology. Bat genomes show ample chromosome diversity (from 2n = 14 to 62). As with other mammalian orders, Chiroptera is characterized by clades with low, moderate and extreme chromosomal change. In this article, we will discuss trends of karyotypic evolution within d...

  8. Musical emotions: Functions, origins, evolution

    Science.gov (United States)

    Perlovsky, Leonid

    2010-03-01

    Theories of music origins and the role of musical emotions in the mind are reviewed. Most existing theories contradict each other, and cannot explain mechanisms or roles of musical emotions in workings of the mind, nor evolutionary reasons for music origins. Music seems to be an enigma. Nevertheless, a synthesis of cognitive science and mathematical models of the mind has been proposed describing a fundamental role of music in the functioning and evolution of the mind, consciousness, and cultures. The review considers ancient theories of music as well as contemporary theories advanced by leading authors in this field. It addresses one hypothesis that promises to unify the field and proposes a theory of musical origin based on a fundamental role of music in cognition and evolution of consciousness and culture. We consider a split in the vocalizations of proto-humans into two types: one less emotional and more concretely-semantic, evolving into language, and the other preserving emotional connections along with semantic ambiguity, evolving into music. The proposed hypothesis departs from other theories in considering specific mechanisms of the mind-brain, which required the evolution of music parallel with the evolution of cultures and languages. Arguments are reviewed that the evolution of language toward becoming the semantically powerful tool of today required emancipation from emotional encumbrances. The opposite, no less powerful mechanisms required a compensatory evolution of music toward more differentiated and refined emotionality. The need for refined music in the process of cultural evolution is grounded in fundamental mechanisms of the mind. This is why today's human mind and cultures cannot exist without today's music. The reviewed hypothesis gives a basis for future analysis of why different evolutionary paths of languages were paralleled by different evolutionary paths of music. Approaches toward experimental verification of this hypothesis in

  9. Advanced Material Strategies for Next-Generation Additive Manufacturing

    Science.gov (United States)

    Chang, Jinke; He, Jiankang; Zhou, Wenxing; Lei, Qi; Li, Xiao; Li, Dichen

    2018-01-01

    Additive manufacturing (AM) has drawn tremendous attention in various fields. In recent years, great efforts have been made to develop novel additive manufacturing processes such as micro-/nano-scale 3D printing, bioprinting, and 4D printing for the fabrication of complex 3D structures with high resolution, living components, and multimaterials. The development of advanced functional materials is important for the implementation of these novel additive manufacturing processes. Here, a state-of-the-art review on advanced material strategies for novel additive manufacturing processes is provided, mainly including conductive materials, biomaterials, and smart materials. The advantages, limitations, and future perspectives of these materials for additive manufacturing are discussed. It is believed that the innovations of material strategies in parallel with the evolution of additive manufacturing processes will provide numerous possibilities for the fabrication of complex smart constructs with multiple functions, which will significantly widen the application fields of next-generation additive manufacturing. PMID:29361754

  10. Advanced Material Strategies for Next-Generation Additive Manufacturing.

    Science.gov (United States)

    Chang, Jinke; He, Jiankang; Mao, Mao; Zhou, Wenxing; Lei, Qi; Li, Xiao; Li, Dichen; Chua, Chee-Kai; Zhao, Xin

    2018-01-22

    Additive manufacturing (AM) has drawn tremendous attention in various fields. In recent years, great efforts have been made to develop novel additive manufacturing processes such as micro-/nano-scale 3D printing, bioprinting, and 4D printing for the fabrication of complex 3D structures with high resolution, living components, and multimaterials. The development of advanced functional materials is important for the implementation of these novel additive manufacturing processes. Here, a state-of-the-art review on advanced material strategies for novel additive manufacturing processes is provided, mainly including conductive materials, biomaterials, and smart materials. The advantages, limitations, and future perspectives of these materials for additive manufacturing are discussed. It is believed that the innovations of material strategies in parallel with the evolution of additive manufacturing processes will provide numerous possibilities for the fabrication of complex smart constructs with multiple functions, which will significantly widen the application fields of next-generation additive manufacturing.

  11. Advanced Material Strategies for Next-Generation Additive Manufacturing

    Directory of Open Access Journals (Sweden)

    Jinke Chang

    2018-01-01

    Full Text Available Additive manufacturing (AM) has drawn tremendous attention in various fields. In recent years, great efforts have been made to develop novel additive manufacturing processes such as micro-/nano-scale 3D printing, bioprinting, and 4D printing for the fabrication of complex 3D structures with high resolution, living components, and multimaterials. The development of advanced functional materials is important for the implementation of these novel additive manufacturing processes. Here, a state-of-the-art review on advanced material strategies for novel additive manufacturing processes is provided, mainly including conductive materials, biomaterials, and smart materials. The advantages, limitations, and future perspectives of these materials for additive manufacturing are discussed. It is believed that the innovations of material strategies in parallel with the evolution of additive manufacturing processes will provide numerous possibilities for the fabrication of complex smart constructs with multiple functions, which will significantly widen the application fields of next-generation additive manufacturing.

  12. Stellar evolution

    CERN Document Server

    Meadows, A J

    2013-01-01

    Stellar Evolution, Second Edition covers the significant advances in the understanding of birth, life, and death of stars. This book is divided into nine chapters and begins with a description of the characteristics of stars according to their brightness, distance, size, mass, age, and chemical composition. The next chapters deal with the families, structure, and birth of stars. These topics are followed by discussions of the chemical composition and the evolution of main-sequence stars. A chapter focuses on the unique features of the sun as a star, including its evolution, magnetic fields, act

  13. Structural synthesis of parallel robots

    CERN Document Server

    Gogu, Grigore

    This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators.  This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1.  Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...

  14. GPU Parallel Bundle Block Adjustment

    Directory of Open Access Journals (Sweden)

    ZHENG Maoteng

    2017-09-01

    Full Text Available To deal with massive data in photogrammetry, we introduce GPU parallel computing technology. The preconditioned conjugate gradient and inexact Newton methods are also applied to reduce the number of iterations when solving the normal equation. A brand-new bundle adjustment workflow is developed to exploit GPU parallel computing. Our method avoids the storage and inversion of the large normal matrix by computing it in real time. The proposed method not only greatly decreases the memory requirement of the normal matrix but also greatly improves the efficiency of bundle adjustment, while achieving the same accuracy as the conventional method. Preliminary experimental results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be done in only 1.5 minutes with sub-pixel accuracy.
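
    As a rough illustration of the numerical idea named in this abstract, the sketch below solves the normal equation with a matrix-free preconditioned conjugate gradient: the normal matrix is never stored, and its action is computed on the fly as J^T(J p). The Jacobi preconditioner, the toy random Jacobian, and all function names are illustrative assumptions rather than details taken from the paper, and no GPU-specific code is shown.

    import numpy as np

    def pcg_normal_equations(J, b, apply_M_inv, tol=1e-8, max_iter=100):
        """Solve (J^T J) x = b by preconditioned conjugate gradient.

        The normal matrix J^T J is never formed explicitly; its action is
        computed on the fly as J^T (J p), mirroring the memory-saving idea
        in the abstract.  apply_M_inv applies the preconditioner."""
        x = np.zeros(J.shape[1])
        r = b - J.T @ (J @ x)                # initial residual
        z = apply_M_inv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = J.T @ (J @ p)               # matrix-free normal-matrix product
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = apply_M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Toy usage: a random Jacobian stands in for the bundle-adjustment Jacobian.
    rng = np.random.default_rng(0)
    J = rng.standard_normal((200, 50))
    b = J.T @ rng.standard_normal(200)
    diag = np.einsum('ij,ij->j', J, J)       # Jacobi preconditioner: diag(J^T J)
    x = pcg_normal_equations(J, b, lambda r: r / diag)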

  15. A tandem parallel plate analyzer

    International Nuclear Information System (INIS)

    Hamada, Y.; Fujisawa, A.; Iguchi, H.; Nishizawa, A.; Kawasumi, Y.

    1996-11-01

    By a new modification of the parallel plate analyzer, second-order focus is obtained at an arbitrary injection angle. An analyzer of this kind with a small injection angle has the advantage of a small operating voltage compared with the Proca and Green analyzer, where the injection angle is 30 degrees. Thus, the newly proposed analyzer will be very useful for precise energy measurements of high-energy particles in the MeV range. (author)

  16. High-speed parallel counter

    International Nuclear Information System (INIS)

    Gus'kov, B.N.; Kalinnikov, V.A.; Krastev, V.R.; Maksimov, A.N.; Nikityuk, N.M.

    1985-01-01

    This paper describes a high-speed parallel counter that has 31 inputs and 15 outputs and is implemented with series-500 integrated circuits. The counter is designed for fast sampling of events according to the number of particles that pass simultaneously through the hodoscopic plane of the detector. The minimum delay of the output signals relative to the inputs is 43 nsec. The duration of the output signals can be varied from 75 to 120 nsec
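
    The abstract gives no circuit schematics; purely as a software illustration of what such a device computes, the following Python sketch reduces 31 one-bit inputs through a tree of full adders, the classic structure of a combinational parallel counter. The function names and the example input pattern are assumptions for illustration.

    def full_adder(a, b, c):
        """One-bit full adder: returns (sum, carry)."""
        s = a ^ b ^ c
        carry = (a & b) | (a & c) | (b & c)
        return s, carry

    def parallel_count(bits):
        """Count how many input bits are 1 using a tree of full adders,
        the classic structure of a combinational parallel counter.
        'bits' is a list of 0/1 values (31 of them in the usage below)."""
        columns = [list(bits)]          # columns[w] holds one-bit signals of weight 2**w
        result_bits = []
        weight = 0
        while weight < len(columns):
            col = columns[weight]
            # Reduce the column three signals at a time; carries feed the
            # next (higher-weight) column, exactly as in a hardware adder tree.
            while len(col) >= 3:
                a, b, c = col.pop(), col.pop(), col.pop()
                s, carry = full_adder(a, b, c)
                col.append(s)
                if weight + 1 == len(columns):
                    columns.append([])
                columns[weight + 1].append(carry)
            if len(col) == 2:           # finish an even column with a half adder
                a, b = col.pop(), col.pop()
                s, carry = full_adder(a, b, 0)
                col.append(s)
                if weight + 1 == len(columns):
                    columns.append([])
                columns[weight + 1].append(carry)
            result_bits.append(col[0] if col else 0)
            weight += 1
        return sum(b << i for i, b in enumerate(result_bits))

    # 31 one-bit detector signals; the counter reports how many fired at once.
    inputs = [1, 0, 1, 1, 0] * 6 + [1]
    assert parallel_count(inputs) == sum(inputs) == 19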

  17. An anthropologist in parallel structure

    Directory of Open Access Journals (Sweden)

    Noelle Molé Liston

    2016-08-01

    Full Text Available The essay examines the parallels between Molé Liston's studies on labor and precarity in Italy and the United States' anthropology job market. Probing the way economic shifts reshaped the field of the anthropology of Europe in the late 2000s, the piece explores how the neoliberalization of the American academy increased the value of studying the hardships and daily lives of non-western populations in Europe.

  18. Combinatorics of spreads and parallelisms

    CERN Document Server

    Johnson, Norman

    2010-01-01

    Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2,

  19. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms such as SENSE and GRAPPA and their variants, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives we use Landweber-Kaczmarz iteration and, in order to improve the overall results, additional sparsity constraints.
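
    The abstract names the ingredients without formulas; the sketch below shows how a cyclic Landweber-Kaczmarz sweep combined with soft-thresholding (a simple stand-in for the additional sparsity constraints) can look for a linearised toy problem with generic random operators. The operators, step size, and threshold are assumptions for illustration only; the actual parallel-MRI forward operator is non-linear in the coil sensitivities and is not modelled here.

    import numpy as np

    def soft_threshold(x, lam):
        """Proximal map of the l1 norm, a simple stand-in for the
        'additional sparsity constraints' mentioned in the abstract."""
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def landweber_kaczmarz(ops, data, step=0.5, lam=1e-3, sweeps=50):
        """Cyclic Landweber-Kaczmarz iteration for a set of linearised
        'coil' operators A_j with measured data y_j.

        Each sweep visits the operators one after another, applies the
        gradient-type update x <- x + step * A_j^T (y_j - A_j x), and then
        soft-thresholds the iterate to promote sparsity."""
        x = np.zeros(ops[0].shape[1])
        for _ in range(sweeps):
            for A, y in zip(ops, data):
                x = x + step * (A.T @ (y - A @ x))
                x = soft_threshold(x, lam)
        return x

    # Toy usage: three random operators sampling an unknown sparse vector.
    rng = np.random.default_rng(1)
    x_true = np.zeros(64)
    x_true[[5, 20, 41]] = [1.0, -2.0, 0.5]
    ops = [rng.standard_normal((32, 64)) / 10 for _ in range(3)]
    data = [A @ x_true for A in ops]
    x_rec = landweber_kaczmarz(ops, data)
    print("reconstruction error:", np.linalg.norm(x_rec - x_true))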

  20. Wakefield calculations on parallel computers

    International Nuclear Information System (INIS)

    Schoessow, P.

    1990-01-01

    The use of parallelism in the solution of wakefield problems is illustrated for two different computer architectures (SIMD and MIMD). Results are given for finite difference codes which have been implemented on a Connection Machine and an Alliant FX/8 and which are used to compute wakefields in dielectric-loaded structures. Benchmarks on code performance are presented for both cases. 4 refs., 3 figs., 2 tabs
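
    No algorithmic detail is given in the abstract; the minimal sketch below advances a 1-D scalar wave equation with a leapfrog finite-difference stencil using whole-array NumPy operations, merely to illustrate the kind of regular stencil computation that maps naturally onto SIMD machines such as the Connection Machine. Grid size, Courant number, and the initial pulse are illustrative assumptions.

    import numpy as np

    # Minimal data-parallel finite-difference sketch (not the paper's code): a 1-D
    # scalar wave equation advanced with a leapfrog stencil.  Whole-array NumPy
    # operations stand in for SIMD-style data parallelism.
    nx, nt = 400, 600
    c = 0.9                                   # Courant number (stability requires c <= 1)
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    u[nx // 2] = 1.0                          # initial pulse in the middle of the grid

    for _ in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]   # second difference at all interior points
        u_next = 2.0 * u - u_prev + c * c * lap      # leapfrog time update
        u_prev, u = u, u_next                        # rotate time levels; ends stay fixed at 0

    print("max |u| after", nt, "steps:", float(np.abs(u).max()))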