WorldWideScience

Sample records for parallel evolution strategy

  1. Kinetic-Monte-Carlo-Based Parallel Evolution Simulation Algorithm of Dust Particles

    Directory of Open Access Journals (Sweden)

    Xiaomei Hu

    2014-01-01

The evolution simulation of dust particles provides an important way to analyze the impact of dust on the environment. A kinetic-Monte-Carlo-based (KMC) parallel algorithm is proposed to simulate the evolution of dust particles. In this parallel evolution simulation algorithm, a data-distribution scheme and a communication-optimization strategy are introduced to balance the load of every process and to reduce the communication expense among processes. The experimental results show that the diffusion, sedimentation, and resuspension of dust particles in a virtual campus are simulated and that the simulation time is shortened by the parallel algorithm, which overcomes the limitations of serial computing and makes the simulation of large-scale virtual environments possible.
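A kinetic Monte Carlo step of this kind can be illustrated with a minimal serial sketch (the events, rates, and state below are invented for illustration and are not taken from the paper):

```python
import random

def kmc_step(events, t):
    """Advance one kinetic Monte Carlo (Gillespie-style) step.

    events: list of (rate, action) pairs, where action is a callable
    applying the event (e.g. diffusion, sedimentation, or resuspension
    of a dust particle). Returns the updated simulation time.
    """
    total = sum(rate for rate, _ in events)
    # Choose an event with probability proportional to its rate.
    r = random.uniform(0.0, total)
    acc = 0.0
    for rate, action in events:
        acc += rate
        if r <= acc:
            action()
            break
    # Advance time by an exponentially distributed waiting time.
    return t + random.expovariate(total)

# Illustrative single-particle state and events (rates are made up).
state = {"height": 5}
events = [
    (1.0, lambda: state.update(height=state["height"] + 1)),           # resuspension
    (0.5, lambda: state.update(height=max(0, state["height"] - 1))),   # sedimentation
]
t = kmc_step(events, 0.0)
```

In a parallel version, each process would run such steps on its own spatial subdomain, which is where the paper's load-balancing and communication-optimization concerns arise.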

  2. Contemporary evolution strategies

    CERN Document Server

    Bäck, Thomas; Krause, Peter

    2013-01-01

Evolution strategies have more than 50 years of history in the field of evolutionary computation. Since the early 1990s, many algorithmic variations of evolution strategies have been developed, characterized by the fact that they use the so-called derandomization concept for strategy parameter adaptation. Most importantly, the covariance matrix adaptation strategy (CMA-ES) and its successors are the key representatives of this group of contemporary evolution strategies. This book provides an overview of the key algorithm developments between 1990 and 2012, including brief descriptions of the a...
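For orientation, the simplest member of the family, a (1+1)-ES with the classic 1/5th success rule (a precursor of the derandomized strategies the book covers), can be sketched in a few lines; the constants and function names here are illustrative:

```python
import random

def one_plus_one_es(f, x, sigma=1.0, iters=200):
    """Minimize f with a (1+1)-ES using a 1/5th-success-rule step-size update.

    x: initial solution (list of floats); sigma: mutation step size.
    The single parent is replaced only by an offspring that is no worse.
    """
    fx = f(x)
    for _ in range(iters):
        # Mutate every coordinate with Gaussian noise of scale sigma.
        y = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:        # success: keep offspring, enlarge step size
            x, fx = y, fy
            sigma *= 1.22
        else:               # failure: shrink step size
            sigma *= 0.82
    return x, fx

# Example: minimize the sphere function from a distant start point.
sphere = lambda v: sum(vi * vi for vi in v)
best, value = one_plus_one_es(sphere, [5.0, -3.0])
```

CMA-ES generalizes this idea by adapting a full covariance matrix of the mutation distribution rather than a single scalar step size.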

  3. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  4. Mixed-time parallel evolution in multiple quantum NMR experiments: sensitivity and resolution enhancement in heteronuclear NMR

    International Nuclear Information System (INIS)

    Ying Jinfa; Chill, Jordan H.; Louis, John M.; Bax, Ad

    2007-01-01

A new strategy is demonstrated that simultaneously enhances sensitivity and resolution in three- or higher-dimensional heteronuclear multiple quantum NMR experiments. The approach, referred to as mixed-time parallel evolution (MT-PARE), utilizes evolution of chemical shifts of the spins participating in the multiple quantum coherence in parallel, thereby reducing signal losses relative to sequential evolution. The signal in a given PARE dimension, t1, is of a non-decaying constant-time nature for a duration that depends on the length of t2, and vice versa, prior to the onset of conventional exponential decay. Line shape simulations for the 1H-15N PARE indicate that this strategy significantly enhances both sensitivity and resolution in the indirect 1H dimension, and that the unusual signal decay profile results in acceptable line shapes. Incorporation of the MT-PARE approach into a 3D HMQC-NOESY experiment for measurement of HN-HN NOEs in KcsA in SDS micelles at 50 °C was found to increase the experimental sensitivity by a factor of 1.7±0.3 with a concomitant resolution increase in the indirectly detected 1H dimension. The method is also demonstrated for a situation in which homonuclear 13C-13C decoupling is required while measuring weak H3'-2'OH NOEs in an RNA oligomer.

  5. Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study

    Directory of Open Access Journals (Sweden)

    Hari Radhakrishnan

    2015-01-01

This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO) and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP) facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared-memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the performance bottleneck was our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary-tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure; Intel provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
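The binary-tree summation that replaced the sequential reduction can be illustrated serially; the following Python sketch shows only the combining pattern (the paper's implementation uses Fortran coarrays, where images at distance `step` would exchange partial sums concurrently):

```python
def tree_reduce(values):
    """Pairwise (binary-tree) summation in O(log n) combining rounds.

    Each round halves the number of live partial sums, which is why
    the pattern parallelizes well: all combines within a round are
    independent and can proceed concurrently on separate images.
    """
    partial = list(values)
    n = len(partial)
    step = 1
    while step < n:
        for i in range(0, n - step, 2 * step):
            partial[i] += partial[i + step]   # image i receives from image i+step
        step *= 2
    return partial[0]

assert tree_reduce([1, 2, 3, 4, 5]) == 15
```

With p images, the sequential version needs p - 1 dependent additions on one image, while the tree needs only about log2(p) rounds, which accounts for the scalability difference the authors observed.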

  6. Parallel vs. Convergent Evolution in Domestication and Diversification of Crops in the Americas

    Directory of Open Access Journals (Sweden)

    Barbara Pickersgill

    2018-05-01

Domestication involves changes in various traits of the phenotype in response to human selection. Diversification may accompany or follow domestication, and results in variants within the crop adapted to different uses by humans or different agronomic conditions. Similar domestication and diversification traits may be shared by closely related species (parallel evolution) or by distantly related species (convergent evolution). Many of these traits are produced by complex genetic networks or long biosynthetic pathways that are extensively conserved even in distantly related species. Similar phenotypic changes in different species may be controlled by homologous genes (parallel evolution at the genetic level) or non-homologous genes (convergent evolution at the genetic level). It has been suggested that parallel evolution may be more frequent among closely related species, among diversification rather than domestication traits, or among traits produced by simple metabolic pathways. Crops domesticated in the Americas span a spectrum of genetic relatedness, have been domesticated for diverse purposes, and have responded to human selection by changes in many different traits, so they provide examples of both parallel and convergent evolution at various levels. However, despite the current explosion in relevant information, data are still insufficient to provide quantitative or conclusive assessments of the relative roles of these two processes in domestication and diversification.

  7. Molecular pathways to parallel evolution: I. Gene nexuses and their morphological correlates.

    Science.gov (United States)

    Zuckerkandl, E

    1994-12-01

Aspects of the regulatory interactions among genes are probably as old as most genes are themselves. Correspondingly, similar predispositions to changes in such interactions must have existed for long evolutionary periods. Features of the structure and the evolution of the system of gene regulation furnish the background necessary for a molecular understanding of parallel evolution. Patently "unrelated" organs, such as the fat body of a fly and the liver of a mammal, can exhibit fractional homology, a fraction expected to become subject to quantitation. This also seems to hold for different organs in the same organism, such as wings and legs of a fly. In informational macromolecules, on the other hand, homology is indeed all or none. In the quite different case of organs, analogy is expected usually to represent attenuated homology. Many instances of putative convergence are likely to turn out to be predominantly parallel evolution, presumably including the case of the vertebrate and cephalopod eyes. Homology in morphological features reflects a similarity in networks of active genes. Similar nexuses of active genes can be established in cells of different embryological origins. Thus, parallel development can be considered a counterpart to parallel evolution. Specific macromolecular interactions leading to the regulation of the c-fos gene are given as an example of a "controller node" defined as a regulatory unit. Quantitative changes in gene control are distinguished from relational changes, and frequent parallelism in quantitative changes is noted in Drosophila enzymes. Evolutionary reversions in quantitative gene expression are also expected. The evolution of relational patterns is attributed to several distinct mechanisms, notably the shuffling of protein domains. The growth of such patterns may in part be brought about by a particular process of compensation for "controller gene diseases," a process that would spontaneously tend to lead to increased regulatory ...

  8. Academic training: From Evolution Theory to Parallel and Distributed Genetic Programming

    CERN Multimedia

    2007-01-01

    2006-2007 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 15, 16 March From 11:00 to 12:00 - Main Auditorium, bldg. 500 From Evolution Theory to Parallel and Distributed Genetic Programming F. FERNANDEZ DE VEGA / Univ. of Extremadura, SP Lecture No. 1: From Evolution Theory to Evolutionary Computation Evolutionary computation is a subfield of artificial intelligence (more particularly computational intelligence) involving combinatorial optimization problems, which are based to some degree on the evolution of biological life in the natural world. In this tutorial we will review the source of inspiration for this metaheuristic and its capability for solving problems. We will show the main flavours within the field, and different problems that have been successfully solved employing this kind of techniques. Lecture No. 2: Parallel and Distributed Genetic Programming The successful application of Genetic Programming (GP, one of the available Evolutionary Algorithms) to optimization problems has encouraged an ...

  9. Parallel Evolution of Sperm Hyper-Activation Ca2+ Channels.

    Science.gov (United States)

    Cooper, Jacob C; Phadnis, Nitin

    2017-07-01

    Sperm hyper-activation is a dramatic change in sperm behavior where mature sperm burst into a final sprint in the race to the egg. The mechanism of sperm hyper-activation in many metazoans, including humans, consists of a jolt of Ca2+ into the sperm flagellum via CatSper ion channels. Surprisingly, all nine CatSper genes have been independently lost in several animal lineages. In Drosophila, sperm hyper-activation is performed through the cooption of the polycystic kidney disease 2 (pkd2) Ca2+ channel. The parallels between CatSpers in primates and pkd2 in Drosophila provide a unique opportunity to examine the molecular evolution of the sperm hyper-activation machinery in two independent, nonhomologous calcium channels separated by > 500 million years of divergence. Here, we use a comprehensive phylogenomic approach to investigate the selective pressures on these sperm hyper-activation channels. First, we find that the entire CatSper complex evolves rapidly under recurrent positive selection in primates. Second, we find that pkd2 has parallel patterns of adaptive evolution in Drosophila. Third, we show that this adaptive evolution of pkd2 is driven by its role in sperm hyper-activation. These patterns of selection suggest that the evolution of the sperm hyper-activation machinery is driven by sexual conflict with antagonistic ligands that modulate channel activity. Together, our results add sperm hyper-activation channels to the class of fast evolving reproductive proteins and provide insights into the mechanisms used by the sexes to manipulate sperm behavior. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  10. New Parallel Algorithms for Landscape Evolution Model

    Science.gov (United States)

    Jin, Y.; Zhang, H.; Shi, Y.

    2017-12-01

Most landscape evolution models (LEMs) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize because of the computation of drainage area for each node, which requires a huge amount of communication if run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for an LEM with a stream net. One algorithm partitions the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massively parallel computing techniques, and numerical experiments show that both are adequate for large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
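The drainage-area computation that dominates communication can be illustrated in serial form; this sketch (with a hypothetical receiver-array encoding of the stream net) accumulates each cell's area downstream:

```python
def drainage_area(receiver, cell_area=1.0):
    """Accumulate drainage area over a stream net.

    receiver[i] is the index of the downstream cell that cell i drains
    into, or i itself for an outlet. Sweeping cells from upstream to
    downstream lets each cell pass its accumulated area to its
    receiver exactly once.
    """
    n = len(receiver)
    area = [cell_area] * n
    # Count how many cells drain directly into each cell.
    ndonors = [0] * n
    for i, r in enumerate(receiver):
        if r != i:
            ndonors[r] += 1
    # Topological sweep: start from cells with no donors (ridge cells).
    stack = [i for i in range(n) if ndonors[i] == 0]
    while stack:
        i = stack.pop()
        r = receiver[i]
        if r != i:
            area[r] += area[i]
            ndonors[r] -= 1
            if ndonors[r] == 0:
                stack.append(r)
    return area

# Chain 0 -> 1 -> 2 (outlet): areas accumulate downstream.
assert drainage_area([1, 2, 2]) == [1.0, 2.0, 3.0]
```

The chain of dependencies along each stream is what forces communication when the grid is split across processes, which motivates the catchment-aware partitioning described in the abstract.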

  11. Effective Strategies for Teaching Evolution: The Primary Evolution Project

    Science.gov (United States)

    Hatcher, Chris

    2015-01-01

    When Chris Hatcher joined the Primary Evolution Project team at the University of Reading, his goal was to find effective strategies to teach evolution in a way that keeps children engaged and enthused. Hatcher has collaborated with colleagues at the University's Institute of Education to break the evolution unit down into distinct topics and…

  12. Parallel electric fields in a simulation of magnetotail reconnection and plasmoid evolution

    International Nuclear Information System (INIS)

    Hesse, M.; Birn, J.

    1990-01-01

Properties of the electric field component parallel to the magnetic field are investigated in a 3D MHD simulation of plasmoid formation and evolution in the magnetotail, in the presence of a net dawn-dusk magnetic field component. The spatial localization of E-parallel, the concept of a diffusion zone, and the role of E-parallel in accelerating electrons are discussed. A localization of the region of enhanced E-parallel in all space directions is found, with a strong concentration in the z direction. This region is identified as the diffusion zone, which plays a crucial role in reconnection theory through the local breakdown of magnetic flux conservation. 12 refs

  13. Design strategies for irregularly adapting parallel applications

    International Nuclear Information System (INIS)

Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Singh, Jaswinder Pal

    2000-01-01

    Achieving scalable performance for dynamic irregular applications is eminently challenging. Traditional message-passing approaches have been making steady progress towards this goal; however, they suffer from complex implementation requirements. The use of a global address space greatly simplifies the programming task, but can degrade the performance of dynamically adapting computations. In this work, we examine two major classes of adaptive applications, under five competing programming methodologies and four leading parallel architectures. Results indicate that it is possible to achieve message-passing performance using shared-memory programming techniques by carefully following the same high level strategies. Adaptive applications have computational work loads and communication patterns which change unpredictably at runtime, requiring dynamic load balancing to achieve scalable performance on parallel machines. Efficient parallel implementations of such adaptive applications are therefore a challenging task. This work examines the implementation of two typical adaptive applications, Dynamic Remeshing and N-Body, across various programming paradigms and architectural platforms. We compare several critical factors of the parallel code development, including performance, programmability, scalability, algorithmic development, and portability

  14. Parallel Note-Taking: A Strategy for Effective Use of Webnotes

    Science.gov (United States)

    Pardini, Eleanor A.; Domizi, Denise P.; Forbes, Daniel A.; Pettis, Gretchen V.

    2005-01-01

    Many instructors supply online lecture notes but little attention has been given to how students can make the best use of this resource. Based on observations of student difficulties with these notes, a strategy called parallel note-taking was developed for using online notes. The strategy is a hybrid of research-proven strategies for effective…

  15. From evolution theory to parallel and distributed genetic

    CERN Multimedia

    CERN. Geneva

    2007-01-01

    Lecture #1: From Evolution Theory to Evolutionary Computation. Evolutionary computation is a subfield of artificial intelligence (more particularly computational intelligence) involving combinatorial optimization problems, which are based to some degree on the evolution of biological life in the natural world. In this tutorial we will review the source of inspiration for this metaheuristic and its capability for solving problems. We will show the main flavours within the field, and different problems that have been successfully solved employing this kind of techniques. Lecture #2: Parallel and Distributed Genetic Programming. The successful application of Genetic Programming (GP, one of the available Evolutionary Algorithms) to optimization problems has encouraged an increasing number of researchers to apply these techniques to a large set of problems. Given the difficulty of some problems, much effort has been applied to improving the efficiency of GP during the last few years. Among the available proposals,...

  16. Strategy intervention for the evolution of fairness.

    Directory of Open Access Journals (Sweden)

    Yanling Zhang

The 'irrational' preference for fairness has attracted increasing attention. Although previous studies have focused on the effects of spitefulness on the evolution of fairness, they did not consider the non-monotonic rejections shown in behavioral experiments. In this paper, we introduce a non-monotonic rejection into an evolutionary model of the Ultimatum Game. We propose strategy intervention to study the evolution of fairness in general structured populations. By sequentially adding five strategies into the competition between a fair strategy and a selfish strategy, we arrive at the following conclusions. First, the evolution of fairness is inhibited by altruism but promoted by spitefulness. Second, the non-monotonic rejection helps fairness overcome selfishness. Particularly for group-structured populations, we analytically investigate how fairness, selfishness, altruism, and spitefulness are affected by population size, mutation, and migration in the competition among seven strategies. Our results may provide important insights into understanding the evolutionary origin of fairness.

  17. Inertia in strategy switching transforms the strategy evolution.

    Science.gov (United States)

    Zhang, Yanling; Fu, Feng; Wu, Te; Xie, Guangming; Wang, Long

    2011-12-01

    A recent experimental study [Traulsen et al., Proc. Natl. Acad. Sci. 107, 2962 (2010)] shows that human strategy updating involves both direct payoff comparison and the cost of switching strategy, which is equivalent to inertia. However, it remains largely unclear how such a predisposed inertia affects 2 × 2 games in a well-mixed population of finite size. To address this issue, the "inertia bonus" (strategy switching cost) is added to the learner payoff in the Fermi process. We find how inertia quantitatively shapes the stationary distribution and that stochastic stability under inertia exhibits three regimes, with each covering seven regions in the plane spanned by two inertia parameters. We also obtain the extended "1/3" rule with inertia and the speed criterion with inertia; these two findings hold for a population above two. We illustrate the above results in the framework of the Prisoner's Dilemma game. As inertia varies, two intriguing stationary distributions emerge: the probability of coexistence state is maximized, or those of two full states are simultaneously peaked. Our results may provide useful insights into how the inertia of changing status quo acts on the strategy evolution and, in particular, the evolution of cooperation.
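The Fermi update with an inertia bonus described above can be written out explicitly; a minimal sketch, with parameter names of our choosing:

```python
import math

def switch_probability(pi_self, pi_role_model, inertia, beta=1.0):
    """Probability that a learner adopts the role model's strategy.

    The inertia bonus (the cost of switching) is added to the
    learner's own payoff, so a switch becomes likely only when the
    role model outperforms the learner by more than that cost;
    beta is the intensity of selection in the Fermi function.
    """
    return 1.0 / (1.0 + math.exp(-beta * (pi_role_model - (pi_self + inertia))))

# With equal payoffs and no inertia, switching is a coin flip.
p0 = switch_probability(1.0, 1.0, inertia=0.0)
# Positive inertia suppresses switching at equal payoffs.
p1 = switch_probability(1.0, 1.0, inertia=2.0)
```

This makes the paper's central point concrete: inertia shifts the effective payoff comparison, which in turn reshapes the stationary distribution of the stochastic dynamics.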

  18. Machine learning for evolution strategies

    CERN Document Server

    Kramer, Oliver

    2016-01-01

    This book introduces numerous algorithmic hybridizations between both worlds that show how machine learning can improve and support evolution strategies. The set of methods comprises covariance matrix estimation, meta-modeling of fitness and constraint functions, dimensionality reduction for search and visualization of high-dimensional optimization processes, and clustering-based niching. After giving an introduction to evolution strategies and machine learning, the book builds the bridge between both worlds with an algorithmic and experimental perspective. Experiments mostly employ a (1+1)-ES and are implemented in Python using the machine learning library scikit-learn. The examples are conducted on typical benchmark problems illustrating algorithmic concepts and their experimental behavior. The book closes with a discussion of related lines of research.

  19. Input-Parallel Output-Parallel Three-Level DC/DC Converters With Interleaving Control Strategy for Minimizing and Balancing Capacitor Ripple Currents

    DEFF Research Database (Denmark)

    Liu, Dong; Deng, Fujin; Gong, Zheng

    2017-01-01

In this paper, the input-parallel output-parallel (IPOP) three-level (TL) DC/DC converters associated with the interleaving control strategy are proposed for minimizing and balancing the capacitor ripple currents. The proposed converters consist of two four-switch half-bridge three-level (HBTL) DC ...

  20. Map-Based Power-Split Strategy Design with Predictive Performance Optimization for Parallel Hybrid Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Jixiang Fan

    2015-09-01

In this paper, a map-based optimal energy management strategy is proposed to improve the consumption economy of a plug-in parallel hybrid electric vehicle. In the design of the maps, which provide both the torque split between engine and motor and the gear shift, not only the current vehicle speed and power demand but also the optimality based on the predicted trajectory of the vehicle dynamics are considered. To seek optimality, the equivalent consumption, which trades off fuel and electricity usage, is chosen as the cost function. Moreover, in order to decrease model errors in the optimization conducted in the discrete time domain, a variational integrator is employed to calculate the evolution of the vehicle dynamics. To evaluate the proposed energy management strategy, simulation results obtained on a professional GT-Suite simulator are presented, and a comparison with a real-time optimization method is also given to show the advantage of the proposed off-line optimization approach.
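The equivalent-consumption cost that trades off fuel and electricity can be sketched as follows (the equivalence factor, candidate torque splits, and numeric values are illustrative, not from the paper):

```python
def equivalent_consumption(fuel_rate, battery_power, s=2.5e-8):
    """Instantaneous equivalent fuel consumption (kg/s).

    fuel_rate: engine fuel flow in kg/s; battery_power: electrical
    power drawn from the battery in W; s is a tunable equivalence
    factor (kg/J) converting electrical energy to an equivalent
    fuel mass.
    """
    return fuel_rate + s * battery_power

# Candidate (fuel_rate, battery_power) operating points for one
# power demand; the strategy picks the split with the lowest
# equivalent consumption.
candidates = [
    (1.0e-3, 0.0),        # engine only
    (0.6e-3, 10_000.0),   # blended
    (0.0, 30_000.0),      # electric only
]
best = min(candidates, key=lambda c: equivalent_consumption(*c))
```

In the map-based design, such a minimization would be solved offline over predicted vehicle trajectories and stored as lookup maps indexed by speed and power demand.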

  1. Development and application of efficient strategies for parallel magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Breuer, F.

    2006-07-01

Virtually all existing MRI applications require both high spatial and high temporal resolution for optimum detection and classification of the state of disease. The main strategy to meet the increasing demands of advanced diagnostic imaging applications has been the steady improvement of gradient systems, which provide increased gradient strengths and faster switching times. Rapid imaging techniques and the advances in gradient performance have significantly reduced acquisition times from about an hour to several minutes or seconds. In order to further increase imaging speed, much higher gradient strengths and much faster switching times are required, which are technically challenging to provide. In addition to significant hardware costs, peripheral nerve stimulation and the surpassing of admissible acoustic noise levels may occur. Today's whole-body gradient systems already operate just below the allowed safety levels. For these reasons, alternative strategies are needed to bypass these limitations. The greatest progress in further increasing imaging speed has been the development of multi-coil arrays and the advent of partially parallel acquisition (PPA) techniques in the late 1990s. In recent years, parallel imaging methods have become commercially available and are therefore ready for broad clinical use. The basic feature of parallel imaging is a scan time reduction, applicable to nearly any available MRI method, while maintaining the contrast behavior without requiring higher gradient system performance. PPA operates by allowing an array of receiver surface coils, positioned around the object under investigation, to partially replace time-consuming spatial encoding, which is normally performed by switching magnetic field gradients. Using this strategy, spatial resolution can be improved for a given imaging time, or scan times can be reduced at a given spatial resolution. Furthermore, in some cases, PPA can even be used to reduce image artifacts.

  2. Development and application of efficient strategies for parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Breuer, F.

    2006-01-01

Virtually all existing MRI applications require both high spatial and high temporal resolution for optimum detection and classification of the state of disease. The main strategy to meet the increasing demands of advanced diagnostic imaging applications has been the steady improvement of gradient systems, which provide increased gradient strengths and faster switching times. Rapid imaging techniques and the advances in gradient performance have significantly reduced acquisition times from about an hour to several minutes or seconds. In order to further increase imaging speed, much higher gradient strengths and much faster switching times are required, which are technically challenging to provide. In addition to significant hardware costs, peripheral nerve stimulation and the surpassing of admissible acoustic noise levels may occur. Today's whole-body gradient systems already operate just below the allowed safety levels. For these reasons, alternative strategies are needed to bypass these limitations. The greatest progress in further increasing imaging speed has been the development of multi-coil arrays and the advent of partially parallel acquisition (PPA) techniques in the late 1990s. In recent years, parallel imaging methods have become commercially available and are therefore ready for broad clinical use. The basic feature of parallel imaging is a scan time reduction, applicable to nearly any available MRI method, while maintaining the contrast behavior without requiring higher gradient system performance. PPA operates by allowing an array of receiver surface coils, positioned around the object under investigation, to partially replace time-consuming spatial encoding, which is normally performed by switching magnetic field gradients. Using this strategy, spatial resolution can be improved for a given imaging time, or scan times can be reduced at a given spatial resolution. Furthermore, in some cases, PPA can even be used to reduce image artifacts.

  3. New adaptive differencing strategy in the PENTRAN 3-d parallel Sn code

    International Nuclear Information System (INIS)

    Sjoden, G.E.; Haghighat, A.

    1996-01-01

It is known that three-dimensional (3-D) discrete ordinates (Sn) transport problems require an immense amount of storage and computational effort to solve. For this reason, parallel codes that offer a capability to completely decompose the angular, energy, and spatial domains among a distributed network of processors are required. One such code recently developed is PENTRAN, which iteratively solves 3-D multigroup, anisotropic Sn problems on distributed-memory platforms such as the IBM SP2. Because large problems typically contain several different material zones with various properties, available differencing schemes should automatically adapt to the transport physics in each material zone. To minimize the memory and message-passing overhead required for massively parallel Sn applications, available differencing schemes in an adaptive strategy should also offer reasonable accuracy and positivity, yet require only the zeroth spatial moment of the transport equation; differencing schemes based on higher spatial moments, in spite of their greater accuracy, require at least twice the storage and communication cost for implementation in a massively parallel transport code. This paper discusses a new adaptive differencing strategy that uses increasingly accurate schemes with low parallel memory and communication overhead. This strategy, implemented in PENTRAN, includes a new scheme, exponential directional averaged (EDA) differencing.

  4. Parallel Evolution of Copy-Number Variation across Continents in Drosophila melanogaster.

    Science.gov (United States)

    Schrider, Daniel R; Hahn, Matthew W; Begun, David J

    2016-05-01

Genetic differentiation across populations that is maintained in the presence of gene flow is a hallmark of spatially varying selection. In Drosophila melanogaster, the latitudinal clines across the eastern coasts of Australia and North America appear to be examples of this type of selection, with recent studies showing that a substantial portion of the D. melanogaster genome exhibits allele frequency differentiation with respect to latitude on both continents. As of yet there has been no genome-wide examination of differentiated copy-number variants (CNVs) in these geographic regions, despite their potential importance for phenotypic variation in Drosophila and other taxa. Here, we present an analysis of geographic variation in CNVs in D. melanogaster. We also present the first genomic analysis of geographic variation for copy-number variation in the sister species, D. simulans, in order to investigate patterns of parallel evolution in these close relatives. In D. melanogaster we find hundreds of CNVs, many of which show parallel patterns of geographic variation on both continents, lending support to the idea that they are influenced by spatially varying selection. These findings support the idea that polymorphic CNVs contribute to local adaptation in D. melanogaster. In contrast, we find very few CNVs in D. simulans that are geographically differentiated in parallel on both continents, consistent with earlier work suggesting that clinal patterns are weaker in this species. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Pursuing Darwin's curious parallel: Prospects for a science of cultural evolution.

    Science.gov (United States)

    Mesoudi, Alex

    2017-07-24

    In the past few decades, scholars from several disciplines have pursued the curious parallel noted by Darwin between the genetic evolution of species and the cultural evolution of beliefs, skills, knowledge, languages, institutions, and other forms of socially transmitted information. Here, I review current progress in the pursuit of an evolutionary science of culture that is grounded in both biological and evolutionary theory, but also treats culture as more than a proximate mechanism that is directly controlled by genes. Both genetic and cultural evolution can be described as systems of inherited variation that change over time in response to processes such as selection, migration, and drift. Appropriate differences between genetic and cultural change are taken seriously, such as the possibility in the latter of nonrandomly guided variation or transformation, blending inheritance, and one-to-many transmission. The foundation of cultural evolution was laid in the late 20th century with population-genetic style models of cultural microevolution, and the use of phylogenetic methods to reconstruct cultural macroevolution. Since then, there have been major efforts to understand the sociocognitive mechanisms underlying cumulative cultural evolution, the consequences of demography on cultural evolution, the empirical validity of assumed social learning biases, the relative role of transformative and selective processes, and the use of quantitative phylogenetic and multilevel selection models to understand past and present dynamics of society-level change. I conclude by highlighting the interdisciplinary challenges of studying cultural evolution, including its relation to the traditional social sciences and humanities.

  6. Parallel strategy for optimal learning in perceptrons

    International Nuclear Information System (INIS)

    Neirotti, J P

    2010-01-01

We developed a parallel strategy for optimally learning specific realizable rules by perceptrons, in an online learning scenario. Our result is a generalization of the Caticha-Kinouchi (CK) algorithm, developed for learning a perceptron with a synaptic vector drawn from a uniform distribution over the N-dimensional sphere, the so-called typical case. Our method outperforms the CK algorithm in almost all possible situations, failing only in a denumerable set of cases. The algorithm is optimal in the sense that it saturates Bayesian bounds when it succeeds.
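To make the online-learning setting of this record concrete, the sketch below trains a plain mistake-driven perceptron on a teacher rule drawn from the N-sphere (the "typical case" mentioned above). It is not the CK algorithm or its parallel generalization; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50        # input dimension
steps = 4000  # number of online examples

# Teacher ("rule") vector drawn uniformly from the N-sphere: the typical case.
B = rng.standard_normal(N); B /= np.linalg.norm(B)
# Student starts from a random direction.
J = rng.standard_normal(N); J /= np.linalg.norm(J)

def gen_error(J, B):
    # Generalization error of a perceptron: arccos(overlap) / pi.
    R = J @ B / (np.linalg.norm(J) * np.linalg.norm(B))
    return np.arccos(np.clip(R, -1.0, 1.0)) / np.pi

e0 = gen_error(J, B)
for _ in range(steps):
    x = rng.standard_normal(N)
    sigma = np.sign(B @ x)           # teacher label
    if np.sign(J @ x) != sigma:      # plain perceptron rule: learn on mistakes
        J += (sigma / np.sqrt(N)) * x

e1 = gen_error(J, B)
print(e0, e1)  # generalization error shrinks as examples accumulate
```

The optimal algorithms the record refers to replace the crude mistake-driven update with a modulation function that saturates the Bayesian bound; this sketch only shows the setting they improve on.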

  7. Reliability–redundancy allocation problem considering optimal redundancy strategy using parallel genetic algorithm

    International Nuclear Information System (INIS)

    Kim, Heungseob; Kim, Pansoo

    2017-01-01

    To maximize the reliability of a system, the traditional reliability–redundancy allocation problem (RRAP) determines the component reliability and level of redundancy for each subsystem. This paper proposes an advanced RRAP that also considers the optimal redundancy strategy, either active or cold standby. In addition, new examples are presented for it. Furthermore, the exact reliability function for a cold standby redundant subsystem with an imperfect detector/switch is suggested, and is expected to replace the previous approximating model that has been used in most related studies. A parallel genetic algorithm for solving the RRAP as a mixed-integer nonlinear programming model is presented, and its performance is compared with those of previous studies by using numerical examples on three benchmark problems. - Highlights: • Optimal strategy is proposed to solve reliability redundancy allocation problem. • The redundancy strategy uses parallel genetic algorithm. • Improved reliability function for a cold standby subsystem is suggested. • Proposed redundancy strategy enhances the system reliability.
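The flavor of a genetic algorithm for redundancy allocation can be shown with a deliberately simplified, serial sketch: active redundancy only, a made-up three-subsystem series system, and a budget constraint. The cold-standby model, imperfect detector/switch, and parallelization of the cited paper are all omitted.

```python
import random

random.seed(1)

# Hypothetical 3-subsystem series system with active redundancy:
# subsystem reliability = 1 - (1 - r)^n ; system reliability = product.
r = [0.80, 0.85, 0.90]   # component reliabilities (illustrative)
cost = [3.0, 4.0, 5.0]   # cost per component (illustrative)
BUDGET = 40.0

def reliability(n):
    R = 1.0
    for ri, ni in zip(r, n):
        R *= 1.0 - (1.0 - ri) ** ni
    return R

def fitness(n):
    c = sum(ci * ni for ci, ni in zip(cost, n))
    return reliability(n) if c <= BUDGET else 0.0  # reject over-budget designs

def mutate(n):
    # change one subsystem's redundancy level by +-1, staying in {1..6}
    i = random.randrange(len(n))
    child = list(n)
    child[i] = max(1, min(6, child[i] + random.choice((-1, 1))))
    return child

# Tiny elitist GA over integer redundancy levels.
pop = [[random.randint(1, 6) for _ in r] for _ in range(20)]
for _ in range(200):
    children = [mutate(random.choice(pop)) for _ in range(20)]
    pop = sorted(pop + children, key=fitness, reverse=True)[:20]

best = pop[0]
print(best, reliability(best))
```

The real RRAP additionally chooses the redundancy strategy (active vs. cold standby) per subsystem, which turns the chromosome into a mixed-integer design vector.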

  8. Research on Control Strategy of Complex Systems through VSC-HVDC Grid Parallel Device

    Directory of Open Access Journals (Sweden)

    Xue Mei-Juan

    2014-07-01

Full Text Available After grid paralleling is completed, the device can be converted into a UPFC, STATCOM or SSSC through corresponding switching operations; the conversion circuit and transformation method are studied. The device accomplishes grid paralleling, comprehensive control of the tie-line, and stable operation and control of the grid after paralleling. A function-selection switch matrix and branch variables of the grid parallel system are defined, so that a switch matrix realizing each function of the composite system can be formed. A selection criterion is then derived to choose the control strategy according to the switch matrix and accomplish the corresponding function. Integrating grid paralleling, STATCOM, SSSC and UPFC into one system improves the stable operation and flexible control of the power system.

  9. A Parallel Strategy for Convolutional Neural Network Based on Heterogeneous Cluster for Mobile Information System

    Directory of Open Access Journals (Sweden)

    Jilin Zhang

    2017-01-01

Full Text Available With the development of mobile systems, we gain many benefits and conveniences by leveraging mobile devices; at the same time, the information gathered by smartphones, such as location and environment, is also valuable for businesses seeking to provide more intelligent services for customers. More and more machine learning methods have been used in the field of mobile information systems to study user behavior and classify usage patterns, especially convolutional neural networks. As model parameters and data scale increase, the traditional single-machine training method cannot meet the time-complexity requirements of practical application scenarios. Current training frameworks often use simple data-parallel or model-parallel methods to speed up training, leaving heterogeneous computing resources underutilized. To solve these problems, our paper proposes a delay-synchronization convolutional neural network parallel strategy that leverages heterogeneous systems. The strategy is based on both synchronous and asynchronous parallel approaches; the model training process can reduce its dependence on the heterogeneous architecture while ensuring model convergence, making the convolutional neural network framework more adaptive to different heterogeneous system environments. The experimental results show that the proposed delay synchronization strategy achieves at least a threefold speedup compared with traditional data parallelism.
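The core idea, local training punctuated by delayed synchronization, can be sketched on a toy least-squares task standing in for CNN training. Workers are simulated serially here, and every number is illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy least-squares task standing in for CNN training.
X = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
y = X @ w_true

W = 4     # simulated workers (data-parallel shards)
K = 10    # delay: workers synchronize only every K local steps
lr = 0.05
shards = np.array_split(np.arange(200), W)

w_global = np.zeros(5)
for _ in range(30):                       # synchronization rounds
    local_models = []
    for s in shards:                      # each worker trains on its shard...
        w = w_global.copy()
        for _ in range(K):                # ...for K steps before syncing (the "delay")
            g = X[s].T @ (X[s] @ w - y[s]) / len(s)
            w -= lr * g
        local_models.append(w)
    w_global = np.mean(local_models, axis=0)  # delayed synchronization: average models

err = np.linalg.norm(w_global - w_true)
print(err)
```

Lengthening K trades communication for staleness; the paper's contribution is bounding that staleness so convergence is preserved on heterogeneous hardware.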

  10. The role of Bh4 in parallel evolution of hull colour in domesticated and weedy rice.

    Science.gov (United States)

    Vigueira, C C; Li, W; Olsen, K M

    2013-08-01

The two independent domestication events in the genus Oryza that led to African and Asian rice offer an extremely useful system for studying the genetic basis of parallel evolution. This system is also characterized by parallel de-domestication events, with two genetically distinct weedy rice biotypes in the US derived from the Asian domesticate. One important trait that has been altered by rice domestication and de-domestication is hull colour. The wild progenitors of the two cultivated rice species have predominantly black-coloured hulls, as does one of the two U.S. weed biotypes; both cultivated species and one of the US weedy biotypes are characterized by straw-coloured hulls. Using Black hull 4 (Bh4) as a hull colour candidate gene, we examined DNA sequence variation at this locus to study the parallel evolution of hull colour variation in the domesticated and weedy rice system. We find that independent Bh4-coding mutations have arisen in African and Asian rice that are correlated with the straw hull phenotype, suggesting that the same gene is responsible for parallel trait evolution. For the U.S. weeds, Bh4 haplotype sequences support current hypotheses on the phylogenetic relationship between the two biotypes and domesticated Asian rice; straw hull weeds are most similar to indica crops, and black hull weeds are most similar to aus crops. Tests for selection indicate that Asian crops and straw hull weeds deviate from neutrality at this gene, suggesting possible selection on Bh4 during both rice domestication and de-domestication.

  11. An effective approach to reducing strategy space for maintenance optimisation of multistate series–parallel systems

    International Nuclear Information System (INIS)

    Zhou, Yifan; Lin, Tian Ran; Sun, Yong; Bian, Yangqing; Ma, Lin

    2015-01-01

    Maintenance optimisation of series–parallel systems is a research topic of practical significance. Nevertheless, a cost-effective maintenance strategy is difficult to obtain due to the large strategy space for maintenance optimisation of such systems. The heuristic algorithm is often employed to deal with this problem. However, the solution obtained by the heuristic algorithm is not always the global optimum and the algorithm itself can be very time consuming. An alternative method based on linear programming is thus developed in this paper to overcome such difficulties by reducing strategy space of maintenance optimisation. A theoretical proof is provided in the paper to verify that the proposed method is at least as effective as the existing methods for strategy space reduction. Numerical examples for maintenance optimisation of series–parallel systems having multistate components and considering both economic dependence among components and multiple-level imperfect maintenance are also presented. The simulation results confirm that the proposed method is more effective than the existing methods in removing inappropriate maintenance strategies of multistate series–parallel systems. - Highlights: • A new method using linear programming is developed to reduce the strategy space. • The effectiveness of the new method for strategy reduction is theoretically proved. • Imperfect maintenance and economic dependence are considered during optimisation

  12. Parsing parallel evolution: ecological divergence and differential gene expression in the adaptive radiations of thick-lipped Midas cichlid fishes from Nicaragua.

    Science.gov (United States)

    Manousaki, Tereza; Hull, Pincelli M; Kusche, Henrik; Machado-Schiaffino, Gonzalo; Franchini, Paolo; Harrod, Chris; Elmer, Kathryn R; Meyer, Axel

    2013-02-01

The study of parallel evolution facilitates the discovery of common rules of diversification. Here, we examine the repeated evolution of thick lips in Midas cichlid fishes (the Amphilophus citrinellus species complex)-from two Great Lakes and two crater lakes in Nicaragua-to assess whether similar changes in ecology, phenotypic trophic traits and gene expression accompany parallel trait evolution. Using next-generation sequencing technology, we characterize transcriptome-wide differential gene expression in the lips of wild-caught sympatric thick- and thin-lipped cichlids from all four instances of repeated thick-lip evolution. Six genes (apolipoprotein D, myelin-associated glycoprotein precursor, four-and-a-half LIM domain protein 2, calpain-9, GTPase IMAP family member 8-like and one hypothetical protein) are significantly underexpressed in the thick-lipped morph across all four lakes. However, other aspects of lips' gene expression in sympatric morphs differ in a lake-specific pattern, including the magnitude of differentially expressed genes (97-510). Generally, fewer genes are differentially expressed among morphs in the younger crater lakes than in those from the older Great Lakes. Body shape, lower pharyngeal jaw size and shape, and stable isotopes (δ(13)C and δ(15)N) differ between all sympatric morphs, with the greatest differentiation in the Great Lake Nicaragua. Some ecological traits evolve in parallel (those related to foraging ecology; e.g. lip size, body and head shape) but others, somewhat surprisingly, do not (those related to diet and food processing; e.g. jaw size and shape, stable isotopes). Taken together, this case of parallelism among thick- and thin-lipped cichlids shows a mosaic pattern of parallel and nonparallel evolution.

  13. A novel harmonic current sharing control strategy for parallel-connected inverters

    DEFF Research Database (Denmark)

    Guan, Yajuan; Guerrero, Josep M.; Savaghebi, Mehdi

    2017-01-01

A novel control strategy which enables proportional linear and nonlinear load sharing among paralleled inverters and voltage harmonic suppression is proposed in this paper. The proposed method is based on an autonomous currents sharing controller (ACSC) instead of conventional power droop control to provide fast transient response, decoupling control and a large stability margin. The current components at different sequences and orders are decomposed by a multi-second-order generalized integrator-based frequency-locked loop (MSOGI-FLL). A harmonic-orthogonal-virtual-resistances controller (HOVR…) is used to proportionally share current components at different sequences and orders independently among the paralleled inverters. Proportional resonance controllers tuned at selected frequencies are used to suppress voltage harmonics. Simulations based on two 2.2 kW paralleled three-phase inverters…

  14. Molecular bases for parallel evolution of translucent bracts in an alpine "glasshouse" plant Rheum alexandrae (Polygonaceae)

    Czech Academy of Sciences Publication Activity Database

    Liu, B. B.; Opgenoorth, L.; Miehe, G.; Zhang, D.-Y.; Wan, D.-S.; Zhao, C.-M.; Jia, Dong-Rui; Liu, J.-Q.

    2013-01-01

    Roč. 51, č. 2 (2013), s. 134-141 ISSN 1674-4918 Institutional support: RVO:67985939 Keywords : cDNA-AFLPs * parallel evolution * adaptations, mutations, diversity Subject RIV: EF - Botanics Impact factor: 1.648, year: 2013

  15. Stochastic resonance and the evolution of Daphnia foraging strategy

    International Nuclear Information System (INIS)

    Dees, Nathan D; Bahar, Sonya; Moss, Frank

    2008-01-01

    Search strategies are currently of great interest, with reports on foraging ranging from albatrosses and spider monkeys to microzooplankton. Here, we investigate the role of noise in optimizing search strategies. We focus on the zooplankton Daphnia, which move in successive sequences consisting of a hop, a pause and a turn through an angle. Recent experiments have shown that their turning angle distributions (TADs) and underlying noise intensities are similar across species and age groups, suggesting an evolutionary origin of this internal noise. We explore this hypothesis further with a digital simulation (EVO) based solely on the three central Darwinian themes: inheritability, variability and survivability. Separate simulations utilizing stochastic resonance (SR) indicate that foraging success, and hence fitness, is maximized at an optimum TAD noise intensity, which is represented by the distribution's characteristic width, σ. In both the EVO and SR simulations, foraging success is the criterion, and the results are the predicted characteristic widths of the TADs that maximize success. Our results are twofold: (1) the evolving characteristic widths achieve stasis after many generations; (2) as a hop length parameter is changed, variations in the evolved widths generated by EVO parallel those predicted by SR. These findings provide support for the hypotheses that (1) σ is an evolved quantity and that (2) SR plays a role in evolution. (communication)
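A minimal hop-pause-turn foraging simulation in the spirit of the EVO/SR studies shows why a nonzero turning-angle-distribution (TAD) width can pay off: a ballistic walker leaves the food patch quickly, while noisy turning keeps the walker inside it. Every parameter here is invented for illustration; this is not the authors' simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forage(sigma, trials=120, hops=200, n_food=300, patch=20.0, reach=0.5):
    """Mean food collected by a hop-pause-turn walker whose turning angles
    come from a Gaussian TAD of characteristic width sigma."""
    total = 0
    for _ in range(trials):
        # Food items scattered uniformly over a disc-shaped patch.
        ang = rng.uniform(0.0, 2 * np.pi, n_food)
        rad = patch * np.sqrt(rng.uniform(0.0, 1.0, n_food))
        food = np.column_stack((rad * np.cos(ang), rad * np.sin(ang)))
        alive = np.ones(n_food, dtype=bool)
        pos = np.zeros(2)
        heading = rng.uniform(0.0, 2 * np.pi)
        for _ in range(hops):
            heading += rng.normal(0.0, sigma)  # turn through a noisy angle
            pos = pos + np.array([np.cos(heading), np.sin(heading)])  # hop
            d = np.linalg.norm(food - pos, axis=1)
            alive &= d > reach                 # eat anything within reach
        total += n_food - alive.sum()
    return total / trials

straight = forage(sigma=0.0)  # ballistic walker: exits the patch quickly
noisy = forage(sigma=1.0)     # noisy turning keeps the walker among the food
print(straight, noisy)
```

The EVO and SR simulations go further, evolving or scanning σ to locate the width that maximizes foraging success; this sketch only demonstrates that the success depends on σ at all.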

  16. Parallel evolution of mound-building and grass-feeding in Australian nasute termites.

    Science.gov (United States)

    Arab, Daej A; Namyatova, Anna; Evans, Theodore A; Cameron, Stephen L; Yeates, David K; Ho, Simon Y W; Lo, Nathan

    2017-02-01

Termite mounds built by representatives of the family Termitidae are among the most spectacular constructions in the animal kingdom, reaching 6-8 m in height and housing millions of individuals. Although functional aspects of these structures are well studied, their evolutionary origins remain poorly understood. Australian representatives of the termitid subfamily Nasutitermitinae display a wide variety of nesting habits, making them an ideal group for investigating the evolution of mound building. Because they feed on a variety of substrates, they also provide an opportunity to illuminate the evolution of termite diets. Here, we investigate the evolution of termitid mound building and diet, through a comprehensive molecular phylogenetic analysis of Australian Nasutitermitinae. Molecular dating analysis indicates that the subfamily has colonized Australia on three occasions over the past approximately 20 Myr. Ancestral-state reconstruction showed that mound building arose on multiple occasions and from diverse ancestral nesting habits, including arboreal and wood or soil nesting. Grass feeding appears to have evolved from wood feeding via ancestors that fed on both wood and leaf litter. Our results underscore the adaptability of termites to ancient environmental change, and provide novel examples of parallel evolution of extended phenotypes.

  17. Pursuing Darwin’s curious parallel: Prospects for a science of cultural evolution

    Science.gov (United States)

    2017-01-01

    In the past few decades, scholars from several disciplines have pursued the curious parallel noted by Darwin between the genetic evolution of species and the cultural evolution of beliefs, skills, knowledge, languages, institutions, and other forms of socially transmitted information. Here, I review current progress in the pursuit of an evolutionary science of culture that is grounded in both biological and evolutionary theory, but also treats culture as more than a proximate mechanism that is directly controlled by genes. Both genetic and cultural evolution can be described as systems of inherited variation that change over time in response to processes such as selection, migration, and drift. Appropriate differences between genetic and cultural change are taken seriously, such as the possibility in the latter of nonrandomly guided variation or transformation, blending inheritance, and one-to-many transmission. The foundation of cultural evolution was laid in the late 20th century with population-genetic style models of cultural microevolution, and the use of phylogenetic methods to reconstruct cultural macroevolution. Since then, there have been major efforts to understand the sociocognitive mechanisms underlying cumulative cultural evolution, the consequences of demography on cultural evolution, the empirical validity of assumed social learning biases, the relative role of transformative and selective processes, and the use of quantitative phylogenetic and multilevel selection models to understand past and present dynamics of society-level change. I conclude by highlighting the interdisciplinary challenges of studying cultural evolution, including its relation to the traditional social sciences and humanities. PMID:28739929

  18. Convergent, Parallel and Correlated Evolution of Trophic Morphologies in the Subfamily Schizothoracinae from the Qinghai-Tibetan Plateau

    Science.gov (United States)

    Qi, Delin; Chao, Yan; Guo, Songchang; Zhao, Lanying; Li, Taiping; Wei, Fulei; Zhao, Xinquan

    2012-01-01

    Schizothoracine fishes distributed in the water system of the Qinghai-Tibetan plateau (QTP) and adjacent areas are characterized by being highly adaptive to the cold and hypoxic environment of the plateau, as well as by a high degree of diversity in trophic morphology due to resource polymorphisms. Although convergent and parallel evolution are prevalent in the organisms of the QTP, it remains unknown whether similar evolutionary patterns have occurred in the schizothoracine fishes. Here, we constructed for the first time a tentative molecular phylogeny of the schizothoracine fishes based on the complete sequences of the cytochrome b gene. We employed this molecular phylogenetic framework to examine the evolution of trophic morphologies. We used Pagel's maximum likelihood method to estimate the evolutionary associations of trophic morphologies and food resource use. Our results showed that the molecular and published morphological phylogenies of Schizothoracinae are partially incongruent with respect to some intergeneric relationships. The phylogenetic results revealed that four character states of five trophic morphologies and of food resource use evolved at least twice during the diversification of the subfamily. State transitions are the result of evolutionary patterns including either convergence or parallelism or both. Furthermore, our analyses indicate that some characters of trophic morphologies in the Schizothoracinae have undergone correlated evolution, which are somewhat correlated with different food resource uses. Collectively, our results reveal new examples of convergent and parallel evolution in the organisms of the QTP. The adaptation to different trophic niches through the modification of trophic morphologies and feeding behaviour as found in the schizothoracine fishes may account for the formation and maintenance of the high degree of diversity and radiations in fish communities endemic to QTP. PMID:22470515

  19. Mixed integer evolution strategies for parameter optimization.

    Science.gov (United States)

    Li, Rui; Emmerich, Michael T M; Eggermont, Jeroen; Bäck, Thomas; Schütz, M; Dijkstra, J; Reiber, J H C

    2013-01-01

    Evolution strategies (ESs) are powerful probabilistic search and optimization algorithms gleaned from biological evolution theory. They have been successfully applied to a wide range of real world applications. The modern ESs are mainly designed for solving continuous parameter optimization problems. Their ability to adapt the parameters of the multivariate normal distribution used for mutation during the optimization run makes them well suited for this domain. In this article we describe and study mixed integer evolution strategies (MIES), which are natural extensions of ES for mixed integer optimization problems. MIES can deal with parameter vectors consisting not only of continuous variables but also with nominal discrete and integer variables. Following the design principles of the canonical evolution strategies, they use specialized mutation operators tailored for the aforementioned mixed parameter classes. For each type of variable, the choice of mutation operators is governed by a natural metric for this variable type, maximal entropy, and symmetry considerations. All distributions used for mutation can be controlled in their shape by means of scaling parameters, allowing self-adaptation to be implemented. After introducing and motivating the conceptual design of the MIES, we study the optimality of the self-adaptation of step sizes and mutation rates on a generalized (weighted) sphere model. Moreover, we prove global convergence of the MIES on a very general class of problems. The remainder of the article is devoted to performance studies on artificial landscapes (barrier functions and mixed integer NK landscapes), and a case study in the optimization of medical image analysis systems. In addition, we show that with proper constraint handling techniques, MIES can also be applied to classical mixed integer nonlinear programming problems.
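The mutation design described above can be sketched in a few lines: log-normal self-adaptation of the strategy parameters, Gaussian steps for continuous variables, symmetric geometric-difference steps for integers, and uniform resets for nominal variables. The operator details below follow the canonical MIES design only loosely and are not the exact operators of the cited article.

```python
import math
import random

random.seed(42)

def mutate_mixed(x_real, x_int, x_nom, sigma, zeta, p, nom_domain):
    """One MIES-style mutation of a mixed vector (sketch, not the paper's code).
    sigma: real step size; zeta: mean integer step; p: nominal reset probability."""
    n = len(x_real) + len(x_int) + len(x_nom)
    tau = 1.0 / math.sqrt(2.0 * n)
    # Self-adapt strategy parameters first, then mutate with the new values.
    sigma = sigma * math.exp(tau * random.gauss(0, 1))
    zeta = max(1.0, zeta * math.exp(tau * random.gauss(0, 1)))
    p = min(0.5, max(1.0 / n, p * math.exp(tau * random.gauss(0, 1))))

    # Continuous variables: additive Gaussian steps.
    x_real = [xi + sigma * random.gauss(0, 1) for xi in x_real]

    def geometric_step(mean):
        # Difference of two geometric variables: a symmetric, maximal-entropy
        # integer step whose magnitude is controlled by `mean`.
        u = 1.0 - 1.0 / (1.0 + mean)
        def g():
            return int(math.log(1.0 - random.random()) / math.log(u))
        return g() - g()

    x_int = [xi + geometric_step(zeta) for xi in x_int]
    # Nominal variables: uniform reset with probability p.
    x_nom = [random.choice(nom_domain) if random.random() < p else xi
             for xi in x_nom]
    return x_real, x_int, x_nom, sigma, zeta, p

child = mutate_mixed([0.5, -1.2], [3, 7], ["red"], 0.3, 2.0, 0.2,
                     ["red", "green", "blue"])
print(child)
```

Each variable class keeps its own strategy parameter, so step sizes and mutation rates can adapt independently during the run, which is the self-adaptation property studied in the article.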

  20. A Novel Reconfiguration Strategy of a Delta-Type Parallel Manipulator

    Directory of Open Access Journals (Sweden)

    Albert Lester Balmaceda-Santamaría

    2016-02-01

    Full Text Available This work introduces a novel reconfiguration strategy for a Delta-type parallel robot. The robot at hand, whose patent is pending, is equipped with an intermediate mechanism that allows for modifying the operational Cartesian workspace. Furthermore, singularities of the robot may be ameliorated owing to the inherent kinematic redundancy introduced by four actuable kinematic joints. The velocity and acceleration analyses of the parallel manipulator are carried out by resorting to reciprocal-screw theory. Finally, the manipulability of the new robot is investigated based on the computation of the condition number associated with the active Jacobian matrix, a well-known procedure. The results obtained show improved performance of the robot introduced when compared with results generated for another Delta-type robot.

  1. Proxy-equation paradigm: A strategy for massively parallel asynchronous computations

    Science.gov (United States)

    Mittal, Ankita; Girimaji, Sharath

    2017-09-01

Massively parallel simulations of transport equation systems call for a paradigm change in algorithm development to achieve efficient scalability. Traditional approaches require time synchronization of processing elements (PEs), which severely restricts scalability. Relaxing the synchronization requirement introduces error and slows down convergence. In this paper, we propose and develop a novel "proxy equation" concept for a general transport equation that (i) tolerates asynchrony with minimal added error, (ii) preserves convergence order, and thus (iii) is expected to scale efficiently on massively parallel machines. The central idea is to modify a priori the transport equation at the PE boundaries to offset asynchrony errors. Proof-of-concept computations are performed using a one-dimensional advection (convection) diffusion equation. The results demonstrate the promise and advantages of the present strategy.

  2. Battery parameterisation based on differential evolution via a boundary evolution strategy

    DEFF Research Database (Denmark)

    Yang, Guangya

    2013-01-01

As the equivalent circuit model is an abstract map of the battery electric characteristics, the determination of the possible ranges of its parameters can be a challenging task. In this paper, an efficient yet easy to implement method is proposed to parameterise the equivalent circuit model of batteries utilising the advances of evolutionary algorithms (EAs). Differential evolution (DE) is selected and modified to parameterise an equivalent circuit model of lithium-ion batteries. A boundary evolution strategy (BES) is developed and incorporated into the DE to update the parameter boundaries during the parameterisation…
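The approach can be illustrated with a toy parameterisation: ordinary DE/rand/1/bin fitting a one-branch RC relaxation curve, plus a crude boundary-shrinking step standing in for the paper's boundary evolution strategy. The circuit model, data, and all constants below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "measurement": voltage across one RC branch of an equivalent
# circuit responding to a unit current step (parameters are invented).
t = np.linspace(0.0, 50.0, 100)
R1_true, C1_true = 0.02, 800.0
v_meas = R1_true * (1.0 - np.exp(-t / (R1_true * C1_true)))

def loss(theta):
    R1, C1 = theta
    v = R1 * (1.0 - np.exp(-t / (R1 * C1)))
    return float(np.sum((v - v_meas) ** 2))

lo = np.array([0.001, 100.0])   # initial parameter boundaries [R1, C1]
hi = np.array([0.1, 5000.0])
NP, F, CR = 20, 0.7, 0.9
pop = lo + rng.uniform(size=(NP, 2)) * (hi - lo)

for gen in range(150):
    for i in range(NP):
        a, b, c = pop[rng.choice(NP, 3, replace=False)]
        mutant = a + F * (b - c)                       # DE/rand/1 mutation
        trial = np.where(rng.uniform(size=2) < CR, mutant, pop[i])
        trial = np.clip(trial, lo, hi)
        if loss(trial) < loss(pop[i]):                 # greedy selection
            pop[i] = trial
    # Crude stand-in for the boundary evolution strategy: periodically
    # shrink the search box around the surviving population.
    if gen % 30 == 29:
        span = hi - lo
        lo = np.maximum(lo, pop.min(axis=0) - 0.05 * span)
        hi = np.minimum(hi, pop.max(axis=0) + 0.05 * span)

best = min(pop, key=loss)
print(best, loss(best))
```

Updating the boundaries as the population converges is what lets the method start from loose, physically plausible ranges rather than precise prior knowledge of the parameters.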

  3. Parallel Evolution of Copy-Number Variation across Continents in Drosophila melanogaster

    Science.gov (United States)

    Schrider, Daniel R.; Hahn, Matthew W.; Begun, David J.

    2016-01-01

    Genetic differentiation across populations that is maintained in the presence of gene flow is a hallmark of spatially varying selection. In Drosophila melanogaster, the latitudinal clines across the eastern coasts of Australia and North America appear to be examples of this type of selection, with recent studies showing that a substantial portion of the D. melanogaster genome exhibits allele frequency differentiation with respect to latitude on both continents. As of yet there has been no genome-wide examination of differentiated copy-number variants (CNVs) in these geographic regions, despite their potential importance for phenotypic variation in Drosophila and other taxa. Here, we present an analysis of geographic variation in CNVs in D. melanogaster. We also present the first genomic analysis of geographic variation for copy-number variation in the sister species, D. simulans, in order to investigate patterns of parallel evolution in these close relatives. In D. melanogaster we find hundreds of CNVs, many of which show parallel patterns of geographic variation on both continents, lending support to the idea that they are influenced by spatially varying selection. These findings support the idea that polymorphic CNVs contribute to local adaptation in D. melanogaster. In contrast, we find very few CNVs in D. simulans that are geographically differentiated in parallel on both continents, consistent with earlier work suggesting that clinal patterns are weaker in this species. PMID:26809315

  4. Optimal control applied to the control strategy of a parallel hybrid vehicle; Commande optimale appliquee a la strategie de commande d'un vehicule hybride parallele

    Energy Technology Data Exchange (ETDEWEB)

    Delprat, S.; Guerra, T.M. [Universite de Valenciennes et du Hainaut-Cambresis, LAMIH UMR CNRS 8530, 59 - Valenciennes (France); Rimaux, J. [PSA Peugeot Citroen, DRIA/SARA/EEES, 78 - Velizy Villacoublay (France); Paganelli, G. [Center for Automotive Research, Ohio (United States)

    2002-07-01

Control strategies are algorithms that calculate the power split between the engine and the motor of a hybrid vehicle in order to minimize fuel consumption and/or emissions. Some algorithms are devoted to real-time application whereas others are designed for global optimization in simulation. The latter provide solutions which can be used to evaluate the performance of a given hybrid vehicle or a given real-time control strategy. The control strategy problem is first formulated as a constrained optimization problem. A solution based on optimal control is proposed. Results are given for the European Normalized Cycle and a parallel single-shaft hybrid vehicle built at the LAMIH (France). (authors)

  5. Decomposition and parallelization strategies for solving large-scale MDO problems

    Energy Technology Data Exchange (ETDEWEB)

    Grauer, M.; Eschenauer, H.A. [Research Center for Multidisciplinary Analyses and Applied Structural Optimization, FOMAAS, Univ. of Siegen (Germany)

    2007-07-01

During previous years, structural optimization has been recognized as a useful tool within the disciplines of engineering and economics. However, the optimization of large-scale systems or structures is impeded by an immense solution effort. This was the reason to start a joint research and development (R and D) project between the Institute of Mechanics and Control Engineering and the Information and Decision Sciences Institute within the Research Center for Multidisciplinary Analyses and Applied Structural Optimization (FOMAAS) on cluster computing for parallel and distributed solution of multidisciplinary optimization (MDO) problems based on the OpTiX-Workbench. Here the focus of attention will be put on coarse-grained parallelization and its implementation on clusters of workstations. A further point of emphasis was laid on the development of a parallel decomposition strategy called PARDEC, for the solution of very complex optimization problems which cannot be solved efficiently by sequential integrated optimization. The use of the OpTiX-Workbench together with the FEM ground water simulation system FEFLOW is shown for a special water management problem. (orig.)

  6. Evolution of quantum and classical strategies on networks by group interactions

    International Nuclear Information System (INIS)

    Li Qiang; Chen Minyou; Iqbal, Azhar; Abbott, Derek

    2012-01-01

    In this paper, quantum strategies are introduced within evolutionary games in order to investigate the evolution of quantum and classical strategies on networks in the public goods game. Comparing the results of evolution on a scale-free network and a square lattice, we find that a quantum strategy outperforms the classical strategies, regardless of the network. Moreover, a quantum strategy dominates the population earlier in group interactions than it does in pairwise interactions. In particular, if the hub node in a scale-free network is occupied by a cooperator initially, the strategy of cooperation will prevail in the population. However, in other situations, a quantum strategy can defeat the classical ones and finally becomes the dominant strategy in the population. (paper)

  7. New strategy for eliminating zero-sequence circulating current between parallel operating three-level NPC voltage source inverters

    DEFF Research Database (Denmark)

    Li, Kai; Dong, Zhenhua; Wang, Xiaodong

    2018-01-01

    buses, that are operating in parallel. First, an equivalent model of ZSCC in a three-phase three-level NPC inverter paralleled system is developed. Second, on the basis of the analysis of the excitation source of ZSCCs, i.e., the difference in common mode voltages (CMVs) between paralleled inverters......, the ZCMV-PWM method is presented to reduce CMVs, and a simple electric circuit is adopted to control ZSCCs and neutral point potential. Finally, simulation and experiment are conducted to illustrate effectiveness of the proposed strategy. Results show that ZSCCs between paralleled inverters can...... be eliminated effectively under steady and dynamic states. Moreover, the proposed strategy exhibits the advantage of not requiring carrier synchronization. It can be utilized in inverters with different types of filter....

  8. Multilevel parallel strategy on Monte Carlo particle transport for the large-scale full-core pin-by-pin simulations

    International Nuclear Information System (INIS)

    Zhang, B.; Li, G.; Wang, W.; Shangguan, D.; Deng, L.

    2015-01-01

    This paper introduces the strategy of multilevel hybrid parallelism of the JCOGIN infrastructure for Monte Carlo particle transport in large-scale full-core pin-by-pin simulations. Particle parallelism, domain decomposition parallelism, and MPI/OpenMP parallelism are designed and implemented. In testing, JMCT demonstrates the parallel scalability of JCOGIN, reaching a parallel efficiency of 80% on 120,000 cores for the pin-by-pin computation of the BEAVRS benchmark. (author)

  9. Agent Based Simulation of Group Emotions Evolution and Strategy Intervention in Extreme Events

    Directory of Open Access Journals (Sweden)

    Bo Li

    2014-01-01

    Full Text Available Agent based simulation has become a prominent approach to the computational modeling and analysis of public emergency management in social science research. Group emotion evolution, information diffusion, and collective behavior selection make the study of extreme incidents a complex-systems problem, which requires new methods for incident management and strategy evaluation. This paper studies group emotion evolution and intervention strategy effectiveness using agent based simulation. Employing a computational experimentation methodology, we model group emotion evolution as a complex system and test the effects of three strategies. In addition, an events-chain model is proposed to capture the cumulative influence of temporally successive events. Each strategy is examined through three simulation experiments, including two constructed scenarios and a real case study. We show how the various strategies impact group emotion evolution in terms of complex emergence and cumulative emotional influence in extreme events. The paper thus provides an effective method for using agent-based simulation to study complex collective behavior evolution in the extreme incident, emergency, and security domains.

  10. Research on Taxi Driver Strategy Game Evolution with Carpooling Detour

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2018-01-01

    Full Text Available To address the problem of taxi carpooling detours, this paper studies driver strategy choice under carpooling detour. A model of taxi driver strategy evolution with carpooling detour is built based on prospect theory and evolutionary game theory. Stable driver strategies are analyzed with and without a complaint mechanism. The results show that a passenger complaint mechanism can effectively reduce the phenomenon of drivers refusing carpooling-detour passengers. When the probability of passenger complaint reaches a certain level, the stable strategy of drivers is to take carpooling-detour passengers. Meanwhile, limiting detour distance and easing traffic congestion can reduce the likelihood of refusals. These conclusions offer guidance for formulating taxi policy.

  11. Blackboxing: social learning strategies and cultural evolution.

    Science.gov (United States)

    Heyes, Cecilia

    2016-05-05

    Social learning strategies (SLSs) enable humans, non-human animals, and artificial agents to make adaptive decisions about when they should copy other agents, and who they should copy. Behavioural ecologists and economists have discovered an impressive range of SLSs, and explored their likely impact on behavioural efficiency and reproductive fitness while using the 'phenotypic gambit'; ignoring, or remaining deliberately agnostic about, the nature and origins of the cognitive processes that implement SLSs. Here I argue that this 'blackboxing' of SLSs is no longer a viable scientific strategy. It has contributed, through the 'social learning strategies tournament', to the premature conclusion that social learning is generally better than asocial learning, and to a deep puzzle about the relationship between SLSs and cultural evolution. The puzzle can be solved by recognizing that whereas most SLSs are 'planetary'--they depend on domain-general cognitive processes--some SLSs, found only in humans, are 'cook-like'--they depend on explicit, metacognitive rules, such as copy digital natives. These metacognitive SLSs contribute to cultural evolution by fostering the development of processes that enhance the exclusivity, specificity, and accuracy of social learning. © 2016 The Author(s).

  12. Blackboxing: social learning strategies and cultural evolution

    Science.gov (United States)

    Heyes, Cecilia

    2016-01-01

    Social learning strategies (SLSs) enable humans, non-human animals, and artificial agents to make adaptive decisions about when they should copy other agents, and who they should copy. Behavioural ecologists and economists have discovered an impressive range of SLSs, and explored their likely impact on behavioural efficiency and reproductive fitness while using the ‘phenotypic gambit’; ignoring, or remaining deliberately agnostic about, the nature and origins of the cognitive processes that implement SLSs. Here I argue that this ‘blackboxing' of SLSs is no longer a viable scientific strategy. It has contributed, through the ‘social learning strategies tournament', to the premature conclusion that social learning is generally better than asocial learning, and to a deep puzzle about the relationship between SLSs and cultural evolution. The puzzle can be solved by recognizing that whereas most SLSs are ‘planetary'—they depend on domain-general cognitive processes—some SLSs, found only in humans, are ‘cook-like'—they depend on explicit, metacognitive rules, such as copy digital natives. These metacognitive SLSs contribute to cultural evolution by fostering the development of processes that enhance the exclusivity, specificity, and accuracy of social learning. PMID:27069046

  13. Kinematic Identification of Parallel Mechanisms by a Divide and Conquer Strategy

    DEFF Research Database (Denmark)

    Durango, Sebastian; Restrepo, David; Ruiz, Oscar

    2010-01-01

    using the inverse calibration method. The identification poses are selected optimizing the observability of the kinematic parameters from a Jacobian identification matrix. With respect to traditional identification methods the main advantages of the proposed Divide and Conquer kinematic identification...... strategy are: (i) reduction of the kinematic identification computational costs, (ii) improvement of the numerical efficiency of the kinematic identification algorithm and, (iii) improvement of the kinematic identification results. The contributions of the paper are: (i) The formalization of the inverse...... calibration method as the Divide and Conquer strategy for the kinematic identification of parallel symmetrical mechanisms and, (ii) a new kinematic identification protocol based on the Divide and Conquer strategy. As an application of the proposed kinematic identification protocol the identification...

  14. Darwin's concepts in a test tube: parallels between organismal and in vitro evolution.

    Science.gov (United States)

    Díaz Arenas, Carolina; Lehman, Niles

    2009-02-01

    The evolutionary process as imagined by Darwin 150 years ago is evident not only in nature but also in the manner in which naked nucleic acids and proteins experience the "survival of the fittest" in the test tube during in vitro evolution. This review highlights some of the most apparent evolutionary patterns, such as directional selection, purifying selection, disruptive selection, and iterative evolution (recurrence), and draws parallels between what happens in the wild with whole organisms and what happens in the lab with molecules. Advances in molecular selection techniques, particularly with catalytic RNAs and DNAs, have accelerated in the last 20 years to the point where soon any sort of complex differential hereditary event that one can ascribe to natural populations will be observable in molecular populations, and exploitation of these events can even lead to practical applications in some cases.

  15. Torque Split Strategy for Parallel Hybrid Electric Vehicles with an Integrated Starter Generator

    OpenAIRE

    Fu, Zhumu; Gao, Aiyun; Wang, Xiaohong; Song, Xiaona

    2014-01-01

    This paper presents a torque split strategy for parallel hybrid electric vehicles with an integrated starter generator (ISG-PHEV) by using fuzzy logic control. By combining the efficiency map and the optimum torque curve of the internal combustion engine (ICE) with the state of charge (SOC) of the batteries, the torque split strategy is designed, which manages the ICE within its peak efficiency region. Taking the quantified ICE torque, the quantified SOC of the batteries, and the quantified I...

  16. The strategy of parallel approaches in projects with unforeseeable uncertainty: the Manhattan case in retrospect

    OpenAIRE

    Sylvain Lenfle

    2011-01-01

    International audience; This paper discusses the literature on the management of projects with unforeseeable uncertainty. Recent work demonstrates that, when confronted with unforeseeable uncertainties, managers can adopt either a learning, trial-and-error-based strategy, or a parallel approach. In the latter, different solutions are developed in parallel and the best one is chosen when enough information becomes available. Studying the case of the Manhattan Project, which historically exempl...

  17. Rapid parallel evolution overcomes global honey bee parasite.

    Science.gov (United States)

    Oddie, Melissa; Büchler, Ralph; Dahle, Bjørn; Kovacic, Marin; Le Conte, Yves; Locke, Barbara; de Miranda, Joachim R; Mondet, Fanny; Neumann, Peter

    2018-05-16

    In eusocial insect colonies nestmates cooperate to combat parasites, a trait called social immunity. However, social immunity failed for Western honey bees (Apis mellifera) when the ectoparasitic mite Varroa destructor switched hosts from Eastern honey bees (Apis cerana). This mite has since become the most severe threat to A. mellifera world-wide. Despite this, some isolated A. mellifera populations are known to survive infestations by means of natural selection, largely by suppressing mite reproduction, but the underlying mechanisms of this are poorly understood. Here, we show that a cost-effective social immunity mechanism has evolved rapidly and independently in four naturally V. destructor-surviving A. mellifera populations. Worker bees of all four 'surviving' populations uncapped/recapped worker brood cells more frequently and targeted mite-infested cells more effectively than workers in local susceptible colonies. Direct experiments confirmed the ability of uncapping/recapping to reduce mite reproductive success without sacrificing nestmates. Our results provide striking evidence that honey bees can overcome exotic parasites with simple qualitative and quantitative adaptive shifts in behaviour. Due to rapid, parallel evolution in four host populations this appears to be a key mechanism explaining survival of mite infested colonies.

  18. Cloud computing task scheduling strategy based on improved differential evolution algorithm

    Science.gov (United States)

    Ge, Junwei; He, Qian; Fang, Yiqiu

    2017-04-01

    To optimize cloud computing task scheduling, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is derived from it. The improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy and a dynamic mutation strategy to balance global and local search ability. A performance test was carried out on the CloudSim simulation platform; the experimental results show that the improved differential evolution algorithm can reduce task execution time and save user cost, achieving optimal scheduling of cloud computing tasks.
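    The abstract does not give the authors' improved variant, but the underlying idea can be sketched with a classic DE/rand/1/bin loop applied to a toy task-to-VM assignment, where the fitness is the makespan (completion time of the busiest VM). All names, parameters, and the real-coded encoding below are illustrative assumptions, not the paper's algorithm.

    ```python
    import random

    def makespan(assign, task_len, vm_speed):
        """Fitness: completion time of the most heavily loaded VM."""
        load = [0.0] * len(vm_speed)
        for t, v in enumerate(assign):
            load[v] += task_len[t] / vm_speed[v]
        return max(load)

    def de_schedule(task_len, vm_speed, pop=20, gens=100, F=0.5, CR=0.9, seed=1):
        """Classic DE/rand/1/bin on a real-coded task-to-VM assignment."""
        rng = random.Random(seed)
        n, m = len(task_len), len(vm_speed)
        # Decoding rounds and clamps each gene to a VM index.
        decode = lambda x: [min(m - 1, max(0, int(round(g)))) for g in x]
        P = [[rng.uniform(0, m - 1) for _ in range(n)] for _ in range(pop)]
        fit = [makespan(decode(x), task_len, vm_speed) for x in P]
        for _ in range(gens):
            for i in range(pop):
                a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
                trial = [P[a][k] + F * (P[b][k] - P[c][k])
                         if rng.random() < CR else P[i][k]
                         for k in range(n)]
                f = makespan(decode(trial), task_len, vm_speed)
                if f <= fit[i]:          # greedy one-to-one selection
                    P[i], fit[i] = trial, f
        best = min(range(pop), key=fit.__getitem__)
        return decode(P[best]), fit[best]
    ```

    For four equal tasks on two equal VMs, any balanced two-per-VM assignment yields the optimal makespan; the paper's dynamic selection and mutation strategies would replace the fixed F and CR above.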

  19. NEW CO-EVOLUTION STRATEGIES OF THIRD MILLENNIUM; METHODOLOGICAL ASPECT

    Directory of Open Access Journals (Sweden)

    E. K. Bulygo

    2006-01-01

    Full Text Available The paper is devoted to the application of co-evolution methodology to the social space. Principles of instability and non-linearity that are typical of contemporary natural science are used as the theoretical background of a new social methodology. The authors argue that the co-evolution strategy has a long pre-history in ancient oriental philosophy and manifests itself in forms of modern culture.

  20. Engine-start Control Strategy of P2 Parallel Hybrid Electric Vehicle

    Science.gov (United States)

    Xiangyang, Xu; Siqi, Zhao; Peng, Dong

    2017-12-01

    A smooth and fast engine-start process is important for parallel hybrid electric vehicles with an electric motor mounted in front of the transmission. However, engine-start control presents several challenges. First, the electric motor must simultaneously provide a stable driving torque to ensure drivability and a compensating torque to drag the engine up to ignition speed. Second, engine-start time is a trade-off control objective, because both fast start and smooth start have to be considered. To solve these problems, this paper first analyzes the resistance during the engine-start process and establishes a physical model in MATLAB/Simulink. A model-based coordinated control strategy among engine, motor, and clutch is then developed. Two basic control strategies, for fast start and smooth start, were studied. Simulation results showed that the control objectives were achieved by the given control strategies, which can meet different driver requirements.
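    The torque constraint described above, where the motor covers both the driver's demand and the engine cranking torque, can be sketched as a simple split rule. The function, signal names, and the fast/smooth decision below are illustrative assumptions, not the authors' coordinated control strategy.

    ```python
    def torque_split(t_driver_demand, t_crank, t_motor_max):
        """During engine start, the motor must supply the driver-demanded
        torque plus the clutch torque that drags the engine to ignition
        speed. If the sum exceeds the motor limit, drivability is kept by
        curtailing the cranking share (a slower, smoother start)."""
        if t_driver_demand + t_crank <= t_motor_max:
            return t_driver_demand, t_crank                       # fast start
        return t_driver_demand, max(0.0, t_motor_max - t_driver_demand)
    ```

    A real controller would additionally coordinate clutch slip and ignition timing; this sketch only captures the torque budget.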

  1. A general parallelization strategy for random path based geostatistical simulation methods

    Science.gov (United States)

    Mariethoz, Grégoire

    2010-07-01

    The size of simulation grids used for numerical models has increased by many orders of magnitude in recent years, and this trend is likely to continue. Efficient pixel-based geostatistical simulation algorithms have been developed, but for very large grids and complex spatial models the computational burden remains heavy. As cluster computers become widely available, using parallel strategies is a natural step for increasing the usable grid size and the complexity of the models. These strategies must exploit the possibilities offered by machines with a large number of processors. On such machines, the bottleneck is often the communication time between processors. We present a strategy that distributes grid nodes among all available processors while minimizing communication and latency times. It centralizes the simulation on a master processor that calls the other, slave, processors as if they were functions, each simulating one node at a time. The key is to decouple the sending and receiving operations to avoid synchronization. Centralization allows a conflict management system ensuring that nodes being simulated simultaneously do not interfere in terms of neighborhood. The strategy is computationally efficient and versatile enough to be applicable to all random-path-based simulation methods.
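    The master/slave scheme with decoupled dispatch and neighborhood conflict management can be sketched on a 1-D toy grid using a thread pool as a stand-in for the slave processors. The deterministic `simulate_node`, the neighborhood test, and the radius are all placeholder assumptions; a real geostatistical kernel would draw a value conditioned on already-simulated neighbors.

    ```python
    import random
    from concurrent.futures import ThreadPoolExecutor

    def simulate_node(node):
        # Stand-in for simulating one grid node; deterministic for illustration.
        return node * node

    def parallel_random_path(grid_size, radius=2, workers=4, seed=0):
        rng = random.Random(seed)
        path = list(range(grid_size))
        rng.shuffle(path)                      # the random simulation path
        futures = {}
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for node in path:
                # Conflict management: before dispatching, wait for any
                # in-flight node inside this node's neighbourhood
                # (|a - b| < radius on this 1-D toy grid).
                for other, fut in list(futures.items()):
                    if abs(other - node) < radius and not fut.done():
                        fut.result()
                # Dispatch without waiting: sending is decoupled from receiving.
                futures[node] = pool.submit(simulate_node, node)
            return {n: f.result() for n, f in futures.items()}
    ```

    The master only blocks when a genuine neighborhood conflict exists, so non-interfering nodes are simulated concurrently, mirroring the strategy's goal of minimizing synchronization.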

  2. Automatic Clustering Using FSDE-Forced Strategy Differential Evolution

    Science.gov (United States)

    Yasid, A.

    2018-01-01

    Clustering analysis is important in data mining for unsupervised data, because no adequate prior knowledge is available. One important task is defining the number of clusters without user involvement, known as automatic clustering. This study aims to acquire the number of clusters automatically using forced strategy differential evolution (AC-FSDE). Two mutation parameters, namely a constant parameter and a variable parameter, are employed to boost differential evolution performance. Four well-known benchmark datasets were used to evaluate the algorithm, and the result is compared with other state-of-the-art automatic clustering methods. The experimental results show that AC-FSDE is better than or competitive with other existing automatic clustering algorithms.

  3. Strategy evolution driven by switching probabilities in structured multi-agent systems

    Science.gov (United States)

    Zhang, Jianlei; Chen, Zengqiang; Li, Zhiqi

    2017-10-01

    The evolutionary mechanism driving the commonly seen cooperation among unrelated individuals is puzzling. Related models for evolutionary games on graphs traditionally assume that players imitate their successful neighbours with higher benefits. Notably, an implicit assumption here is that players are always able to acquire the required pay-off information. To relax this restrictive assumption, a contact-based model has been proposed, where switching probabilities between strategies drive the strategy evolution. However, the explicit and quantified relation between a player's switching probability for her strategies and the number of her neighbours remains unknown. This is especially a key point in heterogeneously structured systems, where players may differ in the numbers of their neighbours. Focusing on this, here we present an augmented model by introducing an attenuation coefficient and evaluate its influence on the evolution dynamics. Results show that the individual influence on others is negatively correlated with the contact numbers specified by the network topologies. Results further provide the conditions under which the coexisting strategies can be calculated analytically.

  4. A new virtual-flux-vector based droop control strategy for parallel connected inverters in microgrids

    DEFF Research Database (Denmark)

    Hu, Jiefeng; Zhu, Jianguo; Qu, Yanqing

    2013-01-01

    Voltage and frequency droop method is commonly used in microgrids to achieve proper autonomous power sharing without rely on intercommunication systems. This paper proposes a new control strategy for parallel connected inverters in microgrid applications by drooping the flux instead of the invert...

  5. Silencing, positive selection and parallel evolution: busy history of primate cytochromes C.

    Science.gov (United States)

    Pierron, Denis; Opazo, Juan C; Heiske, Margit; Papper, Zack; Uddin, Monica; Chand, Gopi; Wildman, Derek E; Romero, Roberto; Goodman, Morris; Grossman, Lawrence I

    2011-01-01

    Cytochrome c (cyt c) participates in two crucial cellular processes, energy production and apoptosis, and unsurprisingly is a highly conserved protein. However, previous studies have reported for the primate lineage (i) loss of the paralogous testis isoform, (ii) an acceleration and then a deceleration of the amino acid replacement rate of the cyt c somatic isoform, and (iii) atypical biochemical behavior of human cyt c. To gain insight into the cause of these major evolutionary events, we have retraced the history of cyt c loci among primates. For testis cyt c, all primate sequences examined carry the same nonsense mutation, which suggests that silencing occurred before the primates diversified. For somatic cyt c, maximum parsimony, maximum likelihood, and Bayesian phylogenetic analyses yielded the same tree topology. The evolutionary analyses show that a fast accumulation of non-synonymous mutations (suggesting positive selection) occurred specifically on the anthropoid lineage root and then continued in parallel on the early catarrhini and platyrrhini stems. Analysis of evolutionary changes using the 3D structure suggests they are focused on the respiratory chain rather than on apoptosis or other cyt c functions. In agreement with previous biochemical studies, our results suggest that silencing of the cyt c testis isoform could be linked with the decrease of primate reproduction rate. Finally, the evolution of cyt c in the two sister anthropoid groups leads us to propose that somatic cyt c evolution may be related both to COX evolution and to the convergent brain and body mass enlargement in these two anthropoid clades.

  6. Silencing, positive selection and parallel evolution: busy history of primate cytochromes C.

    Directory of Open Access Journals (Sweden)

    Denis Pierron

    Full Text Available Cytochrome c (cyt c) participates in two crucial cellular processes, energy production and apoptosis, and unsurprisingly is a highly conserved protein. However, previous studies have reported for the primate lineage (i) loss of the paralogous testis isoform, (ii) an acceleration and then a deceleration of the amino acid replacement rate of the cyt c somatic isoform, and (iii) atypical biochemical behavior of human cyt c. To gain insight into the cause of these major evolutionary events, we have retraced the history of cyt c loci among primates. For testis cyt c, all primate sequences examined carry the same nonsense mutation, which suggests that silencing occurred before the primates diversified. For somatic cyt c, maximum parsimony, maximum likelihood, and Bayesian phylogenetic analyses yielded the same tree topology. The evolutionary analyses show that a fast accumulation of non-synonymous mutations (suggesting positive selection) occurred specifically on the anthropoid lineage root and then continued in parallel on the early catarrhini and platyrrhini stems. Analysis of evolutionary changes using the 3D structure suggests they are focused on the respiratory chain rather than on apoptosis or other cyt c functions. In agreement with previous biochemical studies, our results suggest that silencing of the cyt c testis isoform could be linked with the decrease of primate reproduction rate. Finally, the evolution of cyt c in the two sister anthropoid groups leads us to propose that somatic cyt c evolution may be related both to COX evolution and to the convergent brain and body mass enlargement in these two anthropoid clades.

  7. a Predator-Prey Model Based on the Fully Parallel Cellular Automata

    Science.gov (United States)

    He, Mingfeng; Ruan, Hongbo; Yu, Changliang

    We present a predator-prey lattice model containing moveable wolves and sheep, which are characterized by Penna double bit strings. Sexual reproduction and child-care strategies are considered. To implement this model efficiently, we build a fully parallel cellular automaton based on a new definition of the neighborhood. We show the roles played by the initial population densities, the mutation rate, and the linear size of the lattice in the evolution of this model.

  8. Role of environmental variability in the evolution of life history strategies.

    Science.gov (United States)

    Hastings, A; Caswell, H

    1979-09-01

    We reexamine the role of environmental variability in the evolution of life history strategies. We show that normally distributed deviations in the quality of the environment should lead to normally distributed deviations in the logarithm of year-to-year survival probabilities, which leads to interesting consequences for the evolution of annual and perennial strategies and reproductive effort. We also examine the effects of using differing criteria to determine the outcome of selection. Some predictions of previous theory are reversed, allowing distinctions between r and K theory and a theory based on variability. However, these distinctions require information about both the environment and the selection process not required by current theory.
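    The central claim, that normally distributed deviations in environment quality produce normally distributed deviations in log survival (i.e., lognormal survival), can be checked numerically. The parameter values below are arbitrary illustrations, not from the paper; the demo also shows the standard consequence that long-run growth tracks the geometric mean of survival, which lies below the arithmetic mean.

    ```python
    import math
    import random
    import statistics

    rng = random.Random(42)
    # Environment quality: normally distributed deviations around a mean.
    env = [rng.gauss(0.0, 0.25) for _ in range(20000)]
    # Log of year-to-year survival responds linearly to environment quality,
    # so survival itself is lognormally distributed.
    log_s = [-0.5 + e for e in env]
    surv = [math.exp(x) for x in log_s]

    # In a variable environment, long-run growth is governed by the geometric
    # mean of survival, which is strictly below the arithmetic mean.
    geo = math.exp(statistics.mean(log_s))
    arith = statistics.mean(surv)
    ```

    The gap between `geo` and `arith` grows with environmental variance, which is why variability-based theory can reverse predictions made from mean survival alone.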

  9. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Full Text Available Dimensionality reduction refers to a set of mathematical techniques used to reduce complexity of the original high-dimensional data, while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.

  10. Reliability optimization of series–parallel systems with mixed redundancy strategy in subsystems

    International Nuclear Information System (INIS)

    Abouei Ardakan, Mostafa; Zeinal Hamadani, Ali

    2014-01-01

    Traditionally in the redundancy allocation problem (RAP), it is assumed that redundant components are used according to predefined active or standby strategies. Recently, some studies have considered situations in which both active and standby strategies can be used in a single system. However, these studies assume that the redundancy strategy for each subsystem is either active or standby, and determine the best strategy for each subsystem through a suitable mathematical model. As an extension of this assumption, a novel strategy that combines the traditional active and standby strategies is introduced. The new strategy, called the mixed strategy, uses both active and cold-standby strategies in one subsystem simultaneously. The problem is therefore to determine the component type, redundancy level, and number of active and cold-standby units for each subsystem in order to maximize system reliability. To make the model more practical, the problem is formulated with imperfect switching of cold-standby redundant components and a k-Erlang time-to-failure (TTF) distribution. As the optimization of RAP belongs to the NP-hard class of problems, a genetic algorithm (GA) is developed. The new strategy and the proposed GA are applied to a well-known test problem from the literature, leading to interesting results. - Highlights: • In this paper the redundancy allocation problem (RAP) for a series–parallel system is considered. • Traditionally there are two main strategies for redundant components, namely active and standby. • In this paper a new redundancy strategy, called the “mixed” redundancy strategy, is introduced. • Computational experiments demonstrate that implementing the new strategy leads to interesting results.
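    The textbook building blocks behind this formulation can be sketched for the simplest case: active (hot) redundancy, cold standby with perfect switching and exponential units, and a series chain of subsystems. This is not the paper's mixed-strategy model (which uses k-Erlang failure times and imperfect switching); the functions below are standard reliability identities for illustration.

    ```python
    import math

    def active(r, n):
        """Active (hot) redundancy: the subsystem fails only if all n
        identical units (each with reliability r) fail."""
        return 1.0 - (1.0 - r) ** n

    def cold_standby(lam, t, n):
        """Cold standby with perfect switching and exponential units of rate
        lam: the subsystem survives to time t if fewer than n failures have
        occurred, a Poisson partial sum."""
        return sum((lam * t) ** j * math.exp(-lam * t) / math.factorial(j)
                   for j in range(n))

    def series_system(subsystem_rels):
        """Series-parallel system reliability: all subsystems must work."""
        p = 1.0
        for r in subsystem_rels:
            p *= r
        return p
    ```

    With identical exponential units, cold standby outperforms active redundancy at equal unit count, which is part of why mixing the two strategies within one subsystem is an attractive design variable.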

  11. Parallel S/sub n/ iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (S/sub n/) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional S/sub n/ transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial S/sub n/ algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial

  12. The Evolution of Diapsid Reproductive Strategy with Inferences about Extinct Taxa.

    Directory of Open Access Journals (Sweden)

    Jason R Moore

    Full Text Available Diapsids show an extremely wide range of reproductive strategies. Offspring may receive no parental care, care from only one sex, care from both parents, or care under more complex regimes. Young may vary from independent, super-precocial hatchlings to altricial neonates needing much care before leaving the nest. Parents can invest heavily in a few young, or less so in a larger number. Here we examine the evolution of these traits across a composite phylogeny spanning the extant diapsids and including the limited number of extinct taxa for which reproductive strategies can be well constrained. Generalized estimating equation (GEE)-based phylogenetic comparative methods demonstrate the influences of body mass, parental care strategy and hatchling maturity on clutch volume across the diapsids. The influence of polygamous reproduction is not important despite a large sample size. Applying the results of these models to the dinosaurs supports the hypothesis of paternal care (male only) in derived non-avian theropods, previously suggested based on simpler analyses. These data also suggest that sauropodomorphs did not care for their young. The evolution of parental-care occurs in an almost linear series of transitions. Paternal care rarely gives rise to other care strategies. Where hatchling condition changes, diapsids show an almost unidirectional tendency of evolution towards increased altriciality. Transitions to social monogamy from the ancestral state in diapsids, where both sexes are polygamous, are common. In contrast, once evolved, polygyny and polyandry are very evolutionarily stable. Polygyny and maternal care correlate, as do polyandry and paternal care. Ancestral-character estimation (ACE) of these care strategies with the character transition likelihoods estimated from the original data gives good confidence at most important nodes. These analyses suggest that the basalmost diapsids had no parental care. Crocodilians independently evolved

  13. PARALLEL EVOLUTION OF QUASI-SEPARATRIX LAYERS AND ACTIVE REGION UPFLOWS

    Energy Technology Data Exchange (ETDEWEB)

    Mandrini, C. H.; Cristiani, G. D.; Nuevo, F. A.; Vásquez, A. M. [Instituto de Astronomía y Física del Espacio (IAFE), UBA-CONICET, CC. 67, Suc. 28 Buenos Aires, 1428 (Argentina); Baker, D.; Driel-Gesztelyi, L. van [UCL-Mullard Space Science Laboratory, Holmbury St. Mary, Dorking, Surrey, RH5 6NT (United Kingdom); Démoulin, P.; Pick, M. [Observatoire de Paris, LESIA, UMR 8109 (CNRS), F-92195 Meudon Principal Cedex (France); Vargas Domínguez, S. [Observatorio Astronómico Nacional, Universidad Nacional de Colombia, Bogotá (Colombia)

    2015-08-10

    Persistent plasma upflows were observed with Hinode’s EUV Imaging Spectrometer (EIS) at the edges of active region (AR) 10978 as it crossed the solar disk. We analyze the evolution of the photospheric magnetic and velocity fields of the AR, model its coronal magnetic field, and compute the location of magnetic null points and quasi-separatrix layers (QSLs) in search of the origin of the EIS upflows. Magnetic reconnection at the computed null points cannot explain all of the observed EIS upflow regions. However, the EIS upflows and QSLs are found to evolve in parallel, both temporally and spatially. Sections of two sets of QSLs, called outer and inner, are found to be associated with EIS upflow streams having different characteristics. The reconnection process in the outer QSLs is forced by a large-scale photospheric flow pattern, which is present in the AR for several days. We propose a scenario in which upflows are observed provided that a large enough asymmetry in plasma pressure exists between the pre-reconnection loops, and for as long as a photospheric forcing is at work. A similar mechanism operates in the inner QSLs; in this case, it is forced by the emergence and evolution of bipoles between the two main AR polarities. Our findings provide strong support for the results of previous individual case studies investigating the role of magnetic reconnection at QSLs as the origin of the upflowing plasma. Furthermore, we propose that persistent reconnection along QSLs not only drives the EIS upflows but is also responsible for the continuous metric radio noise storm observed in AR 10978 along its disk transit by the Nançay Radio Heliograph.

  14. Experimental evolution in biofilm populations

    Science.gov (United States)

    Steenackers, Hans P.; Parijs, Ilse; Foster, Kevin R.; Vanderleyden, Jozef

    2016-01-01

    Biofilms are a major form of microbial life in which cells form dense, surface-associated communities that can persist for many generations. The long life of biofilm communities means that they can be strongly shaped by evolutionary processes. Here, we review the experimental study of evolution in biofilm communities. We first provide an overview of the different experimental models used to study biofilm evolution and their associated advantages and disadvantages. We then illustrate the vast amount of diversification observed during biofilm evolution, and we discuss (i) potential ecological and evolutionary processes behind the observed diversification, (ii) recent insights into the genetics of adaptive diversification, (iii) the striking degree of parallelism between evolution experiments and real-life biofilms and (iv) potential consequences of diversification. In the second part, we discuss the insights provided by evolution experiments into how biofilm growth and structure can promote cooperative phenotypes. Overall, our analysis points to an important role of biofilm diversification and cooperation in bacterial survival and productivity. A deeper understanding of both processes is of key importance for designing improved antimicrobial strategies and diagnostic techniques. PMID:26895713

  15. Host-parasite coevolution can promote the evolution of seed banking as a bet-hedging strategy.

    Science.gov (United States)

    Verin, Mélissa; Tellier, Aurélien

    2018-04-20

    Seed (egg) banking is a common bet-hedging strategy maximizing the fitness of organisms facing environmental unpredictability by the delayed emergence of offspring. Yet, this condition often requires fast and drastic stochastic shifts between good and bad years. We hypothesize that the host seed banking strategy can evolve in response to coevolution with parasites because the coevolutionary cycles promote a gradually changing environment over longer times than seed persistence. We study the evolution of host germination fraction as a quantitative trait using both pairwise competition and multiple mutant competition methods, while the germination locus can be genetically linked or unlinked with the host locus under coevolution. In a gene-for-gene model of coevolution, hosts evolve a seed bank strategy under unstable coevolutionary cycles promoted by moderate to high costs of resistance or strong disease severity. Moreover, when assuming genetic linkage between coevolving and germination loci, the resistant genotype always evolves seed banking in contrast to susceptible hosts. Under a matching-allele interaction, both hosts' genotypes exhibit the same seed banking strategy irrespective of the genetic linkage between loci. We suggest host-parasite coevolution as an additional hypothesis for the evolution of seed banking as a temporal bet-hedging strategy. © 2018 The Author(s). Evolution © 2018 The Society for the Study of Evolution.

  16. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...
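
The space blow-up described above is easy to see outside NESL as well. A sketch in Python (not NESL; both function names are ours) contrasts a fully materializing evaluation of naive matrix multiplication, which holds all n^3 scalar products at once, with a streamed evaluation that folds each product into the result as it is produced:

```python
def matmul_materialized(a, b):
    n = len(a)
    # All n^3 scalar products exist at the same time: O(n^3) space.
    products = [(i, j, a[i][k] * b[k][j])
                for i in range(n) for j in range(n) for k in range(n)]
    c = [[0] * n for _ in range(n)]
    for i, j, p in products:
        c[i][j] += p
    return c

def matmul_streamed(a, b):
    n = len(a)
    c = [[0] * n for _ in range(n)]
    # Each product is generated and consumed immediately: O(n^2) space.
    for i in range(n):
        for j in range(n):
            c[i][j] = sum(a[i][k] * b[k][j] for k in range(n))
    return c
```

Both compute the same result; only the peak memory footprint differs, which is the tension the thesis addresses.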

  17. Battery parameterisation based on differential evolution via a boundary evolution strategy

    Science.gov (United States)

    Yang, Guangya

    2014-01-01

    Battery modelling has received attention in the electrical engineering field following the current development of renewable energy and the electrification of transportation. The establishment of an equivalent circuit model of a battery requires data preparation and parameterisation. Moreover, as the equivalent circuit model is an abstract map of the battery's electric characteristics, determining the possible ranges of its parameters can be a challenging task. In this paper, an efficient yet easy-to-implement method is proposed to parameterise the equivalent circuit model of batteries utilising the advances of evolutionary algorithms (EAs). Differential evolution (DE) is selected and modified to parameterise an equivalent circuit model of lithium-ion batteries. A boundary evolution strategy (BES) is developed and incorporated into the DE to update the parameter boundaries during the parameterisation. The method can parameterise the model without extensive data preparation. In addition, the approach can also estimate the initial state of charge (SOC) and the available capacity. The efficiency of the approach is verified on two battery packs, one an 8-cell battery module and the other a pack from an electric vehicle.
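
The paper's exact algorithm is not reproduced here, but the idea of coupling DE with a boundary-update step can be sketched in Python. All names, rates, and the 5%-proximity heuristic below are our own illustrative choices, not the published BES:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9,
                           generations=100, widen=1.2, seed=1):
    """Minimal DE/rand/1/bin with a crude boundary-evolution step:
    if the best individual presses against a bound, that bound is
    widened. Purely illustrative, not the algorithm from the paper."""
    rng = random.Random(seed)
    lo, hi = [list(t) for t in zip(*bounds)]
    dim = len(bounds)
    pop = [[rng.uniform(lo[d], hi[d]) for d in range(dim)]
           for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = []
            for d in range(dim):
                if rng.random() < CR:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])
                else:
                    v = pop[i][d]
                trial.append(min(max(v, lo[d]), hi[d]))  # clip to bounds
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
        best = pop[fit.index(min(fit))]
        for d in range(dim):  # widen any bound the best solution is near
            span = hi[d] - lo[d]
            if best[d] - lo[d] < 0.05 * span:
                lo[d] -= (widen - 1) * span
            if hi[d] - best[d] < 0.05 * span:
                hi[d] += (widen - 1) * span
    i = fit.index(min(fit))
    return pop[i], fit[i]
```

In a battery context, `f` would return the error between measured and simulated terminal voltage for a candidate parameter vector.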

  18. Parallel evolution of TCP and B-class genes in Commelinaceae flower bilateral symmetry

    Directory of Open Access Journals (Sweden)

    Preston Jill C

    2012-03-01

    Full Text Available Abstract Background Flower bilateral symmetry (zygomorphy) has evolved multiple times independently across angiosperms and is correlated with increased pollinator specialization and speciation rates. Functional and expression analyses in distantly related core eudicots and monocots implicate independent recruitment of class II TCP genes in the evolution of flower bilateral symmetry. Furthermore, available evidence suggests that monocot flower bilateral symmetry might also have evolved through changes in B-class homeotic MADS-box gene function. Methods In order to test the non-exclusive hypotheses that changes in TCP and B-class gene developmental function underlie flower symmetry evolution in the monocot family Commelinaceae, we compared expression patterns of teosinte branched1 (TB1)-like, DEFICIENS (DEF)-like, and GLOBOSA (GLO)-like genes in the morphologically distinct bilaterally symmetrical flowers of Commelina communis and Commelina dianthifolia, and the radially symmetrical flowers of Tradescantia pallida. Results Expression data demonstrate that TB1-like genes are asymmetrically expressed in tepals of bilaterally symmetrical Commelina, but not radially symmetrical Tradescantia, flowers. Furthermore, DEF-like genes are expressed in the showy inner tepals, staminodes and stamens of all three species, but not in the distinct outer tepal-like ventral inner tepals of C. communis. Conclusions Together with other studies, these data suggest parallel recruitment of TB1-like genes in the independent evolution of flower bilateral symmetry at early stages of Commelina flower development, and the later-stage homeotic transformation of C. communis inner tepals into outer tepals through the loss of DEF-like gene expression.

  19. A Parallel Energy-Sharing Control Strategy for Fuel Cell Hybrid Vehicle

    Directory of Open Access Journals (Sweden)

    Nik Rumzi Nik Idris

    2011-08-01

    Full Text Available This paper presents a parallel energy-sharing control strategy for fuel cell hybrid vehicles (FCHVs). The hybrid source discussed consists of a fuel cell (FC) generator and energy storage units (ESUs) composed of battery and ultracapacitor (UC) modules. A direct current (DC) bus is used to interface between the energy sources and the electric vehicle (EV) propulsion system (loads). The energy sources are connected to the DC bus through power electronic converters. A total of six control loops are designed in the supervisory system in order to regulate the DC bus voltage, control the current flow, and monitor the state of charge (SOC) of each energy storage device at the same time. Proportional-plus-integral (PI) controllers are employed to regulate the output of each control loop with respect to its reference signal. The proposed energy control system is simulated in the MATLAB/Simulink environment. Results indicate that the proposed parallel energy-sharing control system can meet the vehicle traction demand while preventing the FC and battery from being overstressed.
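
One of the six loops, regulation of the DC bus voltage with a PI controller, can be sketched in Python. The first-order bus model, gains, and numbers below are invented for illustration and are not from the paper:

```python
def simulate_pi_bus(v_ref=400.0, v0=350.0, kp=2.0, ki=8.0,
                    dt=0.001, steps=5000):
    """Discrete PI loop driving a toy first-order DC-bus model.
    The plant model and gains are illustrative, not from the paper."""
    v, integ = v0, 0.0
    for _ in range(steps):
        err = v_ref - v
        integ += err * dt                # integral of the voltage error
        u = kp * err + ki * integ        # commanded net current into the bus
        v += (u - 0.1 * (v - v0)) * dt   # toy bus dynamics with a load term
    return v
```

The integral term removes the steady-state error left by the constant load, which is why PI rather than pure P control is used on the bus.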

  20. Risk evaluation mitigation strategies: the evolution of risk management policy.

    Science.gov (United States)

    Hollingsworth, Kristen; Toscani, Michael

    2013-04-01

    The United States Food and Drug Administration (FDA) has the primary regulatory responsibility to ensure that medications are safe and effective both prior to drug approval and while the medication is being actively marketed by manufacturers. The responsibility for safe medications prior to marketing was signed into law in 1938 under the Federal Food, Drug, and Cosmetic Act; however, a significant risk management evolution has taken place since 1938. Additional federal rules, entitled the Food and Drug Administration Amendments Act, were established in 2007 and extended the government's oversight through the addition of a Risk Evaluation and Mitigation Strategy (REMS) for certain drugs. REMS is a mandated strategy to manage a known or potentially serious risk associated with a medication or biological product. Reasons for this extension of oversight were driven primarily by the FDA's movement to ensure that patients and providers are better informed of drug therapies and their specific benefits and risks prior to initiation. This article provides an historical perspective of the evolution of medication risk management policy and includes a review of REMS programs, an assessment of the positive and negative aspects of REMS, and provides suggestions for planning and measuring outcomes. In particular, this publication presents an overview of the evolution of the REMS program and its implications.

  1. Improvement of remote monitoring on water quality in a subtropical reservoir by incorporating grammatical evolution with parallel genetic algorithms into satellite imagery.

    Science.gov (United States)

    Chen, Li; Tan, Chih-Hung; Kao, Shuh-Ji; Wang, Tai-Sheng

    2008-01-01

    A parallel GEGA was constructed by incorporating grammatical evolution (GE) into a parallel genetic algorithm (GA) to improve reservoir water quality monitoring based on remote sensing images. A cruise was conducted to ground-truth chlorophyll-a (Chl-a) concentrations longitudinally along the Feitsui Reservoir, the primary water supply for Taipei City in Taiwan. Empirical functions with multiple spectral parameters from Landsat 7 Enhanced Thematic Mapper (ETM+) data were constructed. GE, an evolutionary automatic-programming system, automatically discovers complex nonlinear mathematical relationships among the observed Chl-a concentrations and the remote-sensed imagery. A GA was then used with GE to optimize the appropriate function type. Several parallel subpopulations were processed to enhance search efficiency during the optimization procedure with the GA. Compared with a traditional linear multiple regression (LMR), the parallel GEGA performed better than the LMR model, with lower estimation errors.
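
The core of grammatical evolution is the genotype-to-phenotype mapping: each integer codon, taken modulo the number of productions available for the leftmost non-terminal, selects a grammar rule. A minimal Python sketch with a toy grammar (not the spectral-parameter grammar used in the study):

```python
def ge_map(genome, max_expansions=50):
    """Map an integer genome to an expression string via grammatical
    evolution. Each codon (mod the number of productions for the
    leftmost non-terminal) picks the production. Toy grammar only."""
    grammar = {
        "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
        "<op>":   [["+"], ["*"]],
        "<var>":  [["x"], ["1.0"]],
    }
    symbols = ["<expr>"]   # sentential form, leftmost symbol first
    out, i = [], 0
    for _ in range(max_expansions):
        if not symbols:
            break
        sym = symbols.pop(0)
        if sym not in grammar:           # terminal: emit it
            out.append(sym)
            continue
        rules = grammar[sym]
        choice = rules[genome[i % len(genome)] % len(rules)]  # codon wraps
        i += 1
        symbols = list(choice) + symbols
    return "".join(out) if not symbols else None  # None: mapping incomplete
```

A GA then evolves the integer genomes; the mapped expressions are scored against the observed Chl-a data.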

  2. A software for parameter optimization with Differential Evolution Entirely Parallel method

    Directory of Open Access Journals (Sweden)

    Konstantin Kozlov

    2016-08-01

    Full Text Available Summary. The Differential Evolution Entirely Parallel (DEEP) package is software for finding unknown real and integer parameters in dynamical models of biological processes by minimizing one or even several objective functions that measure the deviation of the model solution from data. Numerical solutions provided by the most efficient global optimization methods are often problem-specific and cannot be easily adapted to other tasks. In contrast, DEEP allows a user to describe both the mathematical model and the objective function in any programming language, such as R, Octave or Python, among others. Being implemented in C, DEEP demonstrates performance as good as that of the top three methods from the CEC-2014 (Competition on Evolutionary Computation) benchmark and has been successfully applied to several biological problems. Availability. The DEEP method is open-source, free software distributed under the terms of the GPL licence, version 3. The sources are available at http://deepmethod.sourceforge.net/ and binary packages for Fedora GNU/Linux are provided for the RPM package manager at https://build.opensuse.org/project/repositories/home:mackoel:compbio.

  3. Badlands: A parallel basin and landscape dynamics model

    Directory of Open Access Journals (Sweden)

    T. Salles

    2016-01-01

    Full Text Available Over more than three decades, a number of numerical landscape evolution models (LEMs have been developed to study the combined effects of climate, sea-level, tectonics and sediments on Earth surface dynamics. Most of them are written in efficient programming languages, but often cannot be used on parallel architectures. Here, I present a LEM which ports a common core of accepted physical principles governing landscape evolution into a distributed memory parallel environment. Badlands (acronym for BAsin anD LANdscape DynamicS is an open-source, flexible, TIN-based landscape evolution model, built to simulate topography development at various space and time scales.

  4. Novel Differential Current Control Strategy Based on a Modified Three-Level SVPWM for Two Parallel-Connected Inverters

    DEFF Research Database (Denmark)

    Zorig, Abdelmalik; Barkat, Said; Belkheiri, Mohammed

    2017-01-01

    Recently, parallel inverters have been investigated to provide multilevel characteristics in addition to their advantage of increasing power system capacity, reliability, and efficiency. However, the issue of differential current imbalance remains a challenge in parallel inverter operation. The distribution of switching vectors of the resulting multilevel topology has a certain degree of self-balancing of the differential currents. Nevertheless, this property alone is not sufficient to maintain balanced differential currents in practical applications. This paper proposes a closed-loop differential current control method that introduces a control variable adjusting the dwell time of the selected switching vectors, thus keeping the differential currents balanced without affecting the overall system performance. The control strategy, including the distribution of the switching sequence, selection...

  5. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated-data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods, such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination, are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
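
The spatial decomposition mentioned above can be illustrated in miniature: split a 1-D box into slabs, give each slab its owned particles plus a ghost halo one cutoff wide, and let each slab count exactly the pairs whose leftmost particle it owns. A Python sketch (toy 1-D, non-periodic; our own, not from the review):

```python
def pairs_serial(xs, rc):
    """Reference: all particle pairs closer than the cutoff rc."""
    n = len(xs)
    return sorted((i, j) for i in range(n) for j in range(i + 1, n)
                  if abs(xs[i] - xs[j]) < rc)

def pairs_spatial(xs, rc, nslabs, box):
    """Spatial decomposition: each slab owns the particles inside it and
    sees a ghost halo of width rc to its right. A pair is counted only
    by the slab that owns its leftmost particle, so no pair is missed
    or double-counted."""
    w = box / nslabs
    found = set()
    for s in range(nslabs):              # each iteration = one 'rank'
        lo, hi = s * w, (s + 1) * w
        owned = [i for i, x in enumerate(xs) if lo <= x < hi]
        ghost = [i for i, x in enumerate(xs) if hi <= x < hi + rc]
        local = owned + ghost
        for a in range(len(local)):
            for b in range(a + 1, len(local)):
                i, j = local[a], local[b]
                left = min(xs[i], xs[j])
                if abs(xs[i] - xs[j]) < rc and lo <= left < hi:
                    found.add((min(i, j), max(i, j)))
    return sorted(found)
```

In a real MD code each slab would be a separate process and the ghost halo would be exchanged by message passing each step.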

  6. Parallel Evolutionary Optimization of Multibody Systems with Application to Railway Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Eberhard, Peter [University of Erlangen-Nuremberg, Institute of Applied Mechanics (Germany)], E-mail: eberhard@ltm.uni-erlangen.de; Dignath, Florian [University of Stuttgart, Institute B of Mechanics (Germany)], E-mail: fd@mechb.uni-stuttgart.de; Kuebler, Lars [University of Erlangen-Nuremberg, Institute of Applied Mechanics (Germany)], E-mail: kuebler@ltm.uni-erlangen.de

    2003-03-15

    The optimization of multibody systems usually requires many costly criteria computations since the equations of motion must be evaluated by numerical time integration for each considered design. For actively controlled or flexible multibody systems additional difficulties arise as the criteria may contain non-differentiable points or many local minima. Therefore, in this paper a stochastic evolution strategy is used in combination with parallel computing in order to reduce the computation times whilst keeping the inherent robustness. For the parallelization a master-slave approach is used in a heterogeneous workstation/PC cluster. The pool-of-tasks concept is applied in order to deal with the frequently changing workloads of different machines in the cluster. In order to analyze the performance of the parallel optimization method, the suspension of an ICE passenger coach, modeled as an elastic multibody system, is optimized simultaneously with regard to several criteria including vibration damping and a criterion related to safety against derailment. The iterative and interactive nature of a typical optimization process for technical systems is emphasized.
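
The pool-of-tasks idea is simple to sketch: the master keeps a queue of criteria evaluations and idle workers pull the next one, so slower machines automatically receive fewer tasks. A Python sketch using a thread pool, with a stand-in cost function (the real criteria require numerical time integration of the equations of motion):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_design(design):
    """Stand-in for a costly multibody criteria computation."""
    return sum(x * x for x in design)  # toy 'comfort/safety' criterion

def pool_of_tasks(designs, workers=4):
    """Master-slave pool of tasks: each design evaluation is one task;
    idle workers pull the next task, so uneven runtimes balance out.
    Executor.map preserves the input order of the results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(evaluate_design, designs))
```

In the paper's setting each task would be dispatched to a machine in the heterogeneous cluster rather than a local thread, but the scheduling principle is the same.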

  8. Advanced parallel strategy for strongly coupled fast transient fluid-structure dynamics with dual management of kinematic constraints

    International Nuclear Information System (INIS)

    Faucher, Vincent

    2014-01-01

    Simulating fast transient phenomena involving fluids and structures in interaction for safety purposes requires both accurate and robust algorithms, and parallel computing to reduce the calculation time for industrial models. Managing the kinematic constraints linking fluid and structural entities is thus a key issue, and this contribution promotes a dual approach over the classical penalty approach, which introduces arbitrary coefficients into the solution. This choice, however, severely increases the complexity of the problem, mainly due to non-permanent kinematic constraints. An innovative parallel strategy is therefore described, whose performance is demonstrated on significant examples exhibiting the full complexity of the target industrial simulations. (authors)
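
The difference between the penalty and dual treatments of a constraint can be seen on a toy problem: minimize (x-1)^2 + (y-2)^2 subject to x = y. The closed-form solutions below (our own worked example, not from the paper) show that the penalty answer depends on the arbitrary coefficient k and satisfies the constraint only in the limit, while the Lagrange-multiplier (dual) answer satisfies it exactly:

```python
def penalty_solution(k):
    """Minimize (x-1)^2 + (y-2)^2 + k*(x-y)^2. Stationarity gives
    x + y = 3 and x - y = -1/(1 + 2k): the constraint x = y is only
    approached as the penalty coefficient k grows without bound."""
    gap = -1.0 / (1.0 + 2.0 * k)
    return (3.0 + gap) / 2.0, (3.0 - gap) / 2.0

def dual_solution():
    """Same problem with a Lagrange multiplier L:
    2(x-1) + L = 0, 2(y-2) - L = 0, x = y  =>  x = y = 1.5, L = -1.
    The constraint holds exactly, with no tuning coefficient."""
    return 1.5, 1.5, -1.0
```

The price of exactness is a larger, indefinite system to solve, which is what makes the parallel treatment of non-permanent dual constraints difficult.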

  9. Parallel evolution of tetrodotoxin resistance in three voltage-gated sodium channel genes in the garter snake Thamnophis sirtalis.

    Science.gov (United States)

    McGlothlin, Joel W; Chuckalovcak, John P; Janes, Daniel E; Edwards, Scott V; Feldman, Chris R; Brodie, Edmund D; Pfrender, Michael E; Brodie, Edmund D

    2014-11-01

    Members of a gene family expressed in a single species often experience common selection pressures. Consequently, the molecular basis of complex adaptations may be expected to involve parallel evolutionary changes in multiple paralogs. Here, we use bacterial artificial chromosome library scans to investigate the evolution of the voltage-gated sodium channel (Nav) family in the garter snake Thamnophis sirtalis, a predator of highly toxic Taricha newts. Newts possess tetrodotoxin (TTX), which blocks Nav channels, arresting action potentials in nerves and muscle. Some Thamnophis populations have evolved resistance to extremely high levels of TTX. Previous work has identified amino acid sites in the skeletal muscle sodium channel Nav1.4 that confer resistance to TTX and vary across populations. We identify parallel evolution of TTX resistance in two additional Nav paralogs, Nav1.6 and 1.7, which are known to be expressed in the peripheral nervous system and should thus be exposed to ingested TTX. Each paralog contains at least one TTX-resistant substitution identical to a substitution previously identified in Nav1.4. These sites are fixed across populations, suggesting that the resistant peripheral nerves antedate resistant muscle. In contrast, three sodium channels expressed solely in the central nervous system (Nav1.1-1.3) showed no evidence of TTX resistance, consistent with protection from toxins by the blood-brain barrier. We also report the exon-intron structure of six Nav paralogs, the first such analysis for snake genes. Our results demonstrate that the molecular basis of adaptation may be both repeatable across members of a gene family and predictable based on functional considerations. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  10. Churchill: an ultra-fast, deterministic, highly scalable and balanced parallelization strategy for the discovery of human genetic variation in clinical and population-scale genomics.

    Science.gov (United States)

    Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter

    2015-01-20

    While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.
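
A balanced regional parallelization can be sketched as follows: cut each chromosome into fixed-size regions and deal the regions out so that every worker receives a near-equal share of bases. The chunk size and dealing scheme below are illustrative only; Churchill's actual region construction (which also keeps results deterministic across region boundaries) is more involved:

```python
def balanced_regions(chrom_lengths, n_workers, chunk=1_000_000):
    """Split a genome into fixed-size regions and deal them round-robin
    to workers so each worker gets a near-equal share of bases.
    Illustrative sketch, not Churchill's actual region logic."""
    regions = []
    for chrom, length in chrom_lengths.items():
        start = 0
        while start < length:
            end = min(start + chunk, length)
            regions.append((chrom, start, end))  # half-open interval
            start = end
    workers = [[] for _ in range(n_workers)]
    for idx, region in enumerate(regions):
        workers[idx % n_workers].append(region)
    return workers
```

Each worker's region list can then be processed independently (alignment refinement, variant calling), which is what makes the strategy scale.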

  11. Torque Split Strategy for Parallel Hybrid Electric Vehicles with an Integrated Starter Generator

    Directory of Open Access Journals (Sweden)

    Zhumu Fu

    2014-01-01

    Full Text Available This paper presents a torque split strategy for parallel hybrid electric vehicles with an integrated starter generator (ISG-PHEV) using fuzzy logic control. By combining the efficiency map and the optimum torque curve of the internal combustion engine (ICE) with the state of charge (SOC) of the batteries, the torque split strategy is designed to manage the ICE within its peak efficiency region. Taking the quantified ICE torque, the quantified SOC of the batteries, and the quantified ICE speed as inputs, and regarding the torque demanded of the ICE as the output, a fuzzy logic controller (FLC) with relevant fuzzy rules has been developed to determine the optimal torque distribution among the ICE, the ISG, and the electric motor/generator (EMG) effectively. The simulation results reveal that, compared with a conventional torque control strategy using a rule-based controller (RBC) in different driving cycles, the proposed FLC improves the fuel economy of the ISG-PHEV, increases the efficiency of the ICE, and maintains the batteries' SOC within its operating range more effectively.

  12. Availability of public goods shapes the evolution of competing metabolic strategies.

    Science.gov (United States)

    Bachmann, Herwig; Fischlechner, Martin; Rabbers, Iraes; Barfa, Nakul; Branco dos Santos, Filipe; Molenaar, Douwe; Teusink, Bas

    2013-08-27

    Tradeoffs provide a rationale for the outcome of natural selection. A prominent example is the negative correlation between the growth rate and the biomass yield in unicellular organisms. This tradeoff leads to a dilemma, where the optimization of growth rate is advantageous for an individual, whereas the optimization of the biomass yield would be advantageous for a population. High-rate strategies are observed in a broad variety of organisms such as Escherichia coli, yeast, and cancer cells. Growth in suspension cultures favors fast-growing organisms, whereas spatial structure is of importance for the evolution of high-yield strategies. Despite this realization, experimental methods to directly select for increased yield are lacking. We here show that the serial propagation of a microbial population in a water-in-oil emulsion allows selection of strains with increased biomass yield. The propagation in emulsion creates a spatially structured environment where the growth-limiting substrate is privatized for populations founded by individual cells. Experimental evolution of several isogenic Lactococcus lactis strains demonstrated the existence of a tradeoff between growth rate and biomass yield as an apparent Pareto front. The underlying mutations altered glucose transport and led to major shifts between homofermentative and heterofermentative metabolism, accounting for the changes in metabolic efficiency. The results demonstrated the impact of privatizing a public good on the evolutionary outcome between competing metabolic strategies. The presented approach allows the investigation of fundamental questions in biology such as the evolution of cooperation, cell-cell interactions, and the relationships between environmental and metabolic constraints.
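
Why propagation in emulsion selects for yield while well-mixed transfer selects for rate can be shown with a deterministic toy model in Python (strain parameters are invented for illustration):

```python
def bulk_share(strains, generations=20):
    """Well-mixed serial transfer: strains grow exponentially at their
    own rate, so the fastest grower takes over regardless of yield."""
    counts = {name: 1.0 for name in strains}
    for _ in range(generations):
        counts = {n: c * (1 + strains[n]["rate"]) for n, c in counts.items()}
    total = sum(counts.values())
    return {n: c / total for n, c in counts.items()}

def emulsion_share(strains, substrate=100.0):
    """Emulsion transfer: each droplet is founded by one cell and holds
    a private substrate ration, so final cell numbers scale with yield
    (cells produced per unit substrate), not with growth rate."""
    counts = {n: s["yield"] * substrate for n, s in strains.items()}
    total = sum(counts.values())
    return {n: c / total for n, c in counts.items()}
```

Pooling the droplets and re-emulsifying each transfer therefore enriches high-yield strains, which is the selection regime the paper exploits.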

  13. Massively parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Krasheninnikov, S.I.; Craddock, G.G.; Djordjevic, V.

    1996-01-01

    The Fokker-Planck code ALLA, recently developed for workstations, simulates the temporal evolution of 1V, 2V and 1D2V collisional edge plasmas. In this work we present the results of parallelizing the code on the CRI T3D massively parallel platform (the ALLAp version). Simultaneously, we benchmark the 1D2V parallel version against an analytic self-similar solution of the collisional kinetic equation. This test is not trivial, as it demands a very strong spatial temperature and density variation within the simulation domain. (orig.)

  14. Evolution strategies and multi-objective optimization of permanent magnet motor

    DEFF Research Database (Denmark)

    Andersen, Søren Bøgh; Santos, Ilmar

    2012-01-01

    When designing a permanent magnet motor, several geometry and material parameters are to be defined. This is not an easy task, as material properties and magnetic fields are highly non-linear and the design of a motor is therefore often an iterative process. From an engineering point of view, we...... of evolution strategies (ES) to effectively design and optimize parameters of permanent magnet motors. Single- as well as multi-objective optimization procedures are carried out. A modified way of creating the strategy parameters for the ES algorithm is also proposed and has together with the standard ES...
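
A minimal member of the ES family referred to above is the (1+1)-ES with the classic 1/5th success rule for step-size adaptation. The sketch below is our own single-objective illustration, not the paper's multi-objective method; the adaptation interval and factors are conventional choices:

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iterations=500, seed=0):
    """(1+1)-ES with the 1/5th success rule: the parent is mutated with
    Gaussian noise of step size sigma; sigma grows when more than 1/5
    of recent mutations succeed and shrinks otherwise. Minimal sketch."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    successes = 0
    for t in range(1, iterations + 1):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy < fx:             # plus-selection: keep the better of the two
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:         # adapt step size every 20 mutations
            rate = successes / 20.0
            sigma *= 1.5 if rate > 0.2 else 0.6
            successes = 0
    return x, fx
```

For a motor design task, `f` would evaluate the (non-linear, possibly non-differentiable) performance model for a candidate geometry/material vector.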

  15. Convergent Evolution of Hemoglobin Function in High-Altitude Andean Waterfowl Involves Limited Parallelism at the Molecular Sequence Level.

    Directory of Open Access Journals (Sweden)

    Chandrasekhar Natarajan

    2015-12-01

    Full Text Available A fundamental question in evolutionary genetics concerns the extent to which adaptive phenotypic convergence is attributable to convergent or parallel changes at the molecular sequence level. Here we report a comparative analysis of hemoglobin (Hb function in eight phylogenetically replicated pairs of high- and low-altitude waterfowl taxa to test for convergence in the oxygenation properties of Hb, and to assess the extent to which convergence in biochemical phenotype is attributable to repeated amino acid replacements. Functional experiments on native Hb variants and protein engineering experiments based on site-directed mutagenesis revealed the phenotypic effects of specific amino acid replacements that were responsible for convergent increases in Hb-O2 affinity in multiple high-altitude taxa. In six of the eight taxon pairs, high-altitude taxa evolved derived increases in Hb-O2 affinity that were caused by a combination of unique replacements, parallel replacements (involving identical-by-state variants with independent mutational origins in different lineages, and collateral replacements (involving shared, identical-by-descent variants derived via introgressive hybridization. In genome scans of nucleotide differentiation involving high- and low-altitude populations of three separate species, function-altering amino acid polymorphisms in the globin genes emerged as highly significant outliers, providing independent evidence for adaptive divergence in Hb function. The experimental results demonstrate that convergent changes in protein function can occur through multiple historical paths, and can involve multiple possible mutations. Most cases of convergence in Hb function did not involve parallel substitutions and most parallel substitutions did not affect Hb-O2 affinity, indicating that the repeatability of phenotypic evolution does not require parallelism at the molecular level.

  16. A massively parallel strategy for STR marker development, capture, and genotyping.

    Science.gov (United States)

    Kistler, Logan; Johnson, Stephen M; Irwin, Mitchell T; Louis, Edward E; Ratan, Aakrosh; Perry, George H

    2017-09-06

    Short tandem repeat (STR) variants are highly polymorphic markers that facilitate powerful population genetic analyses. STRs are especially valuable in conservation and ecological genetic research, yielding detailed information on population structure and short-term demographic fluctuations. Massively parallel sequencing has not previously been leveraged for scalable, efficient STR recovery. Here, we present a pipeline for developing STR markers directly from high-throughput shotgun sequencing data without a reference genome, and an approach for highly parallel target STR recovery. We employed our approach to capture a panel of 5000 STRs from a test group of diademed sifakas (Propithecus diadema, n = 3), endangered Malagasy rainforest lemurs, and we report extremely efficient recovery of targeted loci: 97.3-99.6% of STRs characterized with ≥10x non-redundant sequence coverage. We then tested our STR capture strategy on P. diadema fecal DNA, and report robust initial results and suggestions for future implementations. In addition to STR targets, this approach also generates large, genome-wide single nucleotide polymorphism (SNP) panels from flanking regions. Our method provides a cost-effective and scalable solution for rapid recovery of large STR and SNP datasets in any species without needing a reference genome, and can be used even with suboptimal DNA more easily acquired in conservation and ecological studies. Published by Oxford University Press on behalf of Nucleic Acids Research 2017.
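
The marker-discovery stage of such a pipeline can be sketched as a scan for perfect tandem repeats in raw reads. The function below is a simplified illustration only (the published pipeline adds flanking-sequence checks, coverage filters, and capture design); the unit-size and repeat-count thresholds are chosen arbitrarily.

```python
def find_strs(seq, min_unit=2, max_unit=6, min_repeats=4):
    """Naively scan a DNA sequence for perfect short tandem repeats.

    Returns (start, unit, n_repeats) tuples for maximal perfect runs.
    """
    hits = []
    i = 0
    while i < len(seq):
        best = None
        for u in range(min_unit, max_unit + 1):
            unit = seq[i:i + u]
            if len(unit) < u:
                continue
            n = 1
            while seq[i + n * u:i + (n + 1) * u] == unit:
                n += 1
            # keep the longest qualifying run starting at i
            if n >= min_repeats and (best is None or n * u > best[2] * len(best[1])):
                best = (i, unit, n)
        if best:
            hits.append(best)
            i = best[0] + best[2] * len(best[1])  # skip past the run
        else:
            i += 1
    return hits

read = "GGATCACACACACACACAGGTTAGGTAGGTAGGTAGGTCC"
print(find_strs(read))  # [(4, 'CA', 7), (21, 'TAGG', 4)]
```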

  17. Parallel evolution under chemotherapy pressure in 29 breast cancer cell lines results in dissimilar mechanisms of resistance.

    Directory of Open Access Journals (Sweden)

    Bálint Tegze

    Full Text Available BACKGROUND: Developing chemotherapy resistant cell lines can help to identify markers of resistance. Instead of using a panel of highly heterogeneous cell lines, we assumed that a truly robust and convergent pattern of resistance can be identified in multiple parallel engineered derivatives of only a few parental cell lines. METHODS: Parallel cell populations were initiated for two breast cancer cell lines (MDA-MB-231 and MCF-7) and these were treated independently for 18 months with doxorubicin or paclitaxel. IC50 values against 4 chemotherapy agents were determined to measure cross-resistance. Chromosomal instability and karyotypic changes were determined by cytogenetics. TaqMan RT-PCR measurements were performed for resistance-candidate genes. Pgp activity was measured by FACS. RESULTS: All together 16 doxorubicin- and 13 paclitaxel-treated cell lines were developed showing 2-46 fold and 3-28 fold increase in resistance, respectively. The RT-PCR and FACS analyses confirmed changes in tubulin isoform composition, TOP2A and MVP expression and activity of transport pumps (ABCB1, ABCG2). Cytogenetics showed fewer chromosomes but more structural aberrations in the resistant cells. CONCLUSION: We surpassed previous studies by developing a massive number of cell lines in parallel to investigate chemoresistance. While the heterogeneity caused evolution of multiple resistant clones with different resistance characteristics, the activation of only a few mechanisms was sufficient in one cell line to achieve resistance.

  18. The evolution of intellectual property strategy in innovation ecosystems

    DEFF Research Database (Denmark)

    Holgersson, Marcus; Granstrand, Ove; Bogers, Marcel

    2017-01-01

    In this article, we attempt to extend and nuance the debate on intellectual property (IP) strategy, appropriation, and open innovation in dynamic and systemic innovation contexts. We present the case of four generations of mobile telecommunications systems (covering the period 1980-2015), and describe and analyze the co-evolution of strategic IP management and innovation ecosystems. Throughout this development, technologies and technological relationships were governed with different and shifting degrees of formality. Simultaneously, firms differentiated technology accessibility across actors...

  19. New strategy for eliminating zero-sequence circulating current between parallel operating three-level NPC voltage source inverters

    DEFF Research Database (Denmark)

    Li, Kai; Dong, Zhenhua; Wang, Xiaodong

    2018-01-01

    A novel strategy based on a zero common mode voltage pulse-width modulation (ZCMV-PWM) technique and zero-sequence circulating current (ZSCC) feedback control is proposed in this study to eliminate ZSCCs between three-level neutral point clamped (NPC) voltage source inverters, with common AC and DC ..., the ZCMV-PWM method is presented to reduce CMVs, and a simple electric circuit is adopted to control ZSCCs and neutral point potential. Finally, simulation and experiment are conducted to illustrate the effectiveness of the proposed strategy. Results show that ZSCCs between paralleled inverters can...

  20. Evolution strategy based optimal chiller loading for saving energy

    International Nuclear Information System (INIS)

    Chang, Y.-C.; Lee, C.-Y.; Chen, C.-R.; Chou, C.-J.; Chen, W.-H.; Chen, W.-H.

    2009-01-01

    This study employs evolution strategy (ES) to solve optimal chiller loading (OCL) problem. ES overcomes the flaw that Lagrangian method is not adaptable for solving OCL as the power consumption models or the kW-PLR (partial load ratio) curves include convex functions and concave functions simultaneously. The complicated process of evolution by the genetic algorithm (GA) method for solving OCL can also be simplified by the ES method. This study uses the PLR of chiller as the variable to be solved for the decoupled air conditioning system. After analysis and comparison of the case study, it has been concluded that this method not only solves the problems of Lagrangian method and GA method, but also produces results with high accuracy within a rapid timeframe. It can be perfectly applied to the operation of air conditioning systems
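
The OCL idea above can be sketched with a minimal (μ+λ) evolution strategy over the PLR vector. Note that the chiller capacities, the kW-PLR polynomial coefficients, the demand, and the penalty weight for load mismatch below are all invented for illustration and are not taken from the paper.

```python
import random

random.seed(1)

# Hypothetical chiller capacities (RT) and kW-PLR curve coefficients
# power_i(x) = a + b*x + c*x**2; all numbers are illustrative.
CAPS = [100.0, 100.0, 150.0]
COEF = [(20.0, 10.0, 120.0), (15.0, 80.0, 60.0), (25.0, 140.0, 20.0)]
DEMAND = 245.0  # required cooling load (RT)

def total_power(plr):
    return sum(a + b * x + c * x * x for x, (a, b, c) in zip(plr, COEF))

def penalized(plr):
    """Fitness: power consumption plus a stiff penalty for missing the demand."""
    load = sum(x * cap for x, cap in zip(plr, CAPS))
    return total_power(plr) + 50.0 * abs(load - DEMAND)

def evolve(mu=10, lam=40, gens=300, sigma=0.1):
    """A plain (mu + lambda) evolution strategy over the PLR vector."""
    pop = [[random.uniform(0.3, 1.0) for _ in CAPS] for _ in range(mu)]
    for _ in range(gens):
        kids = [[min(1.0, max(0.0, x + random.gauss(0.0, sigma)))
                 for x in random.choice(pop)] for _ in range(lam)]
        pop = sorted(pop + kids, key=penalized)[:mu]
        sigma *= 0.99   # simple step-size annealing
    return pop[0]

best = evolve()
print([round(x, 3) for x in best], round(total_power(best), 1))
```

Because only fitness values are compared, the same loop works whether the kW-PLR curves are convex or concave, which is the advantage over the Lagrangian method noted in the abstract.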

  1. Molecular and morphological systematics of the Ellisellidae (Coelenterata: Octocorallia): Parallel evolution in a globally distributed family of octocorals

    KAUST Repository

    Bilewitch, Jaret P.

    2014-04-01

    The octocorals of the Ellisellidae constitute a diverse and widely distributed family with subdivisions into genera based on colonial growth forms. Branching patterns are repeated in several genera and congeners often display region-specific variations in a given growth form. We examined the systematic patterns of ellisellid genera and the evolution of branching form diversity using molecular phylogenetic and ancestral morphological reconstructions. Six of eight included genera were found to be polyphyletic due to biogeographical incompatibility with current taxonomic assignments and the creation of at least six new genera plus several reassignments among existing genera is necessary. Phylogenetic patterns of diversification of colony branching morphology displayed a similar transformation order in each of the two primary ellisellid clades, with a sea fan form estimated as the most-probable common ancestor with likely origins in the Indo-Pacific region. The observed parallelism in evolution indicates the existence of a constraint on the genetic elements determining ellisellid colonial morphology. However, the lack of correspondence between levels of genetic divergence and morphological diversity among genera suggests that future octocoral studies should focus on the role of changes in gene regulation in the evolution of branching patterns. © 2014 Elsevier Inc.

  2. Molecular and morphological systematics of the Ellisellidae (Coelenterata: Octocorallia): Parallel evolution in a globally distributed family of octocorals

    KAUST Repository

    Bilewitch, Jaret P.; Ekins, Merrick; Hooper, John; Degnan, Sandie M.

    2014-01-01

    The octocorals of the Ellisellidae constitute a diverse and widely distributed family with subdivisions into genera based on colonial growth forms. Branching patterns are repeated in several genera and congeners often display region-specific variations in a given growth form. We examined the systematic patterns of ellisellid genera and the evolution of branching form diversity using molecular phylogenetic and ancestral morphological reconstructions. Six of eight included genera were found to be polyphyletic due to biogeographical incompatibility with current taxonomic assignments and the creation of at least six new genera plus several reassignments among existing genera is necessary. Phylogenetic patterns of diversification of colony branching morphology displayed a similar transformation order in each of the two primary ellisellid clades, with a sea fan form estimated as the most-probable common ancestor with likely origins in the Indo-Pacific region. The observed parallelism in evolution indicates the existence of a constraint on the genetic elements determining ellisellid colonial morphology. However, the lack of correspondence between levels of genetic divergence and morphological diversity among genera suggests that future octocoral studies should focus on the role of changes in gene regulation in the evolution of branching patterns. © 2014 Elsevier Inc.

  3. Evolution of learned strategy choice in a frequency-dependent game.

    Science.gov (United States)

    Katsnelson, Edith; Motro, Uzi; Feldman, Marcus W; Lotem, Arnon

    2012-03-22

    In frequency-dependent games, strategy choice may be innate or learned. While experimental evidence in the producer-scrounger game suggests that learned strategy choice may be common, a recent theoretical analysis demonstrated that learning by only some individuals prevents learning from evolving in others. Here, however, we model learning explicitly, and demonstrate that learning can easily evolve in the whole population. We used an agent-based evolutionary simulation of the producer-scrounger game to test the success of two general learning rules for strategy choice. We found that learning was eventually acquired by all individuals under a sufficient degree of environmental fluctuation, and when players were phenotypically asymmetric. In the absence of sufficient environmental change or phenotypic asymmetries, the correct target for learning seems to be confounded by game dynamics, and innate strategy choice is likely to be fixed in the population. The results demonstrate that under biologically plausible conditions, learning can easily evolve in the whole population and that phenotypic asymmetry is important for the evolution of learned strategy choice, especially in a stable or mildly changing environment.

  4. Evolution of Strategies for "Prisoner's Dilemma" using Genetic Algorithm

    OpenAIRE

    Heinz, Jan

    2010-01-01

    The subject of this thesis is the software application "Prisoner's Dilemma". The program creates a population of players of "Prisoner's Dilemma", has them play against each other, and - based on their results - performs an evolution of their strategies by means of a genetic algorithm (selection, mutation, and crossover). The program was written in Microsoft Visual Studio, in the C++ programming language, and its interface makes use of the .NET Framework. The thesis includes examples of strate...
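
In the same spirit (though not the thesis's actual code), a genetic algorithm over minimal memory-one strategies, each a 3-gene tuple (first move, reply to an opponent's C, reply to a D), can be sketched as:

```python
import random

random.seed(0)

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(s1, s2, rounds=20):
    """Iterated Prisoner's Dilemma between two 3-gene strategies."""
    m1, m2 = s1[0], s2[0]
    sc1 = sc2 = 0
    for _ in range(rounds):
        p1, p2 = PAYOFF[(m1, m2)]
        sc1 += p1
        sc2 += p2
        m1, m2 = (s1[1] if m2 == 'C' else s1[2]), (s2[1] if m1 == 'C' else s2[2])
    return sc1, sc2

def fitness(pop):
    """Round-robin total score for every strategy in the population."""
    scores = [0] * len(pop)
    for i in range(len(pop)):
        for j in range(i + 1, len(pop)):
            a, b = play(pop[i], pop[j])
            scores[i] += a
            scores[j] += b
    return scores

def evolve(n=20, gens=60, pmut=0.05):
    pop = [tuple(random.choice('CD') for _ in range(3)) for _ in range(n)]
    for _ in range(gens):
        ranked = [s for _, s in sorted(zip(fitness(pop), pop), reverse=True)]
        parents = ranked[:n // 2]               # selection: keep the top half
        children = []
        while len(children) < n - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, 2)          # one-point crossover
            child = [a[k] if k < cut else b[k] for k in range(3)]
            for k in range(3):                  # mutation: flip a gene
                if random.random() < pmut:
                    child[k] = 'C' if child[k] == 'D' else 'D'
            children.append(tuple(child))
        pop = parents + children
    return pop

final = evolve()
print(final[:3])
```

Tit-for-tat corresponds to the genome ('C', 'C', 'D'); whether it dominates the final population depends on the payoffs and parameters chosen here.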

  5. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics- Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) as well as providing a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, which is also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity model taking into account motion and force transmissibility, and others.   This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  6. Evolution Strategies in the Multipoint Connections Routing

    Directory of Open Access Journals (Sweden)

    L. Krulikovska

    2010-09-01

    Full Text Available Routing of multipoint connections plays an important role in the final cost and quality of a found connection. New algorithms with better results are still being sought. In this paper, the possibility of using evolution strategies (ES) for routing is presented. The quality of a found connection is evaluated in terms of final cost and the time spent on the searching procedure. First, a parametrical analysis of the results of the ES is discussed and compared with Prim’s algorithm, which was chosen as a representative of the deterministic routing algorithms. Second, ways of improving the ES are suggested and implemented. The obtained results are reviewed. The main improvements are specified and discussed in the conclusion.
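
For reference, the deterministic baseline used in the comparison, Prim's algorithm, can be sketched on a small weighted graph (the graph below is illustrative only):

```python
import heapq

def prim_mst(graph, start):
    """Prim's algorithm: grow a minimum spanning tree outward from `start`.
    `graph` maps node -> {neighbour: edge weight}."""
    visited = {start}
    edges = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(edges)
    tree, cost = [], 0
    while edges and len(visited) < len(graph):
        w, u, v = heapq.heappop(edges)       # cheapest edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        tree.append((u, v, w))
        cost += w
        for nxt, w2 in graph[v].items():
            if nxt not in visited:
                heapq.heappush(edges, (w2, v, nxt))
    return tree, cost

g = {
    'A': {'B': 2, 'C': 3},
    'B': {'A': 2, 'C': 1, 'D': 4},
    'C': {'A': 3, 'B': 1, 'D': 5},
    'D': {'B': 4, 'C': 5},
}
tree, cost = prim_mst(g, 'A')
print(cost)  # 7
```

An ES-based router explores the space of trees stochastically and is judged against this deterministic cost.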

  7. Argentina's experience with parallel exchange markets: 1981-1990

    OpenAIRE

    Steven B. Kamin

    1991-01-01

    This paper surveys the development and operation of the parallel exchange market in Argentina during the 1980s, and evaluates its impact upon macroeconomic performance and policy. The historical evolution of Argentina's exchange market policies is reviewed in order to understand the government's motives for imposing exchange controls. The parallel exchange market engendered by these controls is then analyzed, and econometric methods are used to evaluate the behavior of the parallel exchange r...

  8. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    International Nuclear Information System (INIS)

    Lu Liuyan; Lantz, Steven R.; Ren Zhuyin; Pope, Stephen B.

    2009-01-01

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel
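
The effect of the distribution strategies can be illustrated with a toy load-balance model: compare the load imbalance when each process keeps its own particles (PLP) against uniformly random redistribution (URAN). The per-particle costs are invented for illustration, and PREF's tabulation-aware placement is omitted for brevity.

```python
import random

random.seed(42)

def imbalance(loads):
    """Busiest process's work divided by the mean load: 1.0 is perfect balance."""
    return max(loads) / (sum(loads) / len(loads))

NPROC = 8
# Hypothetical per-particle chemistry costs: process 0 holds an expensive
# ignition region (all numbers invented for illustration).
local = [[10.0] * 100 if p == 0 else [1.0] * 100 for p in range(NPROC)]

# PLP: each process evaluates only its own particles.
plp_loads = [sum(batch) for batch in local]

# URAN: every particle is reassigned to a uniformly random process.
uran_loads = [0.0] * NPROC
for batch in local:
    for cost in batch:
        uran_loads[random.randrange(NPROC)] += cost

print(round(imbalance(plp_loads), 2), round(imbalance(uran_loads), 2))
```

Random redistribution trades communication for balance, which is exactly the regime-dependence the study measures.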

  9. Parallel evolution of a type IV secretion system in radiating lineages of the host-restricted bacterial pathogen Bartonella.

    Science.gov (United States)

    Engel, Philipp; Salzburger, Walter; Liesch, Marius; Chang, Chao-Chin; Maruyama, Soichi; Lanz, Christa; Calteau, Alexandra; Lajus, Aurélie; Médigue, Claudine; Schuster, Stephan C; Dehio, Christoph

    2011-02-10

    Adaptive radiation is the rapid origination of multiple species from a single ancestor as the result of concurrent adaptation to disparate environments. This fundamental evolutionary process is considered to be responsible for the genesis of a great portion of the diversity of life. Bacteria have evolved enormous biological diversity by exploiting an exceptional range of environments, yet diversification of bacteria via adaptive radiation has been documented in a few cases only and the underlying molecular mechanisms are largely unknown. Here we show a compelling example of adaptive radiation in pathogenic bacteria and reveal their genetic basis. Our evolutionary genomic analyses of the α-proteobacterial genus Bartonella uncover two parallel adaptive radiations within these host-restricted mammalian pathogens. We identify a horizontally-acquired protein secretion system, which has evolved to target specific bacterial effector proteins into host cells as the evolutionary key innovation triggering these parallel adaptive radiations. We show that the functional versatility and adaptive potential of the VirB type IV secretion system (T4SS), and thereby translocated Bartonella effector proteins (Beps), evolved in parallel in the two lineages prior to their radiations. Independent chromosomal fixation of the virB operon and consecutive rounds of lineage-specific bep gene duplications followed by their functional diversification characterize these parallel evolutionary trajectories. Whereas most Beps maintained their ancestral domain constitution, strikingly, a novel type of effector protein emerged convergently in both lineages. This resulted in similar arrays of host cell-targeted effector proteins in the two lineages of Bartonella as the basis of their independent radiation. The parallel molecular evolution of the VirB/Bep system displays a striking example of a key innovation involved in independent adaptive processes and the emergence of bacterial pathogens

  10. Static and dynamic load-balancing strategies for parallel reservoir simulation

    International Nuclear Information System (INIS)

    Anguille, L.; Killough, J.E.; Li, T.M.C.; Toepfer, J.L.

    1995-01-01

    Accurate simulation of the complex phenomena that occur in flow in porous media can tax even the most powerful serial computers. Emergence of new parallel computer architectures as a future efficient tool in reservoir simulation may overcome this difficulty. Unfortunately, major problems remain to be solved before using parallel computers commercially: production serial programs must be rewritten to be efficient in parallel environments and load balancing methods must be explored to evenly distribute the workload on each processor during the simulation. This study implements both a static load-balancing algorithm and a receiver-initiated dynamic load-sharing algorithm to achieve high parallel efficiencies on both the IBM SP2 and Intel iPSC/860 parallel computers. Significant speedup improvement was recorded for both methods. Further optimization of these algorithms yielded a technique with efficiencies as high as 90% and 70% on 8 and 32 nodes, respectively. The increased performance was the result of the minimization of message-passing overhead.
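
The receiver-initiated idea can be sketched with a toy time-stepped simulation in which an idle worker (the receiver) pulls a task from the currently busiest worker. This is a simplification of the actual message-passing protocol, with one task executed per worker per step.

```python
from collections import deque

def receiver_initiated(initial, threshold=1):
    """Toy receiver-initiated load sharing: idle workers steal one task per
    step from the busiest queue, provided it holds more than `threshold`."""
    queues = [deque(q) for q in initial]
    done = [0] * len(queues)
    while any(queues):
        for i, q in enumerate(queues):
            if q:
                q.popleft()          # execute one task this step
                done[i] += 1
            else:
                donor = max(range(len(queues)), key=lambda j: len(queues[j]))
                if len(queues[donor]) > threshold:
                    q.append(queues[donor].pop())   # steal one task
    return done

done = receiver_initiated([[1] * 12, [], [], []])
print(done)  # [6, 2, 2, 2]
```

Without stealing, worker 0 would execute all 12 tasks serially; with receiver-initiated sharing the work spreads across the idle workers.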

  11. Parallel computing in plasma physics: Nonlinear instabilities

    International Nuclear Information System (INIS)

    Pohn, E.; Kamelander, G.; Shoucri, M.

    2000-01-01

    A Vlasov-Poisson-system is used for studying the time evolution of the charge-separation at a spatial one- as well as a two-dimensional plasma-edge. Ions are advanced in time using the Vlasov-equation. The whole three-dimensional velocity-space is considered, leading to very time-consuming four- resp. five-dimensional fully kinetic simulations. In the 1D simulations electrons are assumed to behave adiabatically, i.e. they are Boltzmann-distributed, leading to a nonlinear Poisson-equation. In the 2D simulations a gyro-kinetic approximation is used for the electrons. The plasma is assumed to be initially neutral. The simulations are performed on an equidistant grid. A constant time-step is used for advancing the density-distribution function in time. The time-evolution of the distribution function is performed using a splitting scheme. Each dimension (x, y, υx, υy, υz) of the phase-space is advanced in time separately. The value of the distribution function for the next time is calculated from the value of an - in general - interstitial point at the present time (fractional shift). One-dimensional cubic-spline interpolation is used for calculating the interstitial function values. After the fractional shifts are performed for each dimension of the phase-space, a whole time-step for advancing the distribution function is finished. Afterwards the charge density is calculated, the Poisson-equation is solved and the electric field is calculated before the next time-step is performed. The fractional shift method sketched above was parallelized for p processors as follows. Considering first the shifts in y-direction, a proper parallelization strategy is to split the grid into p disjoint υz-slices, which are sub-grids, each containing a different 1/p-th part of the υz range but the whole range of all other dimensions. Each processor is responsible for performing the y-shifts on a different slice, which can be done in parallel without any communication between
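
A one-dimensional fractional shift can be sketched as follows. For brevity the sketch uses linear rather than the paper's cubic-spline interpolation, on a periodic grid:

```python
def fractional_shift(f, shift):
    """Semi-Lagrangian 'fractional shift' of a periodic grid function.
    Each new value is read off at the interstitial departure point x - shift,
    here by linear interpolation (the paper uses cubic splines)."""
    n = len(f)
    out = []
    for i in range(n):
        x = (i - shift) % n          # interstitial departure point
        j = int(x)
        t = x - j
        out.append((1 - t) * f[j] + t * f[(j + 1) % n])
    return out

f = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
print(fractional_shift(f, 0.5))  # [0.0, 0.0, 0.5, 0.5, 0.0, 0.0]
```

Because the shift amount is the same for every point along one dimension, each grid line can be shifted independently, which is what makes the slice-wise parallelization described above communication-free within a shift.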

  12. A path-level exact parallelization strategy for sequential simulation

    Science.gov (United States)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
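
The core idea, simulating only mutually non-conflicting nodes concurrently while preserving the sequential path order, can be sketched in one dimension. This is a simplification of the paper's two-stage approach: nodes in the same batch lie outside each other's search neighbourhood, so simulating them concurrently reproduces the sequential result.

```python
def conflict_free_batches(path, radius):
    """Split a 1-D simulation path into ordered batches whose nodes are
    pairwise farther apart than the search radius; each batch can then be
    simulated concurrently without changing the sequential realization."""
    batches = [[path[0]]]
    for node in path[1:]:
        if all(abs(node - other) > radius for other in batches[-1]):
            batches[-1].append(node)
        else:
            batches.append([node])   # conflict: start a new (later) batch
    return batches

batches = conflict_free_batches([3, 10, 4, 12, 20, 5], radius=2)
print(batches)  # [[3, 10], [4, 12, 20], [5]]
```

The real method re-arranges the path first to make batches larger; the greedy split above only illustrates the conflict rule.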

  13. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
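
Reproducibility, one of the issues listed, is commonly handled by giving each node its own seeded random stream, so the combined result does not depend on scheduling order. A toy sketch (estimating π rather than transporting neutrons):

```python
import random

def node_estimate(node_id, samples):
    """Each node draws from its own seeded stream, so the run is
    reproducible regardless of how nodes are scheduled."""
    rng = random.Random(1234 + node_id)   # independent, reproducible stream
    return sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def parallel_pi(nodes=8, samples=20000):
    # In a real MPI/MIMD run each node_estimate call runs on its own
    # processor; the reduction below is the final gather.
    hits = sum(node_estimate(k, samples) for k in range(nodes))
    return 4.0 * hits / (nodes * samples)

print(parallel_pi())
```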

  14. Studies of parallel algorithms for the solution of a Fokker-Planck equation

    International Nuclear Information System (INIS)

    Deck, D.; Samba, G.

    1995-11-01

    The study of laser-created plasmas often requires the use of a kinetic model rather than a hydrodynamic one. This model change occurs, for example, in the hot spot formation in an ICF experiment or during the relaxation of colliding plasmas. When the gradient scale lengths or the size of a given system are not small compared to the characteristic mean-free-path, we have to deal with non-equilibrium situations, which can be described by the distribution functions of every species in the system. We present here a numerical method in plane or spherical 1-D geometry for the solution of a Fokker-Planck equation that describes the evolution of such functions in the phase space. The size and the time scale of kinetic simulations require the use of Massively Parallel Computers (MPP). We have adopted a message-passing strategy using Parallel Virtual Machine (PVM).

  15. Curious parallels and curious connections--phylogenetic thinking in biology and historical linguistics.

    Science.gov (United States)

    Atkinson, Quentin D; Gray, Russell D

    2005-08-01

    In The Descent of Man (1871), Darwin observed "curious parallels" between the processes of biological and linguistic evolution. These parallels mean that evolutionary biologists and historical linguists seek answers to similar questions and face similar problems. As a result, the theory and methodology of the two disciplines have evolved in remarkably similar ways. In addition to Darwin's curious parallels of process, there are a number of equally curious parallels and connections between the development of methods in biology and historical linguistics. Here we briefly review the parallels between biological and linguistic evolution and contrast the historical development of phylogenetic methods in the two disciplines. We then look at a number of recent studies that have applied phylogenetic methods to language data and outline some current problems shared by the two fields.

  16. Increased performance in the short-term water demand forecasting through the use of a parallel adaptive weighting strategy

    Science.gov (United States)

    Sardinha-Lourenço, A.; Andrade-Campos, A.; Antunes, A.; Oliveira, M. S.

    2018-03-01

    Recent research on short-term water demand forecasting has shown that models using univariate time series based on historical data are useful and can be combined with other prediction methods to reduce errors. Water demand in drinking water distribution networks is largely repetitive in nature and, under similar meteorological conditions and consumer profiles, allows the development of a heuristic forecast model that, in turn, combined with other autoregressive models, can provide reliable forecasts. In this study, a parallel adaptive weighting strategy for water consumption forecasting for the next 24-48 h, using univariate time series of potable water consumption, is proposed. Two Portuguese potable water distribution networks are used as case studies where the only input data are the consumption of water and the national calendar. For the development of the strategy, the Autoregressive Integrated Moving Average (ARIMA) method and a short-term forecast heuristic algorithm are used. Simulations with the model showed that, when using a parallel adaptive weighting strategy, the prediction error can be reduced by 15.96% and the average error by 9.20%. This reduction is important in the control and management of water supply systems. The proposed methodology can be extended to other forecast methods, especially when it comes to the availability of multiple forecast models.
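
A generic sketch of the idea: run both models in parallel, then weight each forecast inversely to its recent mean absolute error. The weighting rule, window length, and demand series below are invented for illustration and may differ from the paper's exact scheme.

```python
def adaptive_weighted_forecast(obs, f1, f2, window=3, eps=1e-9):
    """Blend two parallel forecast series, weighting each model inversely
    to its recent mean absolute error (a generic adaptive-weighting sketch)."""
    blended = []
    for t in range(len(obs)):
        lo = max(0, t - window)
        span = max(1, t - lo)
        e1 = sum(abs(obs[k] - f1[k]) for k in range(lo, t)) / span
        e2 = sum(abs(obs[k] - f2[k]) for k in range(lo, t)) / span
        w1, w2 = 1.0 / (e1 + eps), 1.0 / (e2 + eps)
        blended.append((w1 * f1[t] + w2 * f2[t]) / (w1 + w2))
    return blended

obs = [100.0, 104.0, 101.0, 98.0, 103.0, 105.0]   # invented demand series
f1 = [o + 0.5 for o in obs]                        # accurate model
f2 = [o + 4.0 for o in obs]                        # biased model
blend = adaptive_weighted_forecast(obs, f1, f2)
print([round(b, 2) for b in blend])
```

The blend tracks whichever model has recently been more accurate, which is how combining an ARIMA model with a heuristic forecaster can beat either alone.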

  17. Parallel programming with Easy Java Simulations

    Science.gov (United States)

    Esquembre, F.; Christian, W.; Belloni, M.

    2018-01-01

    Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.
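
The same shared-memory pattern, splitting a grid's interior across workers for one explicit time step, can be sketched in Python (EJS itself generates Java, whose threads do run in parallel; CPython threads illustrate the decomposition but the GIL prevents an actual speedup for pure-Python work):

```python
from concurrent.futures import ThreadPoolExecutor

def step_chunk(u, lo, hi, alpha=0.25):
    """One explicit finite-difference step of u_t = u_xx on indices lo..hi-1."""
    return [u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1]) for i in range(lo, hi)]

def parallel_step(u, nworkers=4):
    """Split the interior of the grid across workers; endpoints stay fixed.
    All workers read the shared array u and write into fresh chunks."""
    interior = len(u) - 2
    bounds = [(1 + interior * k // nworkers, 1 + interior * (k + 1) // nworkers)
              for k in range(nworkers)]
    with ThreadPoolExecutor(nworkers) as pool:
        parts = pool.map(lambda b: step_chunk(u, *b), bounds)
    return [u[0]] + [x for part in parts for x in part] + [u[-1]]

u1 = parallel_step([0.0, 0.0, 0.0, 4.0, 0.0, 0.0, 0.0])
print(u1)  # [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0]
```

Because every chunk reads the old array and writes a new one (Jacobi style), there is no race between workers within a step.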

  18. Parallel and convergent evolution of the dim-light vision gene RH1 in bats (Order: Chiroptera).

    Science.gov (United States)

    Shen, Yong-Yi; Liu, Jie; Irwin, David M; Zhang, Ya-Ping

    2010-01-21

    Rhodopsin, encoded by the gene Rhodopsin (RH1), is extremely sensitive to light, and is responsible for dim-light vision. Bats are nocturnal mammals that inhabit poor light environments. Megabats (Old-World fruit bats) generally have well-developed eyes, while microbats (insectivorous bats) have developed echolocation and in general their eyes were degraded, however, dramatic differences in the eyes, and their reliance on vision, exist in this group. In this study, we examined the rod opsin gene (RH1), and compared its evolution to that of two cone opsin genes (SWS1 and M/LWS). While phylogenetic reconstruction with the cone opsin genes SWS1 and M/LWS generated a species tree in accord with expectations, the RH1 gene tree united Pteropodidae (Old-World fruit bats) and Yangochiroptera, with very high bootstrap values, suggesting the possibility of convergent evolution. The hypothesis of convergent evolution was further supported when nonsynonymous sites or amino acid sequences were used to construct phylogenies. Reconstructed RH1 sequences at internal nodes of the bat species phylogeny showed that: (1) Old-World fruit bats share an amino acid change (S270G) with the tomb bat; (2) Miniopterus share two amino acid changes (V104I, M183L) with Rhinolophoidea; (3) the amino acid replacement I123V occurred independently on four branches, and the replacements L99M, L266V and I286V occurred each on two branches. The multiple parallel amino acid replacements that occurred in the evolution of bat RH1 suggest the possibility of multiple convergences of their ecological specialization (i.e., various photic environments) during adaptation for the nocturnal lifestyle, and suggest that further attention is needed on the study of the ecology and behavior of bats.

  19. Optimization approaches to mpi and area merging-based parallel buffer algorithm

    Directory of Open Access Journals (Sweden)

    Junfu Fan

    Full Text Available On buffer zone construction, the rasterization-based dilation method inevitably introduces errors, and the double-sided parallel line method involves a series of complex operations. In this paper, we proposed a parallel buffer algorithm based on area merging and MPI (Message Passing Interface) to improve the performance of buffer analyses on processing large datasets. Experimental results reveal that there are three major performance bottlenecks which significantly impact the serial and parallel buffer construction efficiencies, including the area merging strategy, the task load balance method and the MPI inter-process results merging strategy. Corresponding optimization approaches involving a tree-like area merging strategy, a vertex-number oriented parallel task partition method and an inter-process results merging strategy were suggested to overcome these bottlenecks. Experiments were carried out to examine the performance efficiency of the optimized parallel algorithm. The estimation results suggested that the optimization approaches could provide high performance and processing ability for buffer construction in a cluster parallel environment. Our method could provide insights into the parallelization of spatial analysis algorithms.
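
The tree-like merging strategy can be sketched with 1-D intervals standing in for buffer polygons: pairs merge level by level, so each level's unions are independent of one another and could run in parallel, and no single accumulator grows large. The interval data below are invented for illustration.

```python
def union(a, b):
    """Merge two sorted lists of disjoint intervals (stand-in for polygon union)."""
    merged = sorted(a + b)
    out = [list(merged[0])]
    for lo, hi in merged[1:]:
        if lo <= out[-1][1]:
            out[-1][1] = max(out[-1][1], hi)   # overlapping/touching: extend
        else:
            out.append([lo, hi])
    return [tuple(iv) for iv in out]

def tree_merge(parts):
    """Tree-like merging: combine pairs level by level instead of folding
    everything into one ever-growing result."""
    while len(parts) > 1:
        pairs = [parts[i:i + 2] for i in range(0, len(parts), 2)]
        parts = [union(*p) if len(p) == 2 else p[0] for p in pairs]
    return parts[0]

buffers = [[(0, 2)], [(1, 3)], [(5, 6)], [(6, 8)], [(10, 12)]]
print(tree_merge(buffers))  # [(0, 3), (5, 8), (10, 12)]
```

A flat left-to-right fold performs the same unions but keeps re-merging into one large intermediate, which is the bottleneck the tree strategy avoids.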

  20. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated, and both the parallel algorithm and the parallelization of programs on parallel computers with shared memory and with distributed memory are discussed. By analyzing the inherent structure of the mathematical and physical model of photon transport in light of the architecture of parallel computers, using a divide-and-conquer strategy, adjusting the algorithm structure of the program, resolving data dependences, identifying parallelizable components, and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various parallel computers, such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup was obtained.
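
    The divide-and-conquer strategy exploits the fact that photon histories are independent: they can be split into large-grain subtasks whose tallies are summed afterwards. In this minimal sketch, worker threads stand in for the PVP/SMP/MPP nodes mentioned in the abstract, and the transport itself is reduced to a toy random-walk absorption estimate (an assumption, not the original code, which would use MPI or vectorized kernels).

```python
import random
from concurrent.futures import ThreadPoolExecutor

def tally_chunk(task):
    """Absorption tally for one chunk of photon histories (toy model)."""
    seed, n_photons = task
    rng = random.Random(seed)          # per-task RNG keeps subtasks reproducible
    absorbed = 0
    for _ in range(n_photons):
        depth = 0.0
        while depth < 1.0:             # photon walks through a unit slab...
            if rng.random() < 0.3:     # ...until absorbed or transmitted
                absorbed += 1
                break
            depth += rng.random() * 0.5
    return absorbed

def parallel_tally(total_photons, n_tasks=4):
    """Split histories into large-grain subtasks and sum their tallies."""
    chunk = total_photons // n_tasks
    tasks = [(seed, chunk) for seed in range(n_tasks)]
    with ThreadPoolExecutor(max_workers=n_tasks) as pool:
        return sum(pool.map(tally_chunk, tasks))
```

    Because the per-task seeds fix each subtask's random stream, the parallel sum equals the sequential sum of the same chunks, which makes the decomposition easy to validate.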

  1. Comparison of some parallelization strategies of thermalhydraulic codes on GPUs

    International Nuclear Information System (INIS)

    Jendoubi, T.; Bergeaud, V.; Geay, A.

    2013-01-01

    Modern supercomputer architectures are now often based on hybrid concepts combining distributed-memory parallelism, shared-memory parallelism, and GPUs (Graphics Processing Units). In this work, we propose a new approach to take advantage of these graphics cards in thermohydraulics algorithms. (authors)

  2. Evolution of learning strategies in temporally and spatially variable environments: a review of theory.

    Science.gov (United States)

    Aoki, Kenichi; Feldman, Marcus W

    2014-02-01

    The theoretical literature from 1985 to the present on the evolution of learning strategies in variable environments is reviewed, with the focus on deterministic dynamical models that are amenable to local stability analysis, and on deterministic models yielding evolutionarily stable strategies. Individual learning, unbiased and biased social learning, mixed learning, and learning schedules are considered. A rapidly changing environment or frequent migration in a spatially heterogeneous environment favors individual learning over unbiased social learning. However, results are not so straightforward in the context of learning schedules or when biases in social learning are introduced. The three major methods of modeling temporal environmental change (coevolutionary, two-timescale, and information decay) are compared and shown to sometimes yield contradictory results. The so-called Rogers' paradox is inherent in the two-timescale method as originally applied to the evolution of pure strategies, but is often eliminated when the other methods are used. Moreover, Rogers' paradox is not observed for the mixed learning strategies and learning schedules that we review. We believe that further theoretical work is necessary on learning schedules and biased social learning, based on models that are logically consistent and empirically pertinent. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Evolution of learning strategies in temporally and spatially variable environments: A review of theory

    Science.gov (United States)

    Aoki, Kenichi; Feldman, Marcus W.

    2013-01-01

    The theoretical literature from 1985 to the present on the evolution of learning strategies in variable environments is reviewed, with the focus on deterministic dynamical models that are amenable to local stability analysis, and on deterministic models yielding evolutionarily stable strategies. Individual learning, unbiased and biased social learning, mixed learning, and learning schedules are considered. A rapidly changing environment or frequent migration in a spatially heterogeneous environment favors individual learning over unbiased social learning. However, results are not so straightforward in the context of learning schedules or when biases in social learning are introduced. The three major methods of modeling temporal environmental change – coevolutionary, two-timescale, and information decay – are compared and shown to sometimes yield contradictory results. The so-called Rogers’ paradox is inherent in the two-timescale method as originally applied to the evolution of pure strategies, but is often eliminated when the other methods are used. Moreover, Rogers’ paradox is not observed for the mixed learning strategies and learning schedules that we review. We believe that further theoretical work is necessary on learning schedules and biased social learning, based on models that are logically consistent and empirically pertinent. PMID:24211681

  4. Reliability optimization of series-parallel systems with a choice of redundancy strategies using a genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Tavakkoli-Moghaddam, R. [Department of Industrial Engineering, Faculty of Engineering, University of Tehran, P.O. Box 11365/4563, Tehran (Iran, Islamic Republic of); Department of Mechanical Engineering, The University of British Columbia, Vancouver (Canada)], E-mail: tavakoli@ut.ac.ir; Safari, J. [Department of Industrial Engineering, Science and Research Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of)], E-mail: jalalsafari@pideco.com; Sassani, F. [Department of Mechanical Engineering, The University of British Columbia, Vancouver (Canada)], E-mail: sassani@mech.ubc.ca

    2008-04-15

    This paper proposes a genetic algorithm (GA) for a redundancy allocation problem for series-parallel systems in which the redundancy strategy can be chosen for individual subsystems. The majority of solution methods for general redundancy allocation problems assume that the redundancy strategy for each subsystem is predetermined and fixed. In general, active redundancy has received more attention in the past. In practice, however, both active and cold-standby redundancies may be used within a particular system design, and the choice of redundancy strategy becomes an additional decision variable. Thus, the problem is to select the best redundancy strategy, component, and redundancy level for each subsystem in order to maximize system reliability under system-level constraints. This belongs to the NP-hard class of problems, and its complexity makes it difficult to solve optimally with traditional optimization tools. It is demonstrated in this paper that a GA is an efficient method for solving this type of problem. Finally, computational results for a typical scenario are presented and the robustness of the proposed algorithm is discussed.
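
    The chromosome described above can be sketched as one gene per subsystem, each gene carrying a redundancy strategy (active vs. cold standby), a component type, and a redundancy level. The reliability model below is a deliberately simplified stand-in (independent components, a crude standby formula, no cost or weight constraints), not the paper's model; the encoding and the GA loop are what the sketch illustrates.

```python
import random

COMPONENTS = [0.90, 0.85, 0.80]          # reliability of each component type

def subsystem_reliability(strategy, comp, level):
    r = COMPONENTS[comp]
    if strategy == "active":             # 1 - (all parallel units fail)
        return 1 - (1 - r) ** level
    # crude cold-standby approximation: standby units fail less often
    return 1 - (1 - r) * (1 - 0.9 * r) ** (level - 1)

def system_reliability(chrom):
    rel = 1.0
    for strategy, comp, level in chrom:  # series of redundant subsystems
        rel *= subsystem_reliability(strategy, comp, level)
    return rel

def random_chrom(n_subsystems, max_level=3):
    return [(random.choice(["active", "standby"]),
             random.randrange(len(COMPONENTS)),
             random.randint(1, max_level)) for _ in range(n_subsystems)]

def evolve(n_subsystems=3, pop_size=20, gens=30, seed=1):
    random.seed(seed)
    pop = [random_chrom(n_subsystems) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=system_reliability, reverse=True)
        parents = pop[:pop_size // 2]    # truncation selection
        children = []
        for p in parents:                # mutate one gene per child
            c = list(p)
            c[random.randrange(n_subsystems)] = random_chrom(1)[0]
            children.append(c)
        pop = parents + children
    return max(pop, key=system_reliability)
```

    A real implementation would add the system-level constraints (cost, weight) as penalties or a repair step; here the fitness is reliability alone.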

  5. Reliability optimization of series-parallel systems with a choice of redundancy strategies using a genetic algorithm

    International Nuclear Information System (INIS)

    Tavakkoli-Moghaddam, R.; Safari, J.; Sassani, F.

    2008-01-01

    This paper proposes a genetic algorithm (GA) for a redundancy allocation problem for series-parallel systems in which the redundancy strategy can be chosen for individual subsystems. The majority of solution methods for general redundancy allocation problems assume that the redundancy strategy for each subsystem is predetermined and fixed. In general, active redundancy has received more attention in the past. In practice, however, both active and cold-standby redundancies may be used within a particular system design, and the choice of redundancy strategy becomes an additional decision variable. Thus, the problem is to select the best redundancy strategy, component, and redundancy level for each subsystem in order to maximize system reliability under system-level constraints. This belongs to the NP-hard class of problems, and its complexity makes it difficult to solve optimally with traditional optimization tools. It is demonstrated in this paper that a GA is an efficient method for solving this type of problem. Finally, computational results for a typical scenario are presented and the robustness of the proposed algorithm is discussed.

  6. Parallel Evolution of a Type IV Secretion System in Radiating Lineages of the Host-Restricted Bacterial Pathogen Bartonella

    Science.gov (United States)

    Engel, Philipp; Salzburger, Walter; Liesch, Marius; Chang, Chao-Chin; Maruyama, Soichi; Lanz, Christa; Calteau, Alexandra; Lajus, Aurélie; Médigue, Claudine; Schuster, Stephan C.; Dehio, Christoph

    2011-01-01

    Adaptive radiation is the rapid origination of multiple species from a single ancestor as the result of concurrent adaptation to disparate environments. This fundamental evolutionary process is considered to be responsible for the genesis of a great portion of the diversity of life. Bacteria have evolved enormous biological diversity by exploiting an exceptional range of environments, yet diversification of bacteria via adaptive radiation has been documented in a few cases only and the underlying molecular mechanisms are largely unknown. Here we show a compelling example of adaptive radiation in pathogenic bacteria and reveal their genetic basis. Our evolutionary genomic analyses of the α-proteobacterial genus Bartonella uncover two parallel adaptive radiations within these host-restricted mammalian pathogens. We identify a horizontally-acquired protein secretion system, which has evolved to target specific bacterial effector proteins into host cells as the evolutionary key innovation triggering these parallel adaptive radiations. We show that the functional versatility and adaptive potential of the VirB type IV secretion system (T4SS), and thereby translocated Bartonella effector proteins (Beps), evolved in parallel in the two lineages prior to their radiations. Independent chromosomal fixation of the virB operon and consecutive rounds of lineage-specific bep gene duplications followed by their functional diversification characterize these parallel evolutionary trajectories. Whereas most Beps maintained their ancestral domain constitution, strikingly, a novel type of effector protein emerged convergently in both lineages. This resulted in similar arrays of host cell-targeted effector proteins in the two lineages of Bartonella as the basis of their independent radiation. The parallel molecular evolution of the VirB/Bep system displays a striking example of a key innovation involved in independent adaptive processes and the emergence of bacterial pathogens

  7. Parallel evolution of a type IV secretion system in radiating lineages of the host-restricted bacterial pathogen Bartonella.

    Directory of Open Access Journals (Sweden)

    Philipp Engel

    2011-02-01

    Full Text Available Adaptive radiation is the rapid origination of multiple species from a single ancestor as the result of concurrent adaptation to disparate environments. This fundamental evolutionary process is considered to be responsible for the genesis of a great portion of the diversity of life. Bacteria have evolved enormous biological diversity by exploiting an exceptional range of environments, yet diversification of bacteria via adaptive radiation has been documented in a few cases only and the underlying molecular mechanisms are largely unknown. Here we show a compelling example of adaptive radiation in pathogenic bacteria and reveal their genetic basis. Our evolutionary genomic analyses of the α-proteobacterial genus Bartonella uncover two parallel adaptive radiations within these host-restricted mammalian pathogens. We identify a horizontally-acquired protein secretion system, which has evolved to target specific bacterial effector proteins into host cells as the evolutionary key innovation triggering these parallel adaptive radiations. We show that the functional versatility and adaptive potential of the VirB type IV secretion system (T4SS), and thereby translocated Bartonella effector proteins (Beps), evolved in parallel in the two lineages prior to their radiations. Independent chromosomal fixation of the virB operon and consecutive rounds of lineage-specific bep gene duplications followed by their functional diversification characterize these parallel evolutionary trajectories. Whereas most Beps maintained their ancestral domain constitution, strikingly, a novel type of effector protein emerged convergently in both lineages. This resulted in similar arrays of host cell-targeted effector proteins in the two lineages of Bartonella as the basis of their independent radiation. The parallel molecular evolution of the VirB/Bep system displays a striking example of a key innovation involved in independent adaptive processes and the emergence of bacterial pathogens.

  8. Identification of Novel Betaherpesviruses in Iberian Bats Reveals Parallel Evolution.

    Directory of Open Access Journals (Sweden)

    Francisco Pozo

    Full Text Available A thorough search for bat herpesviruses was carried out in oropharyngeal samples taken from most of the bat species present in the Iberian Peninsula from the Vespertilionidae, Miniopteridae, Molossidae and Rhinolophidae families, in addition to a colony of captive fruit bats from the Pteropodidae family. By using two degenerate consensus PCR methods targeting two conserved genes, distinct and previously unrecognized bat-hosted herpesviruses were identified for most of the tested species. Altogether, a total of 42 potentially novel bat herpesviruses were partially characterized. Thirty-two of them were tentatively assigned to the Betaherpesvirinae subfamily, while the remaining 10 were allocated to the Gammaherpesvirinae subfamily. Significant diversity was observed among the novel sequences when compared with the type herpesvirus species of the ICTV-approved genera. The inferred phylogenetic relationships showed that most of the betaherpesvirus sequences fell into a well-supported unique monophyletic clade, supporting the recognition of a new betaherpesvirus genus. This clade is subdivided into three major clades, corresponding to the families of bats studied. This supports the hypothesis of a species-specific parallel evolution process between the potentially new betaherpesviruses and their bat hosts. Interestingly, two of the betaherpesvirus sequences detected in rhinolophid bats clustered together apart from the rest, closely related to viruses that belong to the Roseolovirus genus. This suggests a putative third roseolo lineage. In contrast, no phylogenetic structure was detected among several potentially novel bat-hosted gammaherpesviruses found in the study. Remarkably, each of the possible novel bat herpesviruses described in this study is linked to a single bat species.

  9. Identification of Novel Betaherpesviruses in Iberian Bats Reveals Parallel Evolution.

    Science.gov (United States)

    Pozo, Francisco; Juste, Javier; Vázquez-Morón, Sonia; Aznar-López, Carolina; Ibáñez, Carlos; Garin, Inazio; Aihartza, Joxerra; Casas, Inmaculada; Tenorio, Antonio; Echevarría, Juan Emilio

    2016-01-01

    A thorough search for bat herpesviruses was carried out in oropharyngeal samples taken from most of the bat species present in the Iberian Peninsula from the Vespertilionidae, Miniopteridae, Molossidae and Rhinolophidae families, in addition to a colony of captive fruit bats from the Pteropodidae family. By using two degenerate consensus PCR methods targeting two conserved genes, distinct and previously unrecognized bat-hosted herpesviruses were identified for most of the tested species. Altogether, a total of 42 potentially novel bat herpesviruses were partially characterized. Thirty-two of them were tentatively assigned to the Betaherpesvirinae subfamily, while the remaining 10 were allocated to the Gammaherpesvirinae subfamily. Significant diversity was observed among the novel sequences when compared with the type herpesvirus species of the ICTV-approved genera. The inferred phylogenetic relationships showed that most of the betaherpesvirus sequences fell into a well-supported unique monophyletic clade, supporting the recognition of a new betaherpesvirus genus. This clade is subdivided into three major clades, corresponding to the families of bats studied. This supports the hypothesis of a species-specific parallel evolution process between the potentially new betaherpesviruses and their bat hosts. Interestingly, two of the betaherpesvirus sequences detected in rhinolophid bats clustered together apart from the rest, closely related to viruses that belong to the Roseolovirus genus. This suggests a putative third roseolo lineage. In contrast, no phylogenetic structure was detected among several potentially novel bat-hosted gammaherpesviruses found in the study. Remarkably, each of the possible novel bat herpesviruses described in this study is linked to a single bat species.

  10. Many-Objective Particle Swarm Optimization Using Two-Stage Strategy and Parallel Cell Coordinate System.

    Science.gov (United States)

    Hu, Wang; Yen, Gary G; Luo, Guangchun

    2017-06-01

    It is a daunting challenge to balance the convergence and diversity of an approximate Pareto front in a many-objective evolutionary optimization algorithm. A novel algorithm, named many-objective particle swarm optimization with a two-stage strategy and parallel cell coordinate system (PCCS), is proposed in this paper to improve the comprehensive performance in terms of convergence and diversity. In the proposed two-stage strategy, convergence and diversity are separately emphasized at different stages by a single-objective optimizer and a many-objective optimizer, respectively. A PCCS is exploited to manage diversity, such as maintaining a diverse archive, identifying dominance-resistant solutions, and selecting diversified solutions. In addition, a leader group is used for selecting the global best solutions to balance the exploitation and exploration of the population. The experimental results illustrate that the proposed algorithm outperforms six chosen state-of-the-art designs in terms of inverted generational distance and hypervolume over the DTLZ test suite.
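
    The core of a parallel cell coordinate system is a mapping from each objective vector to integer cell indices, one per objective, so that cell occupancy can proxy for crowding and diversity. The following is a minimal sketch of that mapping only, assuming K divisions per objective and min-max normalization over the current archive; the archive maintenance and leader selection of the full algorithm are omitted.

```python
import math

def cell_coordinates(points, k):
    """Map each objective vector to per-objective cell indices in [1, k]."""
    dims = len(points[0])
    mins = [min(p[d] for p in points) for d in range(dims)]
    maxs = [max(p[d] for p in points) for d in range(dims)]
    coords = []
    for p in points:
        cell = []
        for d in range(dims):
            span = (maxs[d] - mins[d]) or 1.0     # guard degenerate objective
            idx = math.ceil(k * (p[d] - mins[d]) / span)
            cell.append(max(1, min(k, idx)))      # clamp boundary points
        coords.append(tuple(cell))
    return coords

front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(cell_coordinates(front, 2))   # [(1, 2), (1, 1), (2, 1)]
```

    Counting how many solutions share a cell tuple then gives a density estimate that selection can use to prefer sparsely populated regions.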

  11. The Voltage-Gated Potassium Channel Subfamily KQT Member 4 (KCNQ4) Displays Parallel Evolution in Echolocating Bats

    Science.gov (United States)

    Liu, Yang; Han, Naijian; Franchini, Lucía F.; Xu, Huihui; Pisciottano, Francisco; Elgoyhen, Ana Belén; Rajan, Koilmani Emmanuvel; Zhang, Shuyi

    2012-01-01

    Bats are the only mammals that use highly developed laryngeal echolocation, a sensory mechanism based on the ability to emit laryngeal sounds and interpret the returning echoes to identify objects. Although this capability allows bats to orientate and hunt in complete darkness, endowing them with great survival advantages, the genetic bases underlying the evolution of bat echolocation are still largely unknown. Echolocation requires high-frequency hearing, which in mammals is largely dependent on the somatic electromotility of outer hair cells. Understanding the molecular evolution of outer hair cell genes might therefore help to unravel the evolutionary history of echolocation. In this work, we analyzed the molecular evolution of two key outer hair cell genes: the voltage-gated potassium channel gene KCNQ4 and CHRNA10, the gene encoding the α10 nicotinic acetylcholine receptor subunit. We reconstructed the phylogeny of bats based on KCNQ4 and CHRNA10 protein and nucleotide sequences. A phylogenetic tree built using KCNQ4 amino acid sequences showed that two paraphyletic clades of laryngeal echolocating bats grouped together, with eight substitutions shared among particular lineages. In addition, our analyses indicated that two of these parallel substitutions, M388I and P406S, were probably fixed under positive selection and could have had a strong functional impact on KCNQ4. Moreover, our results indicated that KCNQ4 evolved under positive selection in the ancestral lineage leading to mammals, suggesting that this gene might have been important for the evolution of mammalian hearing. On the other hand, we found that CHRNA10, a gene that evolved adaptively in the mammalian lineage, was under strong purifying selection in bats. Thus, the CHRNA10 amino acid tree did not show echolocating bat monophyly and instead reproduced the bat species tree. These results suggest that only a subset of hearing genes could underlie the evolution of echolocation. The present work continues to

  12. Defining the best parallelization strategy for a diphasic compressible fluid mechanics code

    International Nuclear Information System (INIS)

    Berthou, Jean-Yves; Fayolle, Eric; Faucher, Eric; Scliffet, Laurent

    2000-01-01

    parallelization strategy we recommend for codes comparable to ECOSS. (author)

  13. Defining the best parallelization strategy for a diphasic compressible fluid mechanics code

    Energy Technology Data Exchange (ETDEWEB)

    Berthou, Jean-Yves; Fayolle, Eric [Electricite de France, Research and Development division, Modeling and Information Technologies Department, CLAMART CEDEX (France); Faucher, Eric; Scliffet, Laurent [Electricite de France, Research and Development Division, Mechanics and Component Technology Branch Department, Moret sur Loing (France)

    2000-09-01

    parallelization strategy we recommend for codes comparable to ECOSS. (author)

  14. Parallel Conjugate Gradient: Effects of Ordering Strategies, Programming Paradigms, and Architectural Platforms

    Science.gov (United States)

    Oliker, Leonid; Heber, Gerd; Biswas, Rupak

    2000-01-01

    The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique to solve sparse linear systems that are symmetric and positive definite. A sparse matrix-vector multiply (SPMV) usually accounts for most of the floating-point operations within a CG iteration. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and SPMV using different programming paradigms and architectures. Results show that for this class of applications, ordering significantly improves overall performance, that cache reuse may be more important than reducing communication, and that it is possible to achieve message passing performance using shared memory constructs through careful data ordering and distribution. However, a multi-threaded implementation of CG on the Tera MTA does not require special ordering or partitioning to obtain high efficiency and scalability.
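
    The CG/SPMV pairing the abstract analyzes can be made concrete with a small sketch: the matrix is stored in compressed sparse row (CSR) form, so the sparse matrix-vector multiply dominates each iteration, and it is exactly that kernel whose ordering and partitioning the paper studies. Plain Python lists stand in here for the parallel data structures.

```python
def spmv(vals, cols, rowptr, x):
    """CSR sparse matrix-vector product y = A x."""
    y = []
    for i in range(len(rowptr) - 1):
        s = 0.0
        for k in range(rowptr[i], rowptr[i + 1]):
            s += vals[k] * x[cols[k]]
        y.append(s)
    return y

def cg(vals, cols, rowptr, b, tol=1e-10, max_iter=100):
    """Unpreconditioned CG for a symmetric positive definite CSR matrix."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                             # residual r = b - A x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        ap = spmv(vals, cols, rowptr, p) # the dominant kernel per iteration
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# SPD tridiagonal matrix [[2,-1,0],[-1,2,-1],[0,-1,2]] in CSR form
vals = [2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0]
cols = [0, 1, 0, 1, 2, 1, 2]
rowptr = [0, 2, 5, 7]
x = cg(vals, cols, rowptr, [1.0, 0.0, 1.0])   # exact solution is [1, 1, 1]
```

    The ordering strategies the paper evaluates amount to permuting `cols`/`rowptr` so that the inner loop of `spmv` touches `x` with better locality, which is why cache reuse can matter more than communication volume.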

  15. Application of evolution strategy algorithm for optimization of a single-layer sound absorber

    Directory of Open Access Journals (Sweden)

    Morteza Gholamipoor

    2014-12-01

    Full Text Available Depending on the design parameters and limitations, optimization of sound absorbers has always been a challenge in the field of acoustic engineering. Various methods of optimization have evolved over the past decades, with the innovative method of evolution strategy gaining more attention in recent years. Owing to their simplicity and straightforward mathematical representation, single-layer absorbers have been widely used in both engineering and industrial applications, and an optimized design for these absorbers has become vital. In the present study, an evolution strategy algorithm is used for optimization of a single-layer absorber at both a particular frequency and an arbitrary frequency band. Results of the optimization are compared against different genetic algorithm and penalty function methods, and prove favorable in both effectiveness and accuracy. Finally, a single-layer absorber is optimized over a desired range of frequencies, which is the main goal of an industrial and engineering optimization process.
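
    The simplest member of the evolution-strategy family referred to above is the (1+1)-ES with the classic 1/5 success rule for step-size control, sketched here. The absorber model is replaced by a toy quadratic cost (an assumption); a real run would plug in the absorption-coefficient objective over the design parameters instead.

```python
import random

def one_plus_one_es(cost, x0, sigma=1.0, iters=200, seed=0):
    """(1+1)-ES: mutate, keep the better of parent and offspring."""
    rng = random.Random(seed)
    x, fx = x0[:], cost(x0)
    successes = 0
    for t in range(1, iters + 1):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = cost(y)
        if fy < fx:                     # offspring replaces parent (elitist)
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:                 # 1/5 success rule, every 20 steps
            sigma *= 1.5 if successes > 4 else 0.6
            successes = 0
    return x, fx

# toy stand-in for the absorber objective: squared distance to a target design
target = [0.3, 1.2]
best, best_cost = one_plus_one_es(
    lambda v: sum((vi - ti) ** 2 for vi, ti in zip(v, target)), [5.0, -5.0])
```

    The step size grows when more than a fifth of recent mutations succeed and shrinks otherwise, which keeps the search progressing both far from and near the optimum.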

  16. APPLICATION OF RESTART COVARIANCE MATRIX ADAPTATION EVOLUTION STRATEGY (RCMA-ES) TO GENERATION EXPANSION PLANNING PROBLEM

    Directory of Open Access Journals (Sweden)

    K. Karthikeyan

    2012-10-01

    Full Text Available This paper describes the application of an evolutionary algorithm, Restart Covariance Matrix Adaptation Evolution Strategy (RCMA-ES), to the Generation Expansion Planning (GEP) problem. RCMA-ES is a class of continuous Evolutionary Algorithm (EA) derived from the concept of self-adaptation in evolution strategies, which adapts the covariance matrix of a multivariate normal search distribution. The original GEP problem is modified by incorporating a Virtual Mapping Procedure (VMP). The GEP problem of synthetic test systems for 6-year, 14-year and 24-year planning horizons with five types of candidate units is considered. Two different constraint-handling methods are incorporated and the impact of each method is compared. In addition, comparison and validation have also been made against a dynamic programming method.
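
    The restart idea in RCMA-ES can be sketched independently of the covariance machinery: run an inner evolution strategy, detect stagnation, and restart from a fresh random point (the real algorithm typically also enlarges the population on restart). The inner optimizer below is a plain isotropic (1+1)-ES, deliberately simpler than CMA-ES, and the objective is a hypothetical stand-in.

```python
import random

def inner_es(cost, rng, sigma=0.5, steps=100):
    """One run of a simple (1+1)-ES from a random start point."""
    x = [rng.uniform(-5, 5), rng.uniform(-5, 5)]
    fx = cost(x)
    stale = 0
    for _ in range(steps):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = cost(y)
        if fy < fx - 1e-12:
            x, fx, stale = y, fy, 0
        else:
            stale += 1
            if stale > 30:            # stagnation detected: give up this run
                break
    return x, fx

def restart_es(cost, n_restarts=5, seed=0):
    """Restart wrapper: keep the best result over several fresh runs."""
    rng = random.Random(seed)
    best = (None, float("inf"))
    for _ in range(n_restarts):
        x, fx = inner_es(cost, rng)
        if fx < best[1]:
            best = (x, fx)
    return best
```

    Restarting is what gives the method robustness on multimodal landscapes such as GEP cost surfaces: a run trapped in a poor basin is abandoned rather than polished.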

  17. Evolution of morphological and climatic adaptations in Veronica L. (Plantaginaceae

    Directory of Open Access Journals (Sweden)

    Jian-Cheng Wang

    2016-08-01

    Full Text Available Perennials and annuals apply different strategies to adapt to adverse environments, based on ‘tolerance’ and ‘avoidance’, respectively. To understand lifespan evolution and its impact on plant adaptability, we carried out a comparative study of perennials and annuals in the genus Veronica from a phylogenetic perspective. The results showed that the ancestors of the genus Veronica were likely perennial plants. The annual life history of Veronica has evolved multiple times, and subtrees with more annual species have a higher substitution rate. Annuals can adapt to more xeric habitats than perennials, indicating that annuals are more drought-resistant than their perennial relatives. Due to adaptation to similar selective pressures, parallel evolution occurs in morphological characters among annual species of Veronica.

  18. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    Science.gov (United States)

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse- and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS are illustrated by solving a set of challenging parameter estimation problems, including medium- and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reductions in computation time with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium- and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.

  19. Aerodynamic Shape Optimization Using Hybridized Differential Evolution

    Science.gov (United States)

    Madavan, Nateri K.

    2003-01-01

    An aerodynamic shape optimization method that uses an evolutionary algorithm known as Differential Evolution (DE) in conjunction with various hybridization strategies is described. DE is a simple and robust evolutionary strategy that has proven effective in determining the global optimum for several difficult optimization problems. Various hybridization strategies for DE are explored, including the use of neural networks as well as traditional local search methods. A Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the hybrid DE optimizer. The method is implemented on distributed parallel computers so that new designs can be obtained within reasonable turnaround times. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. (The final paper will include at least one other aerodynamic design application.) The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated.
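
    A hybridized DE of the kind described above can be sketched as the classic DE/rand/1/bin loop with a small local search "polish" applied to the best member each generation. The objective, bounds, and polish step here are toy assumptions; the real method would call a Navier-Stokes solver for each cost evaluation and could use a neural-network surrogate instead.

```python
import random

def de_hybrid(cost, bounds, np_=15, f=0.7, cr=0.9, gens=60, seed=3):
    """DE/rand/1/bin with a coordinate-wise local polish on the best member."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [cost(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jr = rng.randrange(dim)          # guaranteed crossover index
            trial = [pop[a][d] + f * (pop[b][d] - pop[c][d])
                     if (rng.random() < cr or d == jr) else pop[i][d]
                     for d in range(dim)]
            ft = cost(trial)
            if ft <= fit[i]:                 # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
        # hybridization: tiny local search around the current best member
        i_best = min(range(np_), key=fit.__getitem__)
        for d in range(dim):
            for step in (0.01, -0.01):
                cand = pop[i_best][:]
                cand[d] += step
                fc = cost(cand)
                if fc < fit[i_best]:
                    pop[i_best], fit[i_best] = cand, fc
    i_best = min(range(np_), key=fit.__getitem__)
    return pop[i_best], fit[i_best]
```

    Because each cost evaluation is independent within a generation, the population loop is what gets farmed out to the distributed parallel computers mentioned in the abstract.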

  20. Increasing phylogenetic resolution at low taxonomic levels using massively parallel sequencing of chloroplast genomes

    Directory of Open Access Journals (Sweden)

    Cronn Richard

    2009-12-01

    Full Text Available Abstract Background Molecular evolutionary studies share the common goal of elucidating historical relationships, and the common challenge of adequately sampling taxa and characters. Particularly at low taxonomic levels, recent divergence, rapid radiations, and conservative genome evolution yield limited sequence variation, and dense taxon sampling is often desirable. Recent advances in massively parallel sequencing make it possible to rapidly obtain large amounts of sequence data, and multiplexing makes extensive sampling of megabase sequences feasible. Is it possible to efficiently apply massively parallel sequencing to increase phylogenetic resolution at low taxonomic levels? Results We reconstruct the infrageneric phylogeny of Pinus from 37 nearly-complete chloroplast genomes (average 109 kilobases each of an approximately 120 kilobase genome generated using multiplexed massively parallel sequencing. 30/33 ingroup nodes resolved with ≥ 95% bootstrap support; this is a substantial improvement relative to prior studies, and shows massively parallel sequencing-based strategies can produce sufficient high quality sequence to reach support levels originally proposed for the phylogenetic bootstrap. Resampling simulations show that at least the entire plastome is necessary to fully resolve Pinus, particularly in rapidly radiating clades. Meta-analysis of 99 published infrageneric phylogenies shows that whole plastome analysis should provide similar gains across a range of plant genera. A disproportionate amount of phylogenetic information resides in two loci (ycf1, ycf2, highlighting their unusual evolutionary properties. Conclusion Plastome sequencing is now an efficient option for increasing phylogenetic resolution at lower taxonomic levels in plant phylogenetic and population genetic analyses. With continuing improvements in sequencing capacity, the strategies herein should revolutionize efforts requiring dense taxon and character sampling

  1. Critical dynamics in the evolution of stochastic strategies for the iterated prisoner's dilemma.

    Directory of Open Access Journals (Sweden)

    Dimitris Iliopoulos

    2010-10-01

    Full Text Available The observed cooperation on the level of genes, cells, tissues, and individuals has been the object of intense study by evolutionary biologists, mainly because cooperation often flourishes in biological systems in apparent contradiction to the selfish goal of survival inherent in Darwinian evolution. In order to resolve this paradox, evolutionary game theory has focused on the Prisoner's Dilemma (PD), which incorporates the essence of this conflict. Here, we encode strategies for the iterated Prisoner's Dilemma (IPD) in terms of conditional probabilities that represent the response of decision pathways given previous plays. We find that if these stochastic strategies are encoded as genes that undergo Darwinian evolution, the environmental conditions that the strategies are adapting to determine the fixed point of the evolutionary trajectory, which could be either cooperation or defection. A transition between cooperative and defective attractors occurs as a function of different parameters such as mutation rate, replacement rate, and memory, all of which affect a player's ability to predict an opponent's behavior. These results imply that in populations of players that can use previous decisions to plan future ones, cooperation depends critically on whether the players can rely on facing the same strategies that they have adapted to. Defection, on the other hand, is the optimal adaptive response in environments that change so quickly that the information gathered from previous plays cannot usefully be integrated for a response.
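
    The encoding described above, restricted to memory-one strategies, is a vector of conditional cooperation probabilities, one per previous joint outcome (CC, CD, DC, DD), plus a probability for the first move. The sketch below implements that representation and a match between two such strategies with the standard PD payoffs (T=5, R=3, P=1, S=0); the evolutionary loop over a population is omitted.

```python
import random

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(p, q, rounds=200, seed=0):
    """Average payoff per round for stochastic strategy p against q.
    A strategy is (first-move coop. prob., {previous outcome: coop. prob.})."""
    rng = random.Random(seed)
    move = lambda prob: "C" if rng.random() < prob else "D"
    mp, mq = move(p[0]), move(q[0])
    total = 0
    for _ in range(rounds):
        total += PAYOFF[(mp, mq)]
        # each player conditions on the outcome from its own perspective
        mp, mq = move(p[1][(mp, mq)]), move(q[1][(mq, mp)])
    return total / rounds

# two deterministic corner cases of the stochastic encoding
tit_for_tat = (1.0, {("C", "C"): 1, ("C", "D"): 0, ("D", "C"): 1, ("D", "D"): 0})
always_defect = (0.0, {k: 0 for k in PAYOFF})
```

    Mutating the five probabilities and selecting on average payoff against the current population gives the Darwinian dynamics whose attractors the abstract discusses.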

  2. Efficient receiver tuning using differential evolution strategies

    Science.gov (United States)

    Wheeler, Caleb H.; Toland, Trevor G.

    2016-08-01

    Differential evolution (DE) is a powerful and computationally inexpensive optimization strategy that can be used to search an entire parameter space or to converge quickly on a solution. The Kilopixel Array Pathfinder Project (KAPPa) is a heterodyne receiver system delivering 5 GHz of instantaneous bandwidth in the tuning range of 645-695 GHz. The fully automated KAPPa receiver test system finds optimal receiver tuning using performance feedback and DE. We present an adaptation of DE for use in rapid receiver characterization. The KAPPa DE algorithm is written in Python 2.7 and is fully integrated with the KAPPa instrument control, data processing, and visualization code. KAPPa develops the technologies needed to realize heterodyne focal plane arrays containing 1000 pixels. Finding optimal receiver tuning by investigating large parameter spaces is one of many challenges facing the characterization phase of KAPPa, and it is a difficult task to accomplish by hand. Characterizing or tuning in an automated fashion without the need for human intervention is desirable for future large-scale arrays. While many optimization strategies exist, DE is ideal under time and performance constraints because it can be set to converge to a solution rapidly with minimal computational overhead. We discuss how DE is utilized in the KAPPa system, evaluate its performance, and consider how the KAPPa DE system might be applied to future 1000-pixel array receivers.
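To illustrate why DE carries so little computational overhead, here is a generic DE/rand/1/bin loop minimizing a toy objective. The objective, bounds, and control parameters are illustrative stand-ins, not the KAPPa tuning objective:

```python
import random

def differential_evolution(objective, bounds, pop_size=20, f=0.8, cr=0.9,
                           generations=100, rng=None):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    apply binomial crossover, keep the trial only if it is no worse."""
    rng = rng or random.Random(42)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct individuals, all different from the target i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == j_rand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))  # clip to the box
                else:
                    trial.append(pop[i][j])
            ft = objective(trial)
            if ft <= fitness[i]:  # greedy selection
                pop[i], fitness[i] = trial, ft
    best = min(range(pop_size), key=fitness.__getitem__)
    return pop[best], fitness[best]

# Toy objective: a smooth bowl with its optimum at (1, -2)
sphere = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
best_x, best_f = differential_evolution(sphere, [(-5, 5), (-5, 5)])
```

In a receiver-tuning setting, `objective` would be replaced by a measurement callback (e.g. noise temperature at a given bias and LO setting).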

  3. A distributed parallel genetic algorithm of placement strategy for virtual machines deployment on cloud platform.

    Science.gov (United States)

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

    The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost saving for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) of placement strategy for virtual machines deployment on cloud platform. It executes the genetic algorithm in parallel and in a distributed manner on several selected physical hosts in the first stage. Then it continues to execute the genetic algorithm of the second stage with solutions obtained from the first stage as the initial population. The solution calculated by the genetic algorithm of the second stage is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy of VM deployment can ensure QoS for users and it is more effective and more energy efficient than other placement strategies on the cloud platform.

  4. A Distributed Parallel Genetic Algorithm of Placement Strategy for Virtual Machines Deployment on Cloud Platform

    Directory of Open Access Journals (Sweden)

    Yu-Shuang Dong

    2014-01-01

    Full Text Available The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost saving for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) of placement strategy for virtual machines deployment on cloud platform. It executes the genetic algorithm in parallel and in a distributed manner on several selected physical hosts in the first stage. Then it continues to execute the genetic algorithm of the second stage with solutions obtained from the first stage as the initial population. The solution calculated by the genetic algorithm of the second stage is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy of VM deployment can ensure QoS for users and it is more effective and more energy efficient than other placement strategies on the cloud platform.
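The abstract does not spell out the DPGA's two stages; the following single-population sketch shows only the core GA encoding such placement work builds on: each gene assigns one VM to a host, and fitness rewards fewer active hosts while penalizing capacity overload. The instance data, parameters, and penalty scheme are invented for illustration:

```python
import random

VM_LOAD = [1, 1, 1, 1, 1, 1]   # resource demand of each VM (illustrative)
HOST_CAP = [3, 3, 3, 3]        # capacity of each physical host (illustrative)
PENALTY = 100                  # fitness penalty per unit of overload

def fitness(assign):
    """Lower is better: active host count plus overload penalties."""
    load = [0] * len(HOST_CAP)
    for vm, host in enumerate(assign):
        load[host] += VM_LOAD[vm]
    active = sum(1 for l in load if l > 0)
    overload = sum(max(0, l - cap) for l, cap in zip(load, HOST_CAP))
    return active + PENALTY * overload

def genetic_placement(pop_size=30, generations=200, rng=None):
    rng = rng or random.Random(7)
    n_vm, n_host = len(VM_LOAD), len(HOST_CAP)
    pop = [[rng.randrange(n_host) for _ in range(n_vm)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_vm)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                # mutation: reassign one VM
                child[rng.randrange(n_vm)] = rng.randrange(n_host)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = genetic_placement()
```

A distributed variant would run this loop independently on several hosts and seed a second-stage population with the per-host winners, as the paper describes.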

  5. Mixed-integer evolution strategies for parameter optimization and their applications to medical image analysis

    NARCIS (Netherlands)

    Li, Rui

    2009-01-01

    The target of this work is to extend the canonical Evolution Strategies (ES) from traditional real-valued parameter optimization domain to mixed-integer parameter optimization domain. This is necessary because there exist numerous practical optimization problems from industry in which the set of

  6. Research on Parallel Three Phase PWM Converters based on RTDS

    Science.gov (United States)

    Xia, Yan; Zou, Jianxiao; Li, Kai; Liu, Jingbo; Tian, Jun

    2018-01-01

    Parallel operation of converters can increase system capacity, but it may introduce a zero-sequence circulating current, so suppressing this current is an important goal in the design of parallel inverters. In this paper, the Real Time Digital Simulator (RTDS) is used to model the converters parallel system in real time and study circulating current restraint. The equivalent model of two parallel converters and the zero-sequence circulating current (ZSCC) were established and analyzed, then a strategy using variable zero vector control was proposed to suppress the circulating current. For two parallel modular converters, a hardware-in-the-loop (HIL) study based on RTDS and a practical experiment were implemented; the results prove that the proposed control strategy is feasible and effective.

  7. Maternal Lipid Provisioning Mirrors Evolution of Reproductive Strategies in Direct-Developing Whelks.

    Science.gov (United States)

    Carrasco, Sergio A; Phillips, Nicole E; Sewell, Mary A

    2016-06-01

    The energetic input that offspring receive from their mothers is a well-studied maternal effect that can influence the evolution of life histories. Using the offspring of three sympatric whelks: Cominella virgata (one embryo per capsule); Cominella maculosa (multiple embryos per capsule); and Haustrum scobina (multiple embryos per capsule and nurse-embryo consumption), we examined how contrasting reproductive strategies mediate inter- and intraspecific differences in hatchling provisioning. Total lipid content (as measured in μg hatchling⁻¹ ± SE) was unrelated to size among the 3 species; the hatchlings of H. scobina were the smallest but had the highest lipid content (33.8 ± 8.1 μg hatchling⁻¹). In offspring of C. maculosa, lipid content was 6.6 ± 0.4 μg hatchling⁻¹, and in offspring of C. virgata, it was 21.7 ± 3.2 μg hatchling⁻¹. C. maculosa and H. scobina were the only species whose multi-encapsulated hatchlings contained the energetic lipids wax ester (WE) and methyl ester (ME). However, the overall composition of energetic lipid between hatchlings of the two Cominella species reflected strong affinities of taxonomy, suggesting a phylogenetic evolution of the non-adelphophagic development strategy. Inter- and intracapsular variability in sibling provisioning was highest in H. scobina, a finding that implies less control of allocation to individual hatchlings in this adelphophagic developer. We suggest that interspecific variability of lipids offers a useful approach to understanding the evolution of maternal provisioning in direct-developing species. © 2016 Marine Biological Laboratory.

  8. Evolution of Parallel Spindles Like genes in plants and highlight of unique domain architecture

    Directory of Open Access Journals (Sweden)

    Consiglio Federica M

    2011-03-01

    Full Text Available Abstract Background Polyploidy has long been recognized as playing an important role in plant evolution. In flowering plants, the major route of polyploidization is suggested to be sexual through gametes with somatic chromosome number (2n). Parallel Spindle1 gene in Arabidopsis thaliana (AtPS1) was recently demonstrated to control spindle orientation in the 2nd division of meiosis and, when mutated, to induce 2n pollen. Interestingly, AtPS1 encodes a protein with a FHA domain and PINc domain putatively involved in RNA decay (i.e. Nonsense Mediated mRNA Decay). In potato, 2n pollen depending on parallel spindles was described a long time ago but the responsible gene has never been isolated. The knowledge derived from AtPS1 as well as the availability of genome sequences makes it possible to isolate potato PSLike (PSL) and to highlight the evolution of the PSL family in plants. Results Our work leading to the first characterization of PSLs in potato showed a greater PSL complexity in this species with respect to Arabidopsis thaliana. Indeed, a genomic PSL locus and seven cDNAs affected by alternative splicing have been cloned. In addition, the occurrence of at least two other PSL loci in potato was suggested by the sequence comparison of alternatively spliced transcripts. Phylogenetic analysis on 20 Viridaeplantae showed the wide distribution of PSLs throughout the species and the occurrence of multiple copies only in potato and soybean. The analysis of PSLFHA and PSLPINc domains evidenced that, in terms of secondary structure, a major degree of variability occurred in the PINc domain with respect to FHA. In terms of specific active sites, both domains showed diversification among plant species that could be related to a functional diversification among PSL genes. In addition, some specific active sites were strongly conserved among plants as supported by sequence alignment and by evidence of negative selection evaluated as difference between non-synonymous and

  9. Implementation of a parallel version of a regional climate model

    Energy Technology Data Exchange (ETDEWEB)

    Gerstengarbe, F.W. [ed.]; Kuecken, M. [Potsdam-Institut fuer Klimafolgenforschung (PIK), Potsdam (Germany)]; Schaettler, U. [Deutscher Wetterdienst, Offenbach am Main (Germany). Geschaeftsbereich Forschung und Entwicklung]

    1997-10-01

    A regional climate model developed by the Max Planck Institute for Meteorology and the German Climate Computing Centre in Hamburg based on the 'Europa' and 'Deutschland' models of the German Weather Service has been parallelized and implemented on the IBM RS/6000 SP computer system of the Potsdam Institute for Climate Impact Research including parallel input/output processing, the explicit Eulerian time-step, the semi-implicit corrections, the normal-mode initialization and the physical parameterizations of the German Weather Service. The implementation utilizes Fortran 90 and the Message Passing Interface. The parallelization strategy used is a 2D domain decomposition. This report describes the parallelization strategy, the parallel I/O organization, the influence of different domain decomposition approaches for static and dynamic load imbalances and first numerical results. (orig.)

  10. Parallel or convergent evolution in human population genomic data revealed by genotype networks.

    Science.gov (United States)

    R Vahdati, Ali; Wagner, Andreas

    2016-08-02

    Genotype networks are representations of genetic variation data that are complementary to phylogenetic trees. A genotype network is a graph whose nodes are genotypes (DNA sequences) with the same broadly defined phenotype. Two nodes are connected if they differ in some minimal way, e.g., in a single nucleotide. We analyze human genome variation data from the 1,000 genomes project, and construct haploid genotype (haplotype) networks for 12,235 protein coding genes. The structure of these networks varies widely among genes, indicating different patterns of variation despite a shared evolutionary history. We focus on those genes whose genotype networks show many cycles, which can indicate homoplasy, i.e., parallel or convergent evolution, on the sequence level. For 42 genes, the observed number of cycles is so large that it cannot be explained by either chance homoplasy or recombination. When analyzing possible explanations, we discovered evidence for positive selection in 21 of these genes and, in addition, a potential role for constrained variation and purifying selection. Balancing selection plays at most a small role. The 42 genes with excess cycles are enriched in functions related to immunity and response to pathogens. Genotype networks are representations of genetic variation data that can help understand unusual patterns of genomic variation.
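The cycle count the authors rely on is the cyclomatic number of the haplotype graph: edges minus nodes plus connected components. A stdlib-only sketch using a toy haplotype set (not 1000 Genomes data):

```python
from itertools import combinations

def one_mutation_apart(h1, h2):
    """True if two equal-length haplotypes differ at exactly one site."""
    return sum(a != b for a, b in zip(h1, h2)) == 1

def genotype_network(haplotypes):
    """Edges connect haplotypes differing by a single nucleotide."""
    return [(a, b) for a, b in combinations(haplotypes, 2)
            if one_mutation_apart(a, b)]

def count_components(nodes, edges):
    """Connected components via union-find with path halving."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(n) for n in nodes})

def independent_cycles(haplotypes):
    """Cyclomatic number: |E| - |V| + number of connected components."""
    edges = genotype_network(haplotypes)
    return len(edges) - len(haplotypes) + count_components(haplotypes, edges)

# A 'square' of four haplotypes closes one cycle -- the signature of
# homoplasy (parallel/convergent mutation) or recombination.
haps = ["AA", "AT", "TA", "TT"]
```

An excess of such cycles beyond what chance homoplasy or recombination can explain is what flags the 42 genes discussed above.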

  11. Engineering-Based Thermal CFD Simulations on Massive Parallel Systems

    KAUST Repository

    Frisch, Jérôme; Mundani, Ralf-Peter; Rank, Ernst; van Treeck, Christoph

    2015-01-01

    The development of parallel Computational Fluid Dynamics (CFD) codes is a challenging task that entails efficient parallelization concepts and strategies in order to achieve good scalability values when running those codes on modern supercomputers

  12. The evolution of unconditional strategies via the 'multiplier effect'.

    Science.gov (United States)

    McNamara, John M; Dall, Sasha R X

    2011-03-01

    Ostensibly, it makes sense in a changeable world to condition behaviour and development on information when it is available. Nevertheless, unconditional behavioural and life history strategies are widespread. Here, we show how intergenerational effects can limit the evolutionary value of responding to reliable environmental cues, and thus favour the evolutionary persistence of otherwise paradoxical unconditional strategies. While cue-ignoring genotypes do poorly in the wrong environments, in the right environment they will leave many copies of themselves, which will themselves leave many copies, and so on, leading genotypes to accumulate in habitats in which they do well. We call this 'The Multiplier Effect'. We explore the consequences of the multiplier effect by focussing on the ecologically important phenomenon of natal philopatry. We model the environment as a large number of temporally varying breeding sites connected by natal dispersal between sites. Our aim is to identify which aspects of an environment promote the multiplier effect. We show that, if sites remain connected through some background level of 'accidental' dispersal, unconditional natal philopatry can evolve even when there is density dependence (with its accompanying kin competition effects) and cues are only mildly erroneous. Thus, the multiplier effect may underpin the evolution and maintenance of unconditional strategies such as natal philopatry in many biological systems. © 2011 Blackwell Publishing Ltd/CNRS.

  13. Parallel evolution of the glycogen synthase 1 (muscle) gene Gys1 between Old World and New World fruit bats (Order: Chiroptera).

    Science.gov (United States)

    Fang, Lu; Shen, Bin; Irwin, David M; Zhang, Shuyi

    2014-10-01

    Glycogen synthase, which catalyzes the synthesis of glycogen, is especially important for Old World (Pteropodidae) and New World (Phyllostomidae) fruit bats that ingest high-carbohydrate diets. Glycogen synthase 1, encoded by the Gys1 gene, is the glycogen synthase isozyme that functions in muscles. To determine whether Gys1 has undergone adaptive evolution in bats with carbohydrate-rich diets, in comparison to insect-eating sister bat taxa, we sequenced the coding region of the Gys1 gene from 10 species of bats, including two Old World fruit bats (Pteropodidae) and a New World fruit bat (Phyllostomidae). Our results show no evidence for positive selection in the Gys1 coding sequence on the ancestral Old World and the New World Artibeus lituratus branches. Tests for convergent evolution indicated convergence of the sequences and one parallel amino acid substitution (T395A) was detected on these branches, which was likely driven by natural selection.

  14. Intrasexual competition facilitates the evolution of alternative mating strategies in a colour polymorphic fish.

    Science.gov (United States)

    Hurtado-Gonzales, Jorge L; Uy, J Albert C

    2010-12-23

    Intense competition for access to females can lead to males exploiting different components of sexual selection, and result in the evolution of alternative mating strategies (AMSs). Males of Poecilia parae, a colour polymorphic fish, exhibit five distinct phenotypes: drab-coloured (immaculata), striped (parae), structural-coloured (blue) and carotenoid-based red and yellow morphs. Previous work indicates that immaculata males employ a sneaker strategy, whereas the red and yellow morphs exploit female preferences for carotenoid-based colours. Mating strategies favouring the maintenance of the other morphs remain to be determined. Here, we report the role of agonistic male-male interactions in influencing female mating preferences and male mating success, and in facilitating the evolution of AMSs. Our study reveals variation in aggressiveness among P. parae morphs during indirect and direct interactions with sexually receptive females. Two morphs, parae and yellow, use aggression to enhance their mating success (i.e., number of copulations) by 1) directly monopolizing access to females, and 2) modifying female preferences after winning agonistic encounters. Conversely, we found that the success of the drab-coloured immaculata morph, which specializes in a sneak copulation strategy, relies on its ability to circumvent both male aggression and female choice when facing all but yellow males. Strong directional selection is expected to deplete genetic variation, yet many species show striking genetically-based polymorphisms. Most studies evoke frequency dependent selection to explain the persistence of such variation. Consistent with a growing body of evidence, our findings suggest that a complex form of balancing selection may alternatively explain the evolution and maintenance of AMSs in a colour polymorphic fish. 
In particular, this study demonstrates that intrasexual competition results in phenotypically distinct males exhibiting clear differences in their levels of

  15. Intrasexual competition facilitates the evolution of alternative mating strategies in a colour polymorphic fish

    Directory of Open Access Journals (Sweden)

    Uy J Albert C

    2010-12-01

    Full Text Available Abstract Background Intense competition for access to females can lead to males exploiting different components of sexual selection, and result in the evolution of alternative mating strategies (AMSs). Males of Poecilia parae, a colour polymorphic fish, exhibit five distinct phenotypes: drab-coloured (immaculata), striped (parae), structural-coloured (blue) and carotenoid-based red and yellow morphs. Previous work indicates that immaculata males employ a sneaker strategy, whereas the red and yellow morphs exploit female preferences for carotenoid-based colours. Mating strategies favouring the maintenance of the other morphs remain to be determined. Here, we report the role of agonistic male-male interactions in influencing female mating preferences and male mating success, and in facilitating the evolution of AMSs. Results Our study reveals variation in aggressiveness among P. parae morphs during indirect and direct interactions with sexually receptive females. Two morphs, parae and yellow, use aggression to enhance their mating success (i.e., number of copulations) by 1) directly monopolizing access to females, and 2) modifying female preferences after winning agonistic encounters. Conversely, we found that the success of the drab-coloured immaculata morph, which specializes in a sneak copulation strategy, relies on its ability to circumvent both male aggression and female choice when facing all but yellow males. Conclusions Strong directional selection is expected to deplete genetic variation, yet many species show striking genetically-based polymorphisms. Most studies evoke frequency dependent selection to explain the persistence of such variation. Consistent with a growing body of evidence, our findings suggest that a complex form of balancing selection may alternatively explain the evolution and maintenance of AMSs in a colour polymorphic fish. 
In particular, this study demonstrates that intrasexual competition results in phenotypically distinct

  16. Synchronization Of Parallel Discrete Event Simulations

    Science.gov (United States)

    Steinman, Jeffrey S.

    1992-01-01

    Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Algorithm processes events optimistically in time cycles adapting while simulation in progress. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.

  17. Optimal energy management for a series-parallel hybrid electric bus

    International Nuclear Information System (INIS)

    Xiong Weiwei; Zhang Yong; Yin Chengliang

    2009-01-01

    This paper aims to present a new type of series-parallel hybrid electric bus and its energy management strategy. This hybrid bus is a post-transmission coupled system employing a novel transmission as the series-parallel configuration switcher. In this paper, the vehicle architecture, transmission scheme and numerical models are presented. The energy management system governs the mode switching between the series mode and the parallel mode as well as the instantaneous power distribution. In this work, two separate fuzzy-logic controllers, called Mode Decision and Parallel-driving Energy Management, are employed to fulfill these two tasks. The energy management strategy and the applications of fuzzy logic are described. The strategy is validated by a forward-facing simulation program based on the software Matlab/Simulink. The results show that the energy management strategy is effective in keeping the engine operating in a high-efficiency region and in sustaining the battery state of charge while satisfying drivability requirements. The energy consumption is theoretically reduced by 30.3% relative to that of the conventional bus under a transit bus driving cycle. In addition, topics requiring future study are presented.
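The abstract does not give the fuzzy rule base; as an illustration of the style of fuzzy-logic controller it describes, here is a tiny Mamdani-style sketch mapping battery state of charge and driver power demand to an engine power share. The membership functions, rules, and output levels are all invented for illustration:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def engine_share(soc, demand):
    """Fuzzy mode decision: fraction of the demand supplied by the engine.

    soc and demand are normalized to [0, 1]. Illustrative rules:
      low SOC -> engine supplies more (also recharges the battery);
      high SOC and low demand -> electric-only operation.
    """
    soc_low  = tri(soc, -0.4, 0.0, 0.6)
    soc_high = tri(soc, 0.4, 1.0, 1.4)
    dem_low  = tri(demand, -0.4, 0.0, 0.6)
    dem_high = tri(demand, 0.4, 1.0, 1.4)

    # (rule firing strength, crisp output level), defuzzified by weighted mean
    rules = [
        (min(soc_low, dem_high), 1.0),   # low SOC, high demand: engine-heavy
        (soc_low,                0.8),   # low SOC: favour the engine
        (min(soc_high, dem_low), 0.0),   # high SOC, low demand: electric only
        (soc_high,               0.3),   # high SOC: favour the battery
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.5
```

A real Mode Decision controller would add inputs such as vehicle speed and use a tuned rule base, but the fire-rules-then-defuzzify structure is the same.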

  18. Effect of migration based on strategy and cost on the evolution of cooperation

    International Nuclear Information System (INIS)

    Li, Yan; Ye, Hang

    2015-01-01

    Highlights: •We propose a migration based on strategy and cost in the Prisoner’s Dilemma Game. •The level of cooperation without mutation is higher than that with mutation. •Increased costs have no effect on the level of cooperation without mutation. •The level of cooperation decreases as cost increases with mutation. •An optimal density value ρ resulting in the maximum level of cooperation exists. -- Abstract: Humans consider not only their own ability but also the environment around them during the process of migration. Based on this fact, we introduce migration based on strategy and cost into the Spatial Prisoner’s Dilemma Game on a two-dimensional grid. The migration means that agents cannot move when all of their neighbors are cooperators; otherwise, agents move with a probability related to payoff and cost. The result obtained by computer simulation shows that the moving mechanism based on strategy and cost improves the level of cooperation in a wide parameter space. This occurs because movement based on strategy effectively preserves the cooperative clusters and because movement based on cost effectively regulates the rate of movement. Both types of movement provide a favorable guarantee for the evolution of stable cooperation under the mutation rate q = 0.0. In addition, we discuss the effectiveness of the migration mechanism in the evolution of cooperation under the mutation rate q = 0.001. The result indicates that a higher level of cooperation is obtained at a lower migration cost, whereas cooperation is suppressed at a higher migration cost. Our work may provide an effective method for understanding the emergence of cooperation in our society.
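A minimal version of the movement rule described above can be sketched as follows: an agent surrounded entirely by cooperators never moves; otherwise it moves with a probability that falls with its payoff and with the migration cost. The exact functional form below is an assumption for illustration, not the paper's specification:

```python
R, S, T, P = 3, 0, 5, 1  # standard PD payoffs (reward, sucker, temptation, punishment)

def payoff(strategy, neighbors):
    """Accumulated PD payoff of an agent against its neighborhood."""
    total = 0
    for other in neighbors:
        if strategy == "C":
            total += R if other == "C" else S
        else:
            total += T if other == "C" else P
    return total

def move_probability(strategy, neighbors, cost):
    """Migration rule sketch: stay put if all neighbors cooperate;
    otherwise move with probability that decreases with payoff
    (satisfaction) and with the migration cost."""
    if all(n == "C" for n in neighbors):
        return 0.0
    max_payoff = T * len(neighbors)  # best case: defecting among cooperators
    dissatisfaction = 1.0 - payoff(strategy, neighbors) / max_payoff
    return max(0.0, dissatisfaction - cost)
```

In a full simulation this probability would drive relocation to empty grid sites each generation, alongside imitation-based strategy updating and (optionally) mutation.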

  19. Evolution of a minimal parallel programming model

    International Nuclear Information System (INIS)

    Lusk, Ewing; Butler, Ralph; Pieper, Steven C.

    2017-01-01

    Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
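The self-scheduled task model amounts to idle workers pulling the next work unit from a shared pool, so load balances automatically even when task costs vary. ADLB itself is an MPI library; the following thread-based sketch shows only the scheduling idea:

```python
import queue
import threading

def run_task_pool(tasks, worker_fn, n_workers=4):
    """Self-scheduled task parallelism: workers repeatedly pull the next
    task from a shared queue until the pool is exhausted."""
    work = queue.Queue()
    for t in tasks:
        work.put(t)  # all tasks enqueued before workers start
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                t = work.get_nowait()
            except queue.Empty:
                return  # pool exhausted: this worker retires
            r = worker_fn(t)
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

squares = run_task_pool(range(100), lambda x: x * x)
```

The small API surface (put work, get work, return result) is exactly the kind of minimal interface the paper argues enables extreme scalability.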

  20. Parallel Jacobi EVD Methods on Integrated Circuits

    Directory of Open Access Journals (Sweden)

    Chi-Chia Sun

    2014-01-01

    Full Text Available Design strategies for parallel iterative algorithms are presented. In order to further study different tradeoff strategies in design criteria for integrated circuits, a 10 × 10 Jacobi Brent-Luk-EVD array with the simplified μ-CORDIC processor is used as an example. The experimental results show that using the μ-CORDIC processor is beneficial for the design criteria as it yields a smaller area, faster overall computation time, and less energy consumption than the regular CORDIC processor. It is worth noting that the proposed parallel EVD method can be applied to real-time and low-power array signal processing algorithms performing beamforming or DOA estimation.
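The Jacobi EVD method underlying such arrays repeatedly applies 2 × 2 plane rotations to annihilate off-diagonal elements; a systolic array like the one above evaluates many rotations concurrently, and the μ-CORDIC processor approximates the rotation itself. A plain sequential sketch of the cyclic Jacobi sweep:

```python
import math

def jacobi_eigenvalues(a, sweeps=10, tol=1e-12):
    """Cyclic Jacobi EVD for a symmetric matrix: sweep over all (p, q)
    pairs, zeroing each off-diagonal element with a plane rotation.
    Returns the eigenvalues in ascending order."""
    n = len(a)
    a = [row[:] for row in a]  # work on a copy
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(a[p][q]) < tol:
                    continue
                # rotation angle that annihilates a[p][q]
                theta = 0.5 * math.atan2(2.0 * a[p][q], a[p][p] - a[q][q])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):  # update columns p and q (A <- A G)
                    akp, akq = a[k][p], a[k][q]
                    a[k][p] = c * akp + s * akq
                    a[k][q] = -s * akp + c * akq
                for k in range(n):  # update rows p and q (A <- G^T A)
                    apk, aqk = a[p][k], a[q][k]
                    a[p][k] = c * apk + s * aqk
                    a[q][k] = -s * apk + c * aqk
    return sorted(a[i][i] for i in range(n))
```

In the Brent-Luk array, the (p, q) rotations of one sweep that touch disjoint index pairs execute in parallel on neighboring processors, which is what makes the method attractive for VLSI implementation.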

  1. Co-Evolution of Opinion and Strategy in Persuasion Dynamics: An Evolutionary Game Theoretical Approach

    Science.gov (United States)

    Ding, Fei; Liu, Yun; Li, Yong

    In this paper, a new model of opinion formation within the framework of evolutionary game theory is presented. The model simulates the strategic situations that arise when people discuss opinions. Heterogeneous agents adjust their behaviors to the environment during discussions, and their interacting strategies evolve together with their opinions. In the proposed game, we take into account the payoff discount for joining a discussion, and the possibility that people might drop out of an unpromising game. Analytical and simulation results show that the evolution of opinion and strategy always tends to converge, with the utility threshold, memory length, and decision uncertainty parameters influencing the convergence time. The model displays different dynamical regimes depending on how we set the rule governing behavior when people are at a loss for a strategy.

  2. A parallel adaptive finite element simplified spherical harmonics approximation solver for frequency domain fluorescence molecular imaging

    International Nuclear Information System (INIS)

    Lu Yujie; Zhu Banghe; Rasmussen, John C; Sevick-Muraca, Eva M; Shen Haiou; Wang Ge

    2010-01-01

    Fluorescence molecular imaging/tomography may play an important future role in preclinical research and clinical diagnostics. Time- and frequency-domain fluorescence imaging can acquire more measurement information than the continuous wave (CW) counterpart, improving the image quality of fluorescence molecular tomography. Although diffusion approximation (DA) theory has been extensively applied in optical molecular imaging, high-order photon migration models need to be further investigated to match quantitation provided by nuclear imaging. In this paper, a frequency-domain parallel adaptive finite element solver is developed with simplified spherical harmonics (SP_N) approximations. To fully evaluate the performance of the SP_N approximations, a fast time-resolved tetrahedron-based Monte Carlo fluorescence simulator suitable for complex heterogeneous geometries is developed using a convolution strategy to realize the simulation of the fluorescence excitation and emission. The validation results show that high-order SP_N can effectively correct the modeling errors of the diffusion equation, especially when the tissues have high absorption characteristics or when high modulation frequency measurements are used. Furthermore, the parallel adaptive mesh evolution strategy improves the modeling precision and the simulation speed significantly on a realistic digital mouse phantom. This solver is a promising platform for fluorescence molecular tomography using high-order approximations to the radiative transfer equation.

  3. Parallel implementations of 2D explicit Euler solvers

    International Nuclear Information System (INIS)

    Giraud, L.; Manzini, G.

    1996-01-01

    In this work we present a subdomain partitioning strategy applied to an explicit high-resolution Euler solver. We describe the design of a portable parallel multi-domain code suitable for parallel environments. We present several implementations on a representative range of MIMD computers that include shared memory multiprocessors, distributed virtual shared memory computers, as well as networks of workstations. Computational results are given to illustrate the efficiency, the scalability, and the limitations of the different approaches. We also discuss the effect of the communication protocol on the optimal domain partitioning strategy for the distributed memory computers
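The subdomain partitioning idea can be illustrated with a 1D explicit stencil: split the grid among ranks, exchange one-cell halos before every step, and the partitioned update reproduces the serial one exactly. The following serially emulates the message passing (it is not the authors' MPI code, and the 3-point averaging stencil is a stand-in for the Euler flux update):

```python
def step(u, left, right):
    """One explicit update (3-point averaging stencil) on a block,
    given halo values received from the neighbouring blocks."""
    padded = [left] + u + [right]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
            for i in range(1, len(u) + 1)]

def serial_solve(u, steps):
    """Reference single-domain solver (copy-value boundaries)."""
    for _ in range(steps):
        u = step(u, u[0], u[-1])
    return u

def parallel_solve(u, steps, n_blocks=4):
    """Domain decomposition: each 'rank' owns one contiguous block;
    halo cells are exchanged each step (here by direct copying)."""
    size = len(u) // n_blocks
    blocks = [u[i * size:(i + 1) * size] for i in range(n_blocks)]
    for _ in range(steps):
        new_blocks = []
        for r, blk in enumerate(blocks):
            left = blocks[r - 1][-1] if r > 0 else blk[0]
            right = blocks[r + 1][0] if r < n_blocks - 1 else blk[-1]
            new_blocks.append(step(blk, left, right))
        blocks = new_blocks
    return [x for blk in blocks for x in blk]
```

On a real distributed-memory machine the two halo copies become point-to-point messages, and their cost relative to the interior work is what drives the choice of partitioning strategy discussed above.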

  4. Development of a parallelization strategy for the VARIANT code

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Khalil, H.S.; Palmiotti, G.; Tatsumi, M.

    1996-01-01

    The VARIANT code solves the multigroup steady-state neutron diffusion and transport equation in three-dimensional Cartesian and hexagonal geometries using the variational nodal method. VARIANT consists of four major parts that must be executed sequentially: input handling, calculation of response matrices, solution algorithm (i.e. inner-outer iteration), and output of results. The objective of the parallelization effort was to reduce the overall computing time by distributing the work of the two computationally intensive (sequential) tasks, the coupling coefficient calculation and the iterative solver, equally among a group of processors. This report describes the code's calculations and gives performance results on one of the benchmark problems used to test the code. The performance analysis in the IBM SPx system shows good efficiency for well-load-balanced programs. Even for relatively small problem sizes, respectable efficiencies are seen for the SPx. An extension to achieve a higher degree of parallelism will be addressed in future work. 7 refs., 1 tab

  5. The evolution of Soviet forces, strategy, and command

    International Nuclear Information System (INIS)

    Ball, D.; Bethe, H.A.; Blair, B.G.; Bracken, P.; Carter, A.B.; Dickinson, H.; Garwin, R.L.; Holloway, D.; Kendall, H.W.

    1988-01-01

    This paper reports on the evolution of Soviet forces, strategy, and command. Soviet leaders have repeatedly emphasized that it would be tantamount to suicide to start a nuclear war. Mutual deterrence, however, does not make nuclear war impossible. The danger remains that a large-scale nuclear war could start inadvertently in an intense crisis, by escalation out of a conventional war, or through an unforeseen combination of these. For these reasons crisis management has become a central issue in the United States, but the standard Soviet response to this Western interest has been to say that what is needed is crisis avoidance, not recipes for brinkmanship masquerading under another name. There is much sense in this view. Nevertheless, this demeanor does not mean that the Soviet Union has given no thought to the danger that a crisis might lead to nuclear war, only that Soviet categories for thinking about such matters differ from those employed in the United States.

  6. Implementation of the Evolution Strategies (ES) Algorithm to Optimize Lovebird Feed Composition

    Directory of Open Access Journals (Sweden)

    Agung Mustika Rizki

    2017-05-01

    Full Text Available Lovebirds are currently popular, especially among bird lovers, and some people have begun trying to cultivate these birds. In the cultivation process, the feed composition must be considered in order to produce quality birds. Determining the feed is not easy because both the cost and the lovebirds' vitamin requirements must be taken into account. This problem can be solved with the Evolution Strategies (ES) algorithm. Based on test results, an optimal fitness value of 0.3125 was obtained using a population size of 100, and an optimal fitness value of 0.3267 was obtained at generation 1400.
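    The (µ+λ) selection scheme underlying such an evolution strategy can be sketched as follows. The feed objective below is a hypothetical stand-in for the paper's actual cost/vitamin model, and all parameter values are illustrative:

    ```python
    import random

    def evolution_strategy(fitness, dim, mu=10, lam=20, sigma=0.1, generations=200, seed=0):
        """Minimal (mu + lambda) evolution strategy with Gaussian mutation on [0, 1]^dim."""
        rng = random.Random(seed)
        pop = [[rng.random() for _ in range(dim)] for _ in range(mu)]
        for _ in range(generations):
            offspring = []
            for _ in range(lam):
                parent = rng.choice(pop)
                # mutate every gene, clamping to the feasible box
                offspring.append([min(1.0, max(0.0, g + rng.gauss(0.0, sigma)))
                                  for g in parent])
            # (mu + lambda) selection: parents and offspring compete together
            pop = sorted(pop + offspring, key=fitness, reverse=True)[:mu]
        return max(pop, key=fitness)

    # Hypothetical stand-in objective: a feed mix whose fractions should sum to 1
    def example_fitness(x):
        return 1.0 / (1.0 + abs(sum(x) - 1.0))

    best = evolution_strategy(example_fitness, dim=4)
    ```

    The elitist (µ+λ) rule guarantees monotone improvement of the best individual, which is why the reported fitness values can only increase across generations.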

  7. Language constructs for modular parallel programs

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.

    1996-03-01

    We describe programming language constructs that facilitate the application of modular design techniques in parallel programming. These constructs allow us to isolate resource management and processor scheduling decisions from the specification of individual modules, which can themselves encapsulate design decisions concerned with concurrency, communication, process mapping, and data distribution. This approach permits development of libraries of reusable parallel program components and the reuse of these components in different contexts. In particular, alternative mapping strategies can be explored without modifying other aspects of program logic. We describe how these constructs are incorporated in two practical parallel programming languages, PCN and Fortran M. Compilers have been developed for both languages, allowing experimentation in substantial applications.

  8. Evolution of strategies and competition in the international airline industry: a practical analysis using porter's competitive forces model

    OpenAIRE

    Zannoni, Niccolò

    2013-01-01

    This master thesis describes the evolution of the competition and strategies in the international airline industry. It studies the industry before and after deregulation, using the competitive forces model.

  9. Dataflow Query Execution in a Parallel Main-Memory Environment

    NARCIS (Netherlands)

    Wilschut, A.N.; Apers, Peter M.G.

    1991-01-01

    The performance and characteristics of the execution of various join-trees on a parallel DBMS are studied. The results are a step in the direction of the design of a query optimization strategy that is fit for parallel execution of complex queries. Among others, synchronization issues are identified.

  10. Strategy Dynamics through a Demand-Based Lens: The Evolution of Market Boundaries, Resource Rents and Competitive Positions

    OpenAIRE

    Adner, Ron; Zemsky, Peter

    2003-01-01

    We develop a novel approach to the dynamics of business strategy that is grounded in an explicit treatment of consumer choice when technologies improve over time. We address the evolution of market boundaries, resource rents and competitive positions by adapting models of competition with differentiated products. Our model is consistent with the central strategy assertion that competitive interactions are governed by superior value creation and competitive advantage. More importantly, it show...

  11. Experimental evolution and the dynamics of adaptation and genome evolution in microbial populations.

    Science.gov (United States)

    Lenski, Richard E

    2017-10-01

    Evolution is an on-going process, and it can be studied experimentally in organisms with rapid generations. My team has maintained 12 populations of Escherichia coli in a simple laboratory environment for >25 years and 60 000 generations. We have quantified the dynamics of adaptation by natural selection, seen some of the populations diverge into stably coexisting ecotypes, described changes in the bacteria's mutation rate, observed the new ability to exploit a previously untapped carbon source, characterized the dynamics of genome evolution and used parallel evolution to identify the genetic targets of selection. I discuss what the future might hold for this particular experiment, briefly highlight some other microbial evolution experiments and suggest how the fields of experimental evolution and microbial ecology might intersect going forward.

  12. Dataflow Query Execution in a Parallel, Main-memory Environment

    NARCIS (Netherlands)

    Wilschut, A.N.; Apers, Peter M.G.

    In this paper, the performance and characteristics of the execution of various join-trees on a parallel DBMS are studied. The results of this study are a step into the direction of the design of a query optimization strategy that is fit for parallel execution of complex queries. Among others,

  13. Laboratory Evolution to Alternating Substrate Environments Yields Distinct Phenotypic and Genetic Adaptive Strategies

    DEFF Research Database (Denmark)

    Sandberg, Troy E.; Lloyd, Colton J.; Palsson, Bernhard O.

    2017-01-01

    Laboratory evolution experiments typically maintain simple, static culturing environments so as to reduce selection pressure complexity. In this study, we investigated the adaptive strategies underlying evolution to fluctuating environments by evolving Escherichia coli to conditions of frequently switching growth substrate. Characterization of evolved strains via a number of different data types revealed the various genetic and phenotypic changes implemented in pursuit of growth optimality and how these differed across the different growth substrates and switching protocols. Strains exhibited adaptations distinct from those seen under static conditions and different adaptation strategies depending on the substrates being switched between; in some environments, a persistent "generalist" strain developed, while in another, two "specialist" subpopulations arose that alternated dominance. Diauxic lag phenotype varied across the generalists... This work not only helps to establish general principles of adaptation...

  14. Mini-review: Strategies for Variation and Evolution of Bacterial Antigens

    Science.gov (United States)

    Foley, Janet

    2015-01-01

    Across the eubacteria, antigenic variation has emerged as a strategy to evade host immunity. However, phenotypic variation in some of these antigens also allows the bacteria to exploit variable host niches as well. The specific mechanisms are not shared-derived characters although there is considerable convergent evolution and numerous commonalities reflecting considerations of natural selection and biochemical restraints. Unlike in viruses, mechanisms of antigenic variation in most bacteria involve larger DNA movement such as gene conversion or DNA rearrangement, although some antigens vary due to point mutations or modified transcriptional regulation. The convergent evolution that promotes antigenic variation integrates various evolutionary forces: these include mutations underlying variant production; drift which could remove alleles especially early in infection or during life history phases in arthropod vectors (when the bacterial population size goes through a bottleneck); selection not only for any particular variant but also for the mechanism for the production of variants (i.e., selection for mutability); and overcoming negative selection against variant production. This review highlights the complexities of drivers of antigenic variation, in particular extending evaluation beyond the commonly cited theory of immune evasion. A deeper understanding of the diversity of purpose and mechanisms of antigenic variation in bacteria will contribute to greater insight into bacterial pathogenesis, ecology and coevolution with hosts. PMID:26288700

  15. Directed evolution combined with synthetic biology strategies expedite semi-rational engineering of genes and genomes.

    Science.gov (United States)

    Kang, Zhen; Zhang, Junli; Jin, Peng; Yang, Sen

    2015-01-01

    Owing to our limited understanding of the relationship between sequence and function and of the interactions between intracellular pathways and regulatory systems, the rational design of enzyme-coding genes and the de novo assembly of a brand-new artificial genome for a desired functionality or phenotype are difficult to achieve. As an alternative approach, directed evolution has been widely used to engineer genomes and enzyme-coding genes. In particular, significant developments in DNA synthesis, DNA assembly (in vitro or in vivo), recombination-mediated genetic engineering, and high-throughput screening techniques in the field of synthetic biology have matured and been widely adopted, enabling rapid semi-rational genome engineering to generate variants with desired properties. In this commentary, these novel tools and their corresponding applications in the directed evolution of genomes and enzymes are discussed. Moreover, strategies for genome engineering and rapid in vitro enzyme evolution are also proposed.

  16. Using 2-Opt based evolution strategy for travelling salesman problem

    Directory of Open Access Journals (Sweden)

    Kenan Karagul

    2016-03-01

    Full Text Available The harmony search algorithm, which matches the (µ+1) evolution strategy, is a heuristic method inspired by the process of music improvisation. In this paper, a harmony search algorithm is applied directly to the travelling salesman problem. Instead of conventional selection operators such as roulette wheel, the real-valued harmony search solutions are transformed into an order index over the vertex representation, and solutions are improved using the 2-Opt local search algorithm. The resulting algorithm is then tested on two different parameter groups from TSPLIB. The proposed method is compared with a classical 2-Opt that is randomly restarted at each step, and with the best known solutions of the TSPLIB test instances. The proposed algorithm is seen to offer valuable solutions.
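    The 2-Opt improvement step used here can be sketched as follows; the 5-city distance matrix is a made-up example, not a TSPLIB instance:

    ```python
    def tour_length(tour, dist):
        """Total length of a closed tour under distance matrix `dist`."""
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def two_opt(tour, dist):
        """First-improvement 2-Opt: reverse segments while doing so shortens the tour."""
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour)):
                    candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                    if tour_length(candidate, dist) < tour_length(tour, dist):
                        tour, improved = candidate, True
        return tour

    # Made-up symmetric 5-city distance matrix (not a TSPLIB instance)
    dist = [
        [0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0],
    ]
    best = two_opt(list(range(5)), dist)
    ```

    Each accepted move removes two tour edges and reconnects the endpoints the other way, so the loop terminates at a local optimum with respect to single segment reversals.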

  17. Parallel computing solution of Boltzmann neutron transport equation

    International Nuclear Information System (INIS)

    Ansah-Narh, T.

    2010-01-01

    The focus of the research was on developing a parallel computing algorithm for solving eigenvalues of the Boltzmann Neutron Transport Equation (BNTE) in a slab geometry using a multi-grid approach. In response to the problem of slow execution of serial computing when solving large problems, such as the BNTE, the study was focused on the design of parallel computing systems, an evolution of serial computing that uses multiple processing elements simultaneously to solve complex physical and mathematical problems. The finite element method (FEM) was used for the spatial discretization scheme, while angular discretization was accomplished by expanding the angular dependence in terms of Legendre polynomials. The eigenvalues representing the multiplication factors in the BNTE were determined by the power method. MATLAB Compiler Version 4.1 (R2009a) was used to compile the MATLAB codes of the BNTE. The implemented parallel algorithms were enabled with matlabpool, a Parallel Computing Toolbox function. The option UseParallel was set to 'always' (the default value of the option is 'never'). When those conditions held, the solvers computed estimated gradients in parallel. The parallel computing system was used to handle all the bottlenecks in the matrix generated from the finite element scheme and each domain generated by the power method. The parallel algorithm was implemented on a Symmetric Multi-Processor (SMP) cluster machine with Intel 32-bit quad-core x86 processors. Convergence rates and timings for the algorithm on the SMP cluster machine were obtained. Numerical experiments indicated the designed parallel algorithm could reach perfect speedup and had good stability and scalability. (au)
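    The power method used here to obtain the multiplication factor can be sketched in generic matrix form. The 2×2 matrix below is a hypothetical example whose dominant eigenvalue is known to be 3, not a transport operator:

    ```python
    def mat_vec(A, x):
        """Dense matrix-vector product in pure Python."""
        return [sum(a * b for a, b in zip(row, x)) for row in A]

    def power_method(A, iters=100):
        """Estimate the dominant eigenvalue and eigenvector of A by repeated
        matrix-vector products with infinity-norm normalization."""
        x = [1.0] + [0.5] * (len(A) - 1)   # arbitrary non-degenerate start vector
        lam = 0.0
        for _ in range(iters):
            y = mat_vec(A, x)
            lam = max(abs(v) for v in y)   # infinity-norm estimate of the eigenvalue
            x = [v / lam for v in y]
        return lam, x

    # Hypothetical symmetric 2x2 matrix with dominant eigenvalue 3 (eigenvector [1, 1])
    A = [[2.0, 1.0], [1.0, 2.0]]
    lam, vec = power_method(A)
    ```

    In the transport setting the matrix-vector product is replaced by one inner-outer iteration sweep, which is exactly the step the abstract parallelizes across processors.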

  18. More efficient evolutionary strategies for model calibration with watershed model for demonstration

    Science.gov (United States)

    Baggett, J. S.; Skahill, B. E.

    2008-12-01

    distribution algorithms, pp. 75-102, Springer. Kern, S., Hansen, N., and Koumoutsakos, P. (2006). Local Meta-Models for Optimization Using Evolution Strategies. In Ninth International Conference on Parallel Problem Solving from Nature (PPSN IX), Proceedings, pp. 939-948. Berlin: Springer. Tahk, M., Woo, H., and Park, M. (2007). A hybrid optimization of evolutionary and gradient search. Engineering Optimization, 39, 87-104.

  19. The genetic architecture of parallel armor plate reduction in threespine sticklebacks.

    Directory of Open Access Journals (Sweden)

    Pamela F Colosimo

    2004-05-01

    Full Text Available How many genetic changes control the evolution of new traits in natural populations? Are the same genetic changes seen in cases of parallel evolution? Despite long-standing interest in these questions, they have been difficult to address, particularly in vertebrates. We have analyzed the genetic basis of natural variation in three different aspects of the skeletal armor of threespine sticklebacks (Gasterosteus aculeatus): the pattern, number, and size of the bony lateral plates. A few chromosomal regions can account for variation in all three aspects of the lateral plates, with one major locus contributing to most of the variation in lateral plate pattern and number. Genetic mapping and allelic complementation experiments show that the same major locus is responsible for the parallel evolution of armor plate reduction in two widely separated populations. These results suggest that a small number of genetic changes can produce major skeletal alterations in natural populations and that the same major locus is used repeatedly when similar traits evolve in different locations.

  20. Parallel science and engineering applications the Charm++ approach

    CERN Document Server

    Kale, Laxmikant V

    2016-01-01

    Developed in the context of science and engineering applications, with each abstraction motivated by and further honed by specific application needs, Charm++ is a production-quality system that runs on almost all parallel computers available. Parallel Science and Engineering Applications: The Charm++ Approach surveys a diverse and scalable collection of science and engineering applications, most of which are used regularly on supercomputers by scientists to further their research. After a brief introduction to Charm++, the book presents several parallel CSE codes written in the Charm++ model, along with their underlying scientific and numerical formulations, explaining their parallelization strategies and parallel performance. These chapters demonstrate the versatility of Charm++ and its utility for a wide variety of applications, including molecular dynamics, cosmology, quantum chemistry, fracture simulations, agent-based simulations, and weather modeling. The book is intended for a wide audience of people i...

  1. Parallel pic plasma simulation through particle decomposition techniques

    International Nuclear Information System (INIS)

    Briguglio, S.; Vlad, G.; Di Martino, B.; Naples, Univ. 'Federico II'

    1998-02-01

    Particle-in-cell (PIC) codes are among the major candidates to yield a satisfactory description of the detail of kinetic effects, such as the resonant wave-particle interaction, relevant in determining the transport mechanism in magnetically confined plasmas. A significant improvement of the simulation performance of such codes can be expected from parallelization, e.g., by distributing the particle population among several parallel processors. Parallelization of a hybrid magnetohydrodynamic-gyrokinetic code has been accomplished within the High Performance Fortran (HPF) framework, and tested on the IBM SP2 parallel system, using a 'particle decomposition' technique. The adopted technique requires a moderate effort in porting the code to parallel form and results in intrinsic load balancing and modest interprocessor communication. The performance tests obtained confirm the hypothesis of high effectiveness of the strategy, if targeted towards moderately parallel architectures. Optimal use of resources is also discussed with reference to a specific physics problem. [it
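    The particle-decomposition idea — split the particle population among workers, let each deposit onto a private copy of the field array, then reduce by summation — can be illustrated with a 1-D toy charge deposit. This is a conceptual sketch, not the hybrid MHD-gyrokinetic code itself, and the particle positions are invented:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def deposit(particles, grid_size):
        """Deposit unit charges onto a 1-D grid (nearest-cell weighting)."""
        grid = [0.0] * grid_size
        for x in particles:
            grid[int(x * grid_size) % grid_size] += 1.0
        return grid

    def parallel_deposit(particles, grid_size, workers=4):
        """Particle decomposition: split the particle list among workers,
        deposit onto private grids, then reduce the grids by summation."""
        chunks = [particles[i::workers] for i in range(workers)]
        with ThreadPoolExecutor(max_workers=workers) as ex:
            partials = list(ex.map(lambda chunk: deposit(chunk, grid_size), chunks))
        return [sum(cell) for cell in zip(*partials)]

    particles = [0.05, 0.15, 0.15, 0.25, 0.85]  # hypothetical positions in [0, 1)
    rho = parallel_deposit(particles, grid_size=10)
    ```

    Because each worker owns an arbitrary subset of particles rather than a spatial region, the load is balanced by construction, matching the "intrinsic load balancing" claimed in the abstract; the reduction over private grids is the only communication step.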

  2. The Transformation of Cyavana: A Case Study in Narrative Evolution

    Directory of Open Access Journals (Sweden)

    Emily West

    2017-03-01

    Full Text Available The assessment of possible genetic relationships between pairs of proposed narrative parallels currently relies on subjective conventional wisdom-based criteria. This essay presents an attempt at categorizing patterns of narrative evolution through the comparison of variants of orally-composed, fixed-text Sanskrit tales. Systematic examination of the changes that took place over the developmental arc of _The Tale of Cyavana_ offers a number of insights that may be applied to the understanding of the evolution of oral narratives in general. An evidence-based exposition of the principles that govern the process of narrative evolution could provide more accurate diagnostic tools for evaluating narrative parallels.

  3. Fluorous Parallel Synthesis of A Hydantoin/Thiohydantoin Library

    OpenAIRE

    Lu, Yimin; Zhang, Wei

    2005-01-01

    Fluorous tagging strategy is applied to solution-phase parallel synthesis of a library containing hydantoin and thiohydantoin analogs. Two perfluoroalkyl (Rf)-tagged α-amino esters each react with 6 aromatic aldehydes under reductive amination conditions. Twelve amino esters then each react with 10 isocyanates and isothiocyanates in parallel. The resulting 120 ureas and thioureas undergo spontaneous cyclization to form the corresponding hydantoins and thiohydantoins. The intermediate and fina...

  4. Parallel genetic algorithms with migration for the hybrid flow shop scheduling problem

    Directory of Open Access Journals (Sweden)

    K. Belkadi

    2006-01-01

    Full Text Available This paper addresses scheduling problems in hybrid flow-shop-like systems with a migration parallel genetic algorithm (PGA_MIG). This parallel genetic algorithm model allows genetic diversity through selection and reproduction mechanisms closer to nature. The spatial structure of the population is modified by dividing it into disjoint subpopulations. From time to time, individuals are exchanged between the different subpopulations (migration). The influence of parameters and dedicated strategies is studied. These parameters are the number of independent subpopulations, the interconnection topology between subpopulations, the choice/replacement strategy for the migrant individuals, and the migration frequency. A comparison between the sequential and parallel versions of the genetic algorithm (GA) is provided. This comparison covers the quality of the solution and the execution time of the two versions. The efficiency of the parallel model depends highly on the parameters, especially the migration frequency. Likewise, this parallel model gives a significant improvement in computational time if it is implemented on a parallel architecture that offers an acceptable number of processors (as many processors as subpopulations).
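    The migration model described — disjoint subpopulations, a ring interconnection topology, and a best-replaces-worst migrant strategy — can be sketched as follows. The objective function and all parameter values are illustrative stand-ins, not those of the paper:

    ```python
    import random

    def island_ga(fitness, dim, islands=4, pop_size=10, generations=50,
                  migrate_every=10, seed=0):
        """Island-model GA: independent subpopulations with periodic ring migration."""
        rng = random.Random(seed)
        pops = [[[rng.random() for _ in range(dim)] for _ in range(pop_size)]
                for _ in range(islands)]
        for gen in range(1, generations + 1):
            for k in range(islands):
                # tournament selection + uniform crossover + point mutation
                new = []
                for _ in range(pop_size):
                    p1 = max(rng.sample(pops[k], 2), key=fitness)
                    p2 = max(rng.sample(pops[k], 2), key=fitness)
                    child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
                    if rng.random() < 0.2:
                        child[rng.randrange(dim)] = rng.random()
                    new.append(child)
                pops[k] = new
            if gen % migrate_every == 0:
                # ring topology: each island sends its best individual to the
                # next island, replacing that island's worst individual
                bests = [max(p, key=fitness) for p in pops]
                for k in range(islands):
                    nxt = (k + 1) % islands
                    worst = min(pops[nxt], key=fitness)
                    pops[nxt].remove(worst)
                    pops[nxt].append(bests[k])
        return max((max(p, key=fitness) for p in pops), key=fitness)

    # Hypothetical objective: maximize the sum of genes (optimum near all-ones)
    best = island_ga(lambda x: sum(x), dim=5)
    ```

    The migration frequency (`migrate_every`) is exactly the parameter the abstract identifies as critical: frequent migration makes the islands behave like one panmictic population, while rare migration preserves diversity at the cost of slower spread of good solutions.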

  5. Linguistics: evolution and language change.

    Science.gov (United States)

    Bowern, Claire

    2015-01-05

    Linguists have long identified sound changes that occur in parallel. Now novel research shows how Bayesian modeling can capture complex concerted changes, revealing how evolution of sounds proceeds. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Rapid sequencing of the bamboo mitochondrial genome using Illumina technology and parallel episodic evolution of organelle genomes in grasses.

    Science.gov (United States)

    Ma, Peng-Fei; Guo, Zhen-Hua; Li, De-Zhu

    2012-01-01

    Compared to their counterparts in animals, the mitochondrial (mt) genomes of angiosperms exhibit a number of unique features. However, unravelling their evolution is hindered by the few completed genomes, essentially all of which were Sanger sequenced. While next-generation sequencing technologies have revolutionized chloroplast genome sequencing, they are just beginning to be applied to angiosperm mt genomes. Chloroplast genomes of grasses (Poaceae) have undergone episodic evolution, and the evolutionary rate was suggested to be correlated between chloroplast and mt genomes in Poaceae. It is therefore interesting to investigate whether correlated rate change also occurred in grass mt genomes, as expected under lineage effects. A time-calibrated phylogenetic tree is needed to examine rate change. We determined a largely complete mt genome from a bamboo, Ferrocalamus rimosivaginus (Poaceae), through Illumina sequencing of total DNA. With a combination of de novo and reference-guided assembly, 39.5-fold-coverage Illumina reads were assembled into scaffolds totalling 432,839 bp. The assembled genome contains nearly the same genes as the completed mt genomes in Poaceae. To examine evolutionary rates in grass mt genomes, we reconstructed a phylogenetic tree including 22 taxa based on 31 mt genes. The topology of the well-resolved tree was almost identical to that inferred from the chloroplast genome, with only minor differences. The inconsistency possibly derived from long-branch attraction in the mtDNA tree. By calculating absolute substitution rates, we found significant rate change (∼4-fold) in the mt genome before and after the diversification of Poaceae, in both synonymous and nonsynonymous terms. Furthermore, the rate change was correlated with that of chloroplast genomes in grasses. Our result demonstrates that Illumina sequencing of total DNA is a rapid and efficient approach for obtaining angiosperm mt genome sequences. The parallel episodic evolution of mt and chloroplast

  7. Parallel sites implicate functional convergence of the hearing gene prestin among echolocating mammals.

    Science.gov (United States)

    Liu, Zhen; Qi, Fei-Yan; Zhou, Xin; Ren, Hai-Qing; Shi, Peng

    2014-09-01

    Echolocation is a sensory system whereby certain mammals navigate and forage using sound waves, usually in environments where visibility is limited. Curiously, echolocation has evolved independently in bats and whales, which occupy entirely different environments. Based on this phenotypic convergence, recent studies identified several echolocation-related genes with parallel sites at the protein sequence level among different echolocating mammals, and among these, prestin seems the most promising. Although previous studies analyzed the evolutionary mechanism of prestin, the functional roles of the parallel sites in the evolution of mammalian echolocation are not clear. By functional assays, we show that a key parameter of prestin function, 1/α, is increased in all echolocating mammals and that the N7T parallel substitution accounted for this functional convergence. Moreover, another parameter, V1/2, was shifted toward the depolarization direction in a toothed whale, the bottlenose dolphin (Tursiops truncatus) and a constant-frequency (CF) bat, the Stoliczka's trident bat (Aselliscus stoliczkanus). The parallel site of I384T between toothed whales and CF bats was responsible for this functional convergence. Furthermore, the two parameters (1/α and V1/2) were correlated with mammalian high-frequency hearing, suggesting that the convergent changes of the prestin function in echolocating mammals may play important roles in mammalian echolocation. To our knowledge, these findings present the functional patterns of echolocation-related genes in echolocating mammals for the first time and rigorously demonstrate adaptive parallel evolution at the protein sequence level, paving the way to insights into the molecular mechanism underlying mammalian echolocation. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product-unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  9. Evidence of Parallel Processing During Translation

    DEFF Research Database (Denmark)

    Balling, Laura Winther; Hvelplund, Kristian Tangsgaard; Sjørup, Annette Camilla

    2014-01-01

    We conclude that translation is a parallel process and that literal translation is likely to be a universal initial default strategy in translation. This conclusion is strengthened by the fact that all three experiments were relatively naturalistic, due to the combination of remote eye tracking and mixed...

  10. Comparative eye-tracking evaluation of scatterplots and parallel coordinates

    Directory of Open Access Journals (Sweden)

    Rudolf Netzel

    2017-06-01

    Full Text Available We investigate task performance and reading characteristics for scatterplots (Cartesian coordinates) and parallel coordinates. In a controlled eye-tracking study, we asked 24 participants to assess the relative distance of points in multidimensional space, depending on the diagram type (parallel coordinates or a horizontal collection of scatterplots), the number of data dimensions (2, 4, 6, or 8), and the relative distance between points (15%, 20%, or 25%). For a given reference point and two target points, we instructed participants to choose the target point that was closer to the reference point in multidimensional space. We present a visual scanning model that describes different strategies to solve this retrieval task for both diagram types, and propose corresponding hypotheses that we test using task completion time, accuracy, and gaze positions as dependent variables. Our results show that scatterplots outperform parallel coordinates significantly in 2 dimensions; however, the task was solved more quickly and more accurately with parallel coordinates in 8 dimensions. The eye-tracking data further show significant differences between Cartesian and parallel coordinates, as well as between different numbers of dimensions. For parallel coordinates, there is a clear trend toward shorter fixations and longer saccades with increasing number of dimensions. Using an area-of-interest (AOI) based approach, we identify different reading strategies for each diagram type: for parallel coordinates, the participants' gaze frequently jumped back and forth between pairs of axes, while axes were rarely focused on when viewing Cartesian coordinates. We further found that participants' attention is biased: toward the center of the whole plot for parallel coordinates, and skewed to the center/left side for Cartesian coordinates. We anticipate that these results may support the design of more effective visualizations for multidimensional data.

  11. An Efficient Parallel Multi-Scale Segmentation Method for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Haiyan Gu

    2018-04-01

    Full Text Available Remote sensing (RS) image segmentation is an essential step in geographic object-based image analysis (GEOBIA) to ultimately derive "meaningful objects". While many segmentation methods exist, most of them are not efficient for large data sets. Thus, the goal of this research is to develop an efficient parallel multi-scale segmentation method for RS imagery by combining graph theory and the fractal net evolution approach (FNEA). Specifically, a minimum spanning tree (MST) algorithm from graph theory is combined with the minimum heterogeneity rule (MHR) algorithm used in FNEA. The MST algorithm is used for the initial segmentation, while the MHR algorithm is used for object merging. An efficient implementation of the segmentation strategy is presented using data partitioning and a "reverse searching-forward processing" chain based on message passing interface (MPI) parallel technology. Segmentation results of the proposed method using images from multiple sensors (airborne SPECIM AISA EAGLE II, WorldView-2, RADARSAT-2) and different selected landscapes (residential/industrial, residential/agriculture) covering four test sites indicated its efficiency in accuracy and speed. We conclude that the proposed method is applicable and efficient for the segmentation of a variety of RS imagery (airborne optical, satellite optical, SAR, high-spectral), while the accuracy is comparable with that of the FNEA method.
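    The combination of Kruskal-style MST construction with a merge criterion can be illustrated with a toy sketch. Here a simple intensity-difference threshold stands in for the minimum heterogeneity rule, and the 2×4 "image" is invented; this is a serial conceptual sketch, not the authors' MPI implementation:

    ```python
    class DSU:
        """Union-find over pixel indices, used to grow segments."""
        def __init__(self, n):
            self.parent = list(range(n))
        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x
        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra != rb:
                self.parent[rb] = ra

    def mst_segment(image, threshold):
        """Kruskal-style merging: join 4-neighbours whose intensity
        difference is below `threshold` (a toy stand-in for the MHR)."""
        rows, cols = len(image), len(image[0])
        idx = lambda r, c: r * cols + c
        edges = []
        for r in range(rows):
            for c in range(cols):
                if c + 1 < cols:
                    edges.append((abs(image[r][c] - image[r][c + 1]), idx(r, c), idx(r, c + 1)))
                if r + 1 < rows:
                    edges.append((abs(image[r][c] - image[r + 1][c]), idx(r, c), idx(r + 1, c)))
        dsu = DSU(rows * cols)
        for w, a, b in sorted(edges):      # ascending weight, as in Kruskal's MST
            if w <= threshold:
                dsu.union(a, b)
        return [dsu.find(i) for i in range(rows * cols)]

    # Hypothetical 2x4 image with two flat regions (values 10 and 200)
    image = [[10, 10, 200, 200],
             [10, 10, 200, 200]]
    labels = mst_segment(image, threshold=5)
    ```

    Processing edges in ascending weight order means cheap merges happen first, so the segments produced are exactly the connected components of the MST forest cut at the threshold; the parallel version in the paper partitions this work across MPI processes.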

  12. Regional-scale calculation of the LS factor using parallel processing

    Science.gov (United States)

    Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong

    2015-05-01

    With the increase of data resolution and the increasing application of the USLE over large areas, the existing serial implementation of algorithms for computing the LS factor is becoming a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the characteristics of the algorithms, including a decomposition method that maintains the integrity of the results, an optimized workflow that reduces the time spent exporting unnecessary intermediate data, and a buffer-communication-computation strategy that improves communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.

  13. Teaching evolution (and all of biology) more effectively: Strategies for engagement, critical reasoning, and confronting misconceptions.

    Science.gov (United States)

    Nelson, Craig E

    2008-08-01

    The strength of the evidence supporting evolution has increased markedly since the discovery of DNA but, paradoxically, public resistance to accepting evolution seems to have become stronger. A key dilemma is that science faculty have often continued to teach evolution ineffectively, even as the evidence that traditional ways of teaching are inferior has become stronger and stronger. Three pedagogical strategies that together can make a large difference in students' understanding and acceptance of evolution are extensive use of interactive engagement, a focus on critical thinking in science (especially on comparisons and explicit criteria) and using both of these in helping the students actively compare their initial conceptions (and publicly popular misconceptions) with more fully scientific conceptions. The conclusion that students' misconceptions must be dealt with systematically can be difficult for faculty who are teaching evolution since much of the students' resistance is framed in religious terms and one might be reluctant to address religious ideas in class. Applications to teaching evolution are illustrated with examples that address criteria and critical thinking, standard geology versus flood geology, evolutionary developmental biology versus organs of extreme perfection, and the importance of using humans as a central example. It is also helpful to bridge the false dichotomy, seen by many students, between atheistic evolution versus religious creationism. These applications are developed in detail and are intended to be sufficient to allow others to use these approaches in their teaching. Students and other faculty were quite supportive of these approaches as implemented in my classes.

  14. Niche-driven evolution of metabolic and life-history strategies in natural and domesticated populations of Saccharomyces cerevisiae

    Directory of Open Access Journals (Sweden)

    Sicard Delphine

    2009-12-01

    Full Text Available Abstract Background Variation of resource supply is one of the key factors that drive the evolution of life-history strategies, and hence the interactions between individuals. In the yeast Saccharomyces cerevisiae, two life-history strategies related to different resource utilization have been previously described in strains from different industrial origins. In this work, we analyzed metabolic traits and life-history strategies in a broader collection of yeast strains sampled in various ecological niches (forest, human body, fruits, laboratory and industrial environments). Results By analysing the genetic and plastic variation of six life-history and three metabolic traits, we showed that S. cerevisiae populations harbour different strategies depending on their ecological niches. On one hand, the forest and laboratory strains, referred to as extreme "ants", reproduce quickly, reach a large carrying capacity and a small cell size in fermentation, but have a low reproduction rate in respiration. On the other hand, the industrial strains, referred to as extreme "grasshoppers", reproduce slowly, reach a small carrying capacity but have a large cell size in fermentation and a high reproduction rate in respiration. "Grasshoppers" usually have a higher glucose consumption rate than "ants", while they produce lower quantities of ethanol, suggesting that they store cell resources rather than secreting secondary products to cross-feed or poison competitors. The clinical and fruit strains are intermediate between these two groups. Conclusions Altogether, these results are consistent with a niche-driven evolution of S. cerevisiae, with phenotypic convergence of populations living in similar habitats. They also revealed that competition between strains having contrasted life-history strategies ("ants" and "grasshoppers") seems to occur at low frequency or be unstable, since opposite life-history strategies appeared to be maintained in distinct ecological niches.

  15. Prototyping and Simulating Parallel, Distributed Computations with VISA

    National Research Council Canada - National Science Library

    Demeure, Isabelle M; Nutt, Gary J

    1989-01-01

    ...] to support the design, prototyping, and simulation of parallel, distributed computations. In particular, VISA is meant to guide the choice of partitioning and communication strategies for such computations, based on their performance...

  16. Biodiversity Meets Neuroscience: From the Sequencing Ship (Ship-Seq) to Deciphering Parallel Evolution of Neural Systems in Omic's Era.

    Science.gov (United States)

    Moroz, Leonid L

    2015-12-01

    The origins of neural systems and centralized brains are one of the major transitions in evolution. These events might have occurred more than once over 570-600 million years. The convergent evolution of neural circuits is evident from a diversity of unique adaptive strategies implemented by ctenophores, cnidarians, acoels, molluscs, and basal deuterostomes. But further integration of biodiversity research and neuroscience is required to decipher critical events leading to the development of complex integrative and cognitive functions. Here, we outline reference species and interdisciplinary approaches in reconstructing the evolution of nervous systems. In the "omic" era, it is now possible to establish fully functional genomics laboratories aboard oceanic ships and perform sequencing and real-time analyses of data at any oceanic location (named here as Ship-Seq). In doing so, fragile, rare, cryptic, and planktonic organisms, or even entire marine ecosystems, are becoming accessible directly to experimental and physiological analyses by modern analytical tools. Thus, we are now in a position to take full advantage of the countless "experiments" Nature performed for us in the course of 3.5 billion years of biological evolution. Together with progress in computational and comparative genomics, evolutionary neuroscience, proteomics and developmental biology, a new surprising picture is emerging that reveals the many ways in which nervous systems evolved. As a result, this symposium provides a unique opportunity to revisit old questions about the origins of biological complexity. © The Author 2015. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.

  17. An object-oriented programming paradigm for parallelization of computational fluid dynamics

    International Nuclear Information System (INIS)

    Ohta, Takashi.

    1997-03-01

    We propose an object-oriented programming paradigm for the parallelization of scientific computing programs, and show that the approach can be a very useful strategy. Generally, parallelization of scientific programs tends to be complicated and unportable due to the specific requirements of each parallel computer or compiler. In this paper, we show that an object-oriented programming design, which separates the parallel processing parts from the solver of the applications, can achieve a large improvement in the maintainability of the codes, as well as high portability. We design the program for the two-dimensional Euler equations according to the paradigm, and evaluate the parallel performance on IBM SP2. (author)
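    The separation described in this abstract can be sketched in a few lines. This is a hypothetical illustration of the design pattern, not the paper's code: the class names (`Communicator`, `DiffusionSolver`) and a toy 1D diffusion step are my own stand-ins, chosen so the solver never touches machine-specific parallel code directly.

```python
class Communicator:
    """The only interface the solver sees. An MPI backend would override
    these methods; this serial backend makes them no-ops."""
    def exchange_boundaries(self, field):
        return field          # serial: nothing to exchange
    def global_sum(self, value):
        return value          # serial: the local value is already global

class DiffusionSolver:
    """A toy explicit 1D diffusion step, unaware of how it is parallelized."""
    def __init__(self, comm):
        self.comm = comm
    def step(self, u, nu=0.25):
        u = self.comm.exchange_boundaries(u)
        return [u[0]] + [
            u[i] + nu * (u[i - 1] - 2 * u[i] + u[i + 1])
            for i in range(1, len(u) - 1)
        ] + [u[-1]]
```

    Porting to a new parallel machine then means writing one new `Communicator` subclass, leaving the numerical solver untouched, which is the maintainability and portability gain the abstract claims.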

  18. Parallel simulated annealing algorithms for cell placement on hypercube multiprocessors

    Science.gov (United States)

    Banerjee, Prithviraj; Jones, Mark Howard; Sargent, Jeff S.

    1990-01-01

    Two parallel algorithms for standard cell placement using simulated annealing are developed to run on distributed-memory message-passing hypercube multiprocessors. The cells can be mapped in a two-dimensional area of a chip onto processors in an n-dimensional hypercube in two ways, such that both small and large cell exchange and displacement moves can be applied. The computation of the cost function in parallel among all the processors in the hypercube is described, along with a distributed data structure that needs to be stored in the hypercube to support the parallel cost evaluation. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. A dynamic parallel annealing schedule estimates the errors due to interacting parallel moves and adapts the rate of synchronization automatically. Two novel approaches in controlling error in parallel algorithms are described: heuristic cell coloring and adaptive sequence control.
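    The annealing core that these parallel algorithms distribute can be sketched serially. This is an illustrative sketch under simplifying assumptions (a 1D placement, a span-based wirelength cost, names `wirelength`/`anneal` invented here); the paper's actual contributions, such as distributing moves across hypercube nodes with tree broadcasts and adaptive error control, are omitted.

```python
import math
import random

def wirelength(placement, nets):
    """Span-based cost: for each net, the spread of its cells' positions."""
    pos = {cell: i for i, cell in enumerate(placement)}
    return sum(max(pos[c] for c in net) - min(pos[c] for c in net)
               for net in nets)

def anneal(cells, nets, t0=5.0, cooling=0.95, steps=2000, seed=1):
    """Metropolis acceptance over random pairwise cell swaps."""
    rng = random.Random(seed)
    placement = list(cells)
    best = list(placement)
    t = t0
    for _ in range(steps):
        i, j = rng.randrange(len(cells)), rng.randrange(len(cells))
        old = wirelength(placement, nets)
        placement[i], placement[j] = placement[j], placement[i]
        new = wirelength(placement, nets)
        if new > old and rng.random() >= math.exp((old - new) / t):
            placement[i], placement[j] = placement[j], placement[i]  # reject
        elif new < wirelength(best, nets):
            best = list(placement)       # accepted and improves the best seen
        t *= cooling                      # geometric cooling schedule
    return best
```

    In the parallel versions described above, many such moves proceed concurrently on different processors, which is why the error-control techniques (cell coloring, adaptive sequence control) are needed: concurrent moves can evaluate the cost against a stale placement.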

  19. Decentralized Interleaving of Paralleled Dc-Dc Buck Converters: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Brian B [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Rodriguez, Miguel [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Sinha, Mohit [University of Minnesota]; Dhople, Sairaj [University of Minnesota]; Poon, Jason [University of California at Berkeley]

    2017-09-01

    We present a decentralized control strategy that yields switch interleaving among parallel connected dc-dc buck converters without communication. The proposed method is based on the digital implementation of the dynamics of a nonlinear oscillator circuit as the controller. Each controller is fully decentralized, i.e., it only requires the locally measured output current to synthesize the pulse width modulation (PWM) carrier waveform. By virtue of the intrinsic electrical coupling between converters, the nonlinear oscillator-based controllers converge to an interleaved state with uniform phase-spacing across PWM carriers. To the knowledge of the authors, this work represents the first fully decentralized strategy for switch interleaving of paralleled dc-dc buck converters.

  20. Musical emotions: Functions, origins, evolution

    Science.gov (United States)

    Perlovsky, Leonid

    2010-03-01

    Theories of music origins and the role of musical emotions in the mind are reviewed. Most existing theories contradict each other, and cannot explain mechanisms or roles of musical emotions in workings of the mind, nor evolutionary reasons for music origins. Music seems to be an enigma. Nevertheless, a synthesis of cognitive science and mathematical models of the mind has been proposed describing a fundamental role of music in the functioning and evolution of the mind, consciousness, and cultures. The review considers ancient theories of music as well as contemporary theories advanced by leading authors in this field. It addresses one hypothesis that promises to unify the field and proposes a theory of musical origin based on a fundamental role of music in cognition and evolution of consciousness and culture. We consider a split in the vocalizations of proto-humans into two types: one less emotional and more concretely-semantic, evolving into language, and the other preserving emotional connections along with semantic ambiguity, evolving into music. The proposed hypothesis departs from other theories in considering specific mechanisms of the mind-brain, which required the evolution of music parallel with the evolution of cultures and languages. Arguments are reviewed that the evolution of language toward becoming the semantically powerful tool of today required emancipation from emotional encumbrances. The opposite, no less powerful mechanisms required a compensatory evolution of music toward more differentiated and refined emotionality. The need for refined music in the process of cultural evolution is grounded in fundamental mechanisms of the mind. This is why today's human mind and cultures cannot exist without today's music. The reviewed hypothesis gives a basis for future analysis of why different evolutionary paths of languages were paralleled by different evolutionary paths of music. Approaches toward experimental verification of this hypothesis in

  1. Local and Nonlocal Parallel Heat Transport in General Magnetic Fields

    International Nuclear Information System (INIS)

    Castillo-Negrete, D. del; Chacon, L.

    2011-01-01

    A novel approach for the study of parallel transport in magnetized plasmas is presented. The method avoids numerical pollution issues of grid-based formulations and applies to integrable and chaotic magnetic fields with local or nonlocal parallel closures. In weakly chaotic fields, the method gives the fractal structure of the devil's staircase radial temperature profile. In fully chaotic fields, the temperature exhibits self-similar spatiotemporal evolution with a stretched-exponential scaling function for local closures and an algebraically decaying one for nonlocal closures. It is shown that, for both closures, the effective radial heat transport is incompatible with the quasilinear diffusion model.

  2. Biodiversity Meets Neuroscience: From the Sequencing Ship (Ship-Seq) to Deciphering Parallel Evolution of Neural Systems in Omic’s Era

    Science.gov (United States)

    Moroz, Leonid L.

    2015-01-01

    The origins of neural systems and centralized brains are one of the major transitions in evolution. These events might have occurred more than once over 570–600 million years. The convergent evolution of neural circuits is evident from a diversity of unique adaptive strategies implemented by ctenophores, cnidarians, acoels, molluscs, and basal deuterostomes. But further integration of biodiversity research and neuroscience is required to decipher critical events leading to the development of complex integrative and cognitive functions. Here, we outline reference species and interdisciplinary approaches in reconstructing the evolution of nervous systems. In the “omic” era, it is now possible to establish fully functional genomics laboratories aboard oceanic ships and perform sequencing and real-time analyses of data at any oceanic location (named here as Ship-Seq). In doing so, fragile, rare, cryptic, and planktonic organisms, or even entire marine ecosystems, are becoming accessible directly to experimental and physiological analyses by modern analytical tools. Thus, we are now in a position to take full advantage of the countless “experiments” Nature performed for us in the course of 3.5 billion years of biological evolution. Together with progress in computational and comparative genomics, evolutionary neuroscience, proteomics and developmental biology, a new surprising picture is emerging that reveals the many ways in which nervous systems evolved. As a result, this symposium provides a unique opportunity to revisit old questions about the origins of biological complexity. PMID:26163680

  3. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    Science.gov (United States)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a type of trial-and-error search technique that are guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly-generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). Because of dependency
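    The master-slave scheme described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: a thread pool stands in for the slave nodes of a Beowulf cluster, the bit-count `fitness` is a toy placeholder for an expensive circuit simulation, and `evolve` is a hypothetical name.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def fitness(genome):            # placeholder for an expensive simulation
    return sum(genome)          # toy objective: count of 1 bits

def evolve(pop_size=20, length=16, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        for _ in range(generations):
            # "slaves" score the whole pool in parallel
            scores = list(pool.map(fitness, pop))
            # master ranks, selects, and breeds
            ranked = [g for _, g in sorted(zip(scores, pop), reverse=True)]
            nxt = [ranked[0]]                          # elitism
            while len(nxt) < pop_size:
                p1, p2 = rng.sample(ranked[:pop_size // 2], 2)
                cut = rng.randrange(1, length)         # one-point crossover
                child = p1[:cut] + p2[cut:]
                if rng.random() < 0.2:                 # bit-flip mutation
                    child[rng.randrange(length)] ^= 1
                nxt.append(child)
            pop = nxt
    return max(pop, key=fitness)
```

    Because fitness evaluation dominates the runtime, this structure keeps the slaves saturated while only genomes and scores cross the network, which is why the abstract notes that inexpensive office ethernet suffices.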

  4. The evolution of concepts of vestibular peripheral information processing: toward the dynamic, adaptive, parallel processing macular model

    Science.gov (United States)

    Ross, Muriel D.

    2003-01-01

    In a letter to Robert Hooke, written on 5 February, 1675, Isaac Newton wrote "If I have seen further than certain other men it is by standing upon the shoulders of giants." In his context, Newton was referring to the work of Galileo and Kepler, who preceded him. However, every field has its own giants, those men and women who went before us and, often with few tools at their disposal, uncovered the facts that enabled later researchers to advance knowledge in a particular area. This review traces the history of the evolution of views from early giants in the field of vestibular research to modern concepts of vestibular organ organization and function. Emphasis will be placed on the mammalian maculae as peripheral processors of linear accelerations acting on the head. This review shows that early, correct findings were sometimes unfortunately disregarded, impeding later investigations into the structure and function of the vestibular organs. The central themes are that the macular organs are highly complex, dynamic, adaptive, distributed parallel processors of information, and that historical references can help us to understand our own place in advancing knowledge about their complicated structure and functions.

  5. Optimization Algorithms for Calculation of the Joint Design Point in Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1992-01-01

    In large structures it is often necessary to estimate the reliability of the system by use of parallel systems. Optimality criteria-based algorithms for calculation of the joint design point in a parallel system are described and efficient active set strategies are developed. Three possible...

  6. The RNA-world and co-evolution hypothesis and the origin of life: Implications, research strategies and perspectives

    Science.gov (United States)

    Lahav, Noam

    1993-01-01

    The applicability of the RNA-world and co-evolution hypotheses to the study of the very first stages of the origin of life is discussed. The discussion focuses on the basic differences between the two hypotheses and their implications, with regard to the reconstruction methodology, ribosome emergence, the balance between ribozymes and protein enzymes, and their major difficulties. Additional complexities of the two hypotheses, such as membranes and the energy source of the first reactions, are not treated in the present work. A central element in the proposed experimental strategies is the study of the catalytic activities of very small peptides and RNA-like oligomers, according to existing, as well as yet-to-be-invented, scenarios of the two hypotheses under consideration. It is suggested that the novel directed molecular evolution technology, and molecular computational modeling, can be applied to this research. This strategy is assumed to be essential for the suggested goal of future studies of the origin of life, namely, the establishment of a 'Primordial Darwinian entity'.

  7. The RNA-world and co-evolution hypotheses and the origin of life: Implications, research strategies and perspectives

    Science.gov (United States)

    Lahav, Noam

    1993-12-01

    The applicability of the RNA-world and co-evolution hypotheses to the study of the very first stages of the origin of life is discussed. The discussion focuses on the basic differences between the two hypotheses and their implications, with regard to the reconstruction methodology, ribosome emergence, balance between ribozymes and protein enzymes, and their major difficulties. Additional complexities of the two hypotheses, such as membranes and the energy source of the first reactions, are not treated in the present work. A central element in the proposed experimental strategies is the study of the catalytic activities of very small peptides and RNA-like oligomers, according to existing, as well as to yet-to-be-invented scenarios of the two hypotheses under consideration. It is suggested that the novel directed molecular evolution technology, and molecular computational modeling, can be applied to this research. This strategy is assumed to be essential for the suggested goal of future studies of the origin of life, namely, the establishment of a ‘Primordial Darwinian entity’.

  8. The evolution of pattern camouflage strategies in waterfowl and game birds.

    Science.gov (United States)

    Marshall, Kate L A; Gluckman, Thanh-Lan

    2015-05-01

    Visual patterns are common in animals. A broad survey of the literature has revealed that different patterns have distinct functions. Irregular patterns (e.g., stipples) typically function in static camouflage, whereas regular patterns (e.g., stripes) have a dual function in both motion camouflage and communication. Moreover, irregular and regular patterns located on different body regions ("bimodal" patterning) can provide an effective compromise between camouflage and communication and/or enhanced concealment via both static and motion camouflage. Here, we compared the frequency of these three pattern types and traced their evolutionary history using Bayesian comparative modeling in aquatic waterfowl (Anseriformes: 118 spp.), which typically escape predators by flight, and terrestrial game birds (Galliformes: 170 spp.), which mainly use a "sit and hide" strategy to avoid predation. Given these life histories, we predicted that selection would favor regular patterning in Anseriformes and irregular or bimodal patterning in Galliformes and that pattern function complexity should increase over the course of evolution. Regular patterns were predominant in Anseriformes whereas regular and bimodal patterns were most frequent in Galliformes, suggesting that patterns with multiple functions are broadly favored by selection over patterns with a single function in static camouflage. We found that the first patterns to evolve were either regular or bimodal in Anseriformes and either irregular or regular in Galliformes. In both orders, irregular patterns could evolve into regular patterns but not the reverse. Our hypothesis of increasing complexity in pattern camouflage function was supported in Galliformes but not in Anseriformes. These results reveal a trajectory of pattern evolution linked to increasing function complexity in Galliformes although not in Anseriformes, suggesting that both ecology and function complexity can have a profound influence on pattern evolution.

  9. A Model for Speedup of Parallel Programs

    Science.gov (United States)

    1997-01-01

    Sanjeev K. Setia. The interaction between memory allocation and adaptive partitioning in message-passing multicomputers. In IPPS '95 Workshop on Job Scheduling Strategies for Parallel Processing, pages 89-99, 1995. [15] Sanjeev K. Setia and Satish K. Tripathi. A comparative analysis of static

  10. Advanced Material Strategies for Next-Generation Additive Manufacturing.

    Science.gov (United States)

    Chang, Jinke; He, Jiankang; Mao, Mao; Zhou, Wenxing; Lei, Qi; Li, Xiao; Li, Dichen; Chua, Chee-Kai; Zhao, Xin

    2018-01-22

    Additive manufacturing (AM) has drawn tremendous attention in various fields. In recent years, great efforts have been made to develop novel additive manufacturing processes such as micro-/nano-scale 3D printing, bioprinting, and 4D printing for the fabrication of complex 3D structures with high resolution, living components, and multimaterials. The development of advanced functional materials is important for the implementation of these novel additive manufacturing processes. Here, a state-of-the-art review on advanced material strategies for novel additive manufacturing processes is provided, mainly including conductive materials, biomaterials, and smart materials. The advantages, limitations, and future perspectives of these materials for additive manufacturing are discussed. It is believed that the innovations of material strategies in parallel with the evolution of additive manufacturing processes will provide numerous possibilities for the fabrication of complex smart constructs with multiple functions, which will significantly widen the application fields of next-generation additive manufacturing.

  11. Advanced Material Strategies for Next-Generation Additive Manufacturing

    Directory of Open Access Journals (Sweden)

    Jinke Chang

    2018-01-01

    Full Text Available Additive manufacturing (AM) has drawn tremendous attention in various fields. In recent years, great efforts have been made to develop novel additive manufacturing processes such as micro-/nano-scale 3D printing, bioprinting, and 4D printing for the fabrication of complex 3D structures with high resolution, living components, and multimaterials. The development of advanced functional materials is important for the implementation of these novel additive manufacturing processes. Here, a state-of-the-art review on advanced material strategies for novel additive manufacturing processes is provided, mainly including conductive materials, biomaterials, and smart materials. The advantages, limitations, and future perspectives of these materials for additive manufacturing are discussed. It is believed that the innovations of material strategies in parallel with the evolution of additive manufacturing processes will provide numerous possibilities for the fabrication of complex smart constructs with multiple functions, which will significantly widen the application fields of next-generation additive manufacturing.

  12. Advanced Material Strategies for Next-Generation Additive Manufacturing

    Science.gov (United States)

    Chang, Jinke; He, Jiankang; Zhou, Wenxing; Lei, Qi; Li, Xiao; Li, Dichen

    2018-01-01

    Additive manufacturing (AM) has drawn tremendous attention in various fields. In recent years, great efforts have been made to develop novel additive manufacturing processes such as micro-/nano-scale 3D printing, bioprinting, and 4D printing for the fabrication of complex 3D structures with high resolution, living components, and multimaterials. The development of advanced functional materials is important for the implementation of these novel additive manufacturing processes. Here, a state-of-the-art review on advanced material strategies for novel additive manufacturing processes is provided, mainly including conductive materials, biomaterials, and smart materials. The advantages, limitations, and future perspectives of these materials for additive manufacturing are discussed. It is believed that the innovations of material strategies in parallel with the evolution of additive manufacturing processes will provide numerous possibilities for the fabrication of complex smart constructs with multiple functions, which will significantly widen the application fields of next-generation additive manufacturing. PMID:29361754

  13. A novel role for Mc1r in the parallel evolution of depigmentation in independent populations of the cavefish Astyanax mexicanus.

    Directory of Open Access Journals (Sweden)

    Joshua B Gross

    2009-01-01

    Full Text Available The evolution of degenerate characteristics remains a poorly understood phenomenon. Only recently has the identification of mutations underlying regressive phenotypes become accessible through the use of genetic analyses. Focusing on the Mexican cave tetra Astyanax mexicanus, we describe, here, an analysis of the brown mutation, which was first described in the literature nearly 40 years ago. This phenotype causes reduced melanin content, decreased melanophore number, and brownish eyes in convergent cave forms of A. mexicanus. Crosses demonstrate non-complementation of the brown phenotype in F2 individuals derived from two independent cave populations: Pachón and the linked Yerbaniz and Japonés caves, indicating the same locus is responsible for reduced pigmentation in these fish. While the brown mutant phenotype arose prior to the fixation of albinism in Pachón cave individuals, it is unclear whether the brown mutation arose before or after the fixation of albinism in the linked Yerbaniz/Japonés caves. Using a QTL approach combined with sequence and functional analyses, we have discovered that two distinct genetic alterations in the coding sequence of the gene Mc1r cause reduced pigmentation associated with the brown mutant phenotype in these caves. Our analysis identifies a novel role for Mc1r in the evolution of degenerative phenotypes in blind Mexican cavefish. Further, the brown phenotype has arisen independently in geographically separate caves, mediated through different mutations of the same gene. This example of parallelism indicates that certain genes are frequent targets of mutation in the repeated evolution of regressive phenotypes in cave-adapted species.

  14. Evolution of CMS Workload Management Towards Multicore Job Support

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Calero Yzquierdo, A. [Madrid, CIEMAT]; Hernández, J. M. [Madrid, CIEMAT]; Khan, F. A. [Quaid-i-Azam U.]; Letts, J. [UC, San Diego]; Majewski, K. [Fermilab]; Rodrigues, A. M. [Fermilab]; McCrea, A. [UC, San Diego]; Vaandering, E. [Fermilab]

    2015-12-23

    The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of the LHC Run 2. High-pileup complex-collision events represent a challenge for the traditional sequential programming in terms of memory and processing time budget. The CMS data production and processing framework is introducing the parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting single-core processing for other tasks that are difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single-core and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, responsible in 2015 for the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems in order to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.

  15. Towards physical principles of biological evolution

    Science.gov (United States)

    Katsnelson, Mikhail I.; Wolf, Yuri I.; Koonin, Eugene V.

    2018-03-01

    Biological systems reach organizational complexity that far exceeds the complexity of any known inanimate objects. Biological entities undoubtedly obey the laws of quantum physics and statistical mechanics. However, is modern physics sufficient to adequately describe, model and explain the evolution of biological complexity? Detailed parallels have been drawn between statistical thermodynamics and the population-genetic theory of biological evolution. Based on these parallels, we outline new perspectives on biological innovation and major transitions in evolution, and introduce a biological equivalent of thermodynamic potential that reflects the innovation propensity of an evolving population. Deep analogies have been suggested to also exist between the properties of biological entities and processes, and those of frustrated states in physics, such as glasses. Such systems are characterized by frustration, whereby local states with minimal free energy conflict with the global minimum, resulting in ‘emergent phenomena’. We extend such analogies by examining frustration-type phenomena, such as conflicts between different levels of selection, in biological evolution. These frustration effects appear to drive the evolution of biological complexity. We further address evolution in multidimensional fitness landscapes from the point of view of percolation theory and suggest that percolation at a level above the critical threshold dictates the tree-like evolution of complex organisms. Taken together, these multiple connections between fundamental processes in physics and biology imply that construction of a meaningful physical theory of biological evolution might not be a futile effort. However, it is unrealistic to expect that such a theory can be created in one scoop; if it ever comes into being, this can only happen through integration of multiple physical models of evolutionary processes. Furthermore, the existing framework of theoretical physics is unlikely to suffice

  16. Feed-forward volume rendering algorithm for moderately parallel MIMD machines

    Science.gov (United States)

    Yagel, Roni

    1993-01-01

    Algorithms for direct volume rendering on parallel and vector processors are investigated. Volumes are transformed efficiently on parallel processors by dividing the data into slices and beams of voxels. Equal sized sets of slices along one axis are distributed to processors. Parallelism is achieved at two levels. Because each slice can be transformed independently of others, processors transform their assigned slices with no communication, thus providing maximum possible parallelism at the first level. Within each slice, consecutive beams are incrementally transformed using coherency in the transformation computation. Also, coherency across slices can be exploited to further enhance performance. This coherency yields the second level of parallelism through the use of the vector processing or pipelining. Other ongoing efforts include investigations into image reconstruction techniques, load balancing strategies, and improving performance.

  17. Parallel processing for nonlinear dynamics simulations of structures including rotating bladed-disk assemblies

    Science.gov (United States)

    Hsieh, Shang-Hsien

    1993-01-01

    The principal objective of this research is to develop, test, and implement coarse-grained, parallel-processing strategies for nonlinear dynamic simulations of practical structural problems. There are contributions to four main areas: finite element modeling and analysis of rotational dynamics, numerical algorithms for parallel nonlinear solutions, automatic partitioning techniques to effect load-balancing among processors, and an integrated parallel analysis system.

  18. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    Science.gov (United States)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)

    2000-01-01

    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.

  19. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    Science.gov (United States)

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, using three types of named entity recognition (NER) tasks as a demonstration. Results show that, in most cases, processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other biomedical text mining tasks besides NER.

  20. Modelling and parallel calculation of a kinetic boundary layer

    International Nuclear Information System (INIS)

    Perlat, Jean Philippe

    1998-01-01

    This research thesis aims at addressing reliability and cost issues in the calculation by numeric simulation of flows in transition regime. The first step has been to reduce calculation cost and memory space for the Monte Carlo method, which is known to provide performance and reliability for rarefied regimes. Vector and parallel computers allow this objective to be reached. Here, a MIMD (multiple instructions, multiple data) machine has been used which implements parallel calculation at different levels of parallelization. Parallelization procedures have been adapted, and results showed that parallelization by calculation-domain decomposition was far more efficient. Due to the reliability issue related to the statistical nature of Monte Carlo methods, a new deterministic model was necessary to simulate gas molecules in transition regime. New models and hyperbolic systems have therefore been studied. One is chosen which allows the thermodynamic values (density, average velocity, temperature, deformation tensor, heat flow) present in the Navier-Stokes equations to be determined, and the equations of evolution of these thermodynamic values are described for the mono-atomic case, together with their numerical resolution. A kinetic scheme is developed which complies with the structure of all systems, and which naturally expresses boundary conditions. The validation of the obtained 14-moment model is performed on shock problems and on Couette flows.

  1. Design and development of a learning progression about stellar structure and evolution

    Directory of Open Access Journals (Sweden)

    Arturo Colantonio

    2018-06-01

    Full Text Available [This paper is part of the Focused Collection on Astronomy Education Research.] In this paper we discuss the design and development of a learning progression (LP) to describe and interpret students’ understanding about stellar structure and evolution (SSE). The LP is built upon three content dimensions: hydrostatic equilibrium; composition and aggregation state; functioning and evolution. The data to build up the levels of the hypothetical LP (LP1) came from a 45-minute, seven-question interview with 33 high school students previously taught about the topic. The questions were adapted from an existing multiple-choice instrument. Data were analyzed using Minstrell’s “facets” approach. To assess the validity of LP1, we designed a twelve-hour teaching module featuring paper-and-pencil tasks and practical activities to estimate stellar structure and evolution parameters. Twenty high school students were interviewed before and after the activities using the same interview protocol. Results informed a revision of LP1 (LP2) and, in parallel, of the module. The revised module included supplementary activities corresponding to changes made to LP1. We then assessed LP2 with 30 high school students through the same interview, administered before and after the teaching intervention. A final version of the LP (LP3) was then developed drawing on students’ emerging reasoning strategies. This paper contributes to research in science education by providing an example of the iterative development of the instruction required to support the student thinking that LPs’ levels describe. Concerning astronomy education research, our findings can inform suitable instructional activities more responsive to students’ reasoning strategies about stellar structure and evolution.

  2. Teaching and Learning: Highlighting the Parallels between Education and Participatory Evaluation.

    Science.gov (United States)

    Vanden Berk, Eric J.; Cassata, Jennifer Coyne; Moye, Melinda J.; Yarbrough, Donald B.; Siddens, Stephanie K.

    As an evaluation team trained in educational psychology and committed to participatory evaluation and its evolution, the researchers have found the parallel between evaluator-stakeholder roles in the participatory evaluation process and educator-student roles in educational psychology theory to be important. One advantage then is that the theories…

  3. Parallelization of MCNP 4, a Monte Carlo neutron and photon transport code system, in highly parallel distributed memory type computer

    International Nuclear Information System (INIS)

    Masukawa, Fumihiro; Takano, Makoto; Naito, Yoshitaka; Yamazaki, Takao; Fujisaki, Masahide; Suzuki, Koichiro; Okuda, Motoi.

    1993-11-01

    In order to improve the accuracy and calculating speed of shielding analyses, MCNP 4, a Monte Carlo neutron and photon transport code system, has been parallelized and its efficiency measured on the highly parallel distributed-memory computer AP1000. The code was analyzed statically and dynamically, and a suitable parallelization algorithm was determined for the shielding analysis functions of MCNP 4. This includes a strategy whereby a new history is assigned dynamically to an idling processor element during execution. Furthermore, to avoid congestion in communication processing, a batch concept, processing multiple histories as a unit, has been introduced. By analyzing a sample cask problem with 2,000,000 histories on the AP1000 with 512 processor elements, a parallelization efficiency of 82% was achieved, and the calculational speed was estimated to be around 50 times that of the FACOM M-780. (author)
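The batch-based dynamic history assignment described above can be mimicked with Python's standard multiprocessing pool, where whichever worker goes idle pulls the next batch of histories. This is a hedged toy sketch: run_batch and its "pass fraction" tally are hypothetical stand-ins for MCNP's transport physics, not the actual code.

```python
import random
from multiprocessing import Pool

def run_batch(args):
    """Simulate one batch of histories; returns a partial tally.
    A toy stand-in for MCNP's per-history transport, NOT the real physics."""
    seed, batch_size = args
    rng = random.Random(seed)
    # Toy tally: count 'particles' that pass a shield with probability 0.1
    return sum(1 for _ in range(batch_size) if rng.random() < 0.1)

if __name__ == "__main__":
    n_histories, batch = 100_000, 1_000
    batches = [(seed, batch) for seed in range(n_histories // batch)]
    with Pool(4) as pool:
        # imap_unordered hands the next batch to whichever worker idles
        # first, mirroring the dynamic history-assignment strategy above
        tallies = list(pool.imap_unordered(run_batch, batches))
    print(sum(tallies) / n_histories)  # estimated pass fraction, close to 0.1
```

Grouping histories into batches keeps the per-message work large relative to the communication cost, which is the same congestion-avoidance idea as in the abstract.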

  4. Parallel algorithms for boundary value problems

    Science.gov (United States)

    Lin, Avi

    1991-01-01

    A general approach to solve boundary value problems numerically in a parallel environment is discussed. The basic algorithm consists of two steps: the local step where all the P available processors work in parallel, and the global step where one processor solves a tridiagonal linear system of the order P. The main advantages of this approach are twofold. First, this suggested approach is very flexible, especially in the local step and thus the algorithm can be used with any number of processors and with any of the SIMD or MIMD machines. Secondly, the communication complexity is very small and thus can be used as easily with shared memory machines. Several examples for using this strategy are discussed.
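The global step reduces to a tridiagonal linear system of order P; for reference, here is a serial Thomas-algorithm sketch for that step (the function name is ours, not from the paper):

```python
def solve_tridiagonal(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal,
    d = right-hand side. Thomas algorithm, O(n); a[0] and c[-1] are unused."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: a 3x3 system whose exact solution is x = [1, 2, 3]
print(solve_tridiagonal([0, 1, 1], [2, 2, 2], [1, 1, 0], [4, 8, 8]))
```

In the two-step scheme above, each of the P processors would contribute one row of such a system from its local solve; the reduced system itself is only of order P, so solving it serially is cheap.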

  5. Neural nets for massively parallel optimization

    Science.gov (United States)

    Dixon, Laurence C. W.; Mills, David

    1992-07-01

    To apply massively parallel processing systems to the solution of large-scale optimization problems it is desirable to be able to evaluate any function f(z), z ∈ R^n, in a parallel manner. The theorem of Cybenko, Hecht-Nielsen, Hornik, Stinchcombe and White, and Funahashi shows that this can be achieved by a neural network with one hidden layer. In this paper we address the problem of the number of nodes required in the layer to achieve a given accuracy in the function and gradient values at all points within a given n-dimensional interval. The type of activation function needed to obtain nonsingular Hessian matrices is described, and a strategy for obtaining accurate minimal networks is presented.
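As a hedged illustration of the one-hidden-layer result, the sketch below fixes random hidden weights and fits only the output weights by least squares to approximate a smooth target. This shows the representational claim only; it is not the paper's node-count or Hessian analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function on [-1, 1]
f = lambda x: np.sin(3 * x)
x = np.linspace(-1, 1, 200)[:, None]

# One hidden layer: random input weights and biases, tanh activation
n_hidden = 50
W = rng.normal(0, 3, (1, n_hidden))
b = rng.normal(0, 1, n_hidden)
H = np.tanh(x @ W + b)                 # hidden-layer outputs, shape (200, 50)

# Fit only the output weights by least squares
v, *_ = np.linalg.lstsq(H, f(x), rcond=None)
err = np.max(np.abs(H @ v - f(x)))
print(err)                             # uniform error on the sample grid
```

With 50 hidden nodes the fit is already very accurate on this grid; the paper's question is how the required number of nodes grows with the demanded accuracy.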

  6. The advertising strategies

    Institute of Scientific and Technical Information of China (English)

    YAKOUBI Mohamed Lamine

    2013-01-01

    We will try to demonstrate, through a survey of the various advertising creation strategies and their evolution, how advertising communication passed from a vision, or strategy, focused on the product to a vision focused on the brand. The first advertising strategy applied by advertising agencies was the "Unique Selling Proposition"; it focused only on the product's advantages, and its philosophy dominated the advertising world, through its various evolutions, until the nineties, when new advertising strategies brought a more brand-oriented philosophy to the field.

  7. A PARALLEL MONTE CARLO CODE FOR SIMULATING COLLISIONAL N-BODY SYSTEMS

    International Nuclear Information System (INIS)

    Pattabiraman, Bharath; Umbreit, Stefan; Liao, Wei-keng; Choudhary, Alok; Kalogera, Vassiliki; Memik, Gokhan; Rasio, Frederic A.

    2013-01-01

    We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N ∼ 10^7 particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and the introduction of a parallel random number generation scheme as well as a parallel sorting algorithm required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce along with our choice of decomposition scheme minimize communication costs and ensure optimal distribution of data and workload among the processing units. Our implementation uses the Message Passing Interface library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functions up to core collapse with the number of stars, N, spanning three orders of magnitude from 10^5 to 10^7. We find that our results are in good agreement with self-similar core-collapse solutions, and the core-collapse times generally agree with expectations from the literature. We also observe good total energy conservation, and the runtime scales well with the number of processors up to 128 for N = 10^6 and 256 for N = 10^7. The runtime reaches saturation with the addition of processors beyond these limits, which is a characteristic of the parallel sorting algorithm. The resulting maximum speedups we achieve are approximately 60×, 100×, and 220×, respectively.

  8. Efficient multitasking: parallel versus serial processing of multiple tasks.

    Science.gov (United States)

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling.

  9. A parallel simulated annealing algorithm for standard cell placement on a hypercube computer

    Science.gov (United States)

    Jones, Mark Howard

    1987-01-01

    A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
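The cell-exchange move and cooling schedule can be illustrated with a serial toy version (a 1-D placement minimizing total wirelength; the hypercube mapping, displacement moves, and tree broadcasting of the paper are not reproduced here, and all parameters are our assumptions):

```python
import math, random

random.seed(1)

# Toy netlist: each net connects a pair of cells; cost = total wirelength
n_cells = 20
nets = [(random.randrange(n_cells), random.randrange(n_cells)) for _ in range(40)]

def wirelength(pos):
    return sum(abs(pos[a] - pos[b]) for a, b in nets)

# pos[c] = slot of cell c; start from the identity placement
pos = list(range(n_cells))
cost, T = wirelength(pos), 10.0
while T > 0.01:
    for _ in range(100):
        a, b = random.randrange(n_cells), random.randrange(n_cells)
        pos[a], pos[b] = pos[b], pos[a]          # cell-exchange move
        new = wirelength(pos)
        if new <= cost or random.random() < math.exp((cost - new) / T):
            cost = new                           # accept the move
        else:
            pos[a], pos[b] = pos[b], pos[a]      # reject: undo the swap
    T *= 0.9                                     # geometric cooling
print(cost)
```

The parallel version in the paper evaluates such moves concurrently on hypercube nodes; the serial loop above only shows the accept/reject logic and the annealing schedule.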

  10. Boltzmann machines as a model for parallel annealing

    NARCIS (Netherlands)

    Aarts, E.H.L.; Korst, J.H.M.

    1991-01-01

    The potential of Boltzmann machines to cope with difficult combinatorial optimization problems is investigated. A discussion of various (parallel) models of Boltzmann machines is given based on the theory of Markov chains. A general strategy is presented for solving (approximately) combinatorial optimization problems.

  11. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    Science.gov (United States)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. 
The application results show that the proposed parallel approach to calculating flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU-based implementations.
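For intuition about the underlying recurrence, here is a minimal serial single-flow-direction (D8-style) flow-accumulation sketch on a tiny DEM. The paper's contribution, GPU-parallel DEM preprocessing plus a recursive multiple-flow-direction algorithm, is far more involved; this toy assumes a depression-free DEM.

```python
def flow_accumulation(dem):
    """D8-style: each cell drains to its lowest strictly-lower 8-neighbour;
    accumulation = the cell's own area (1) plus all flow passing through it."""
    rows, cols = len(dem), len(dem[0])
    cells = sorted(((dem[r][c], r, c) for r in range(rows) for c in range(cols)),
                   reverse=True)               # process highest cells first
    acc = [[1.0] * cols for _ in range(rows)]
    for height, r, c in cells:
        best, target = height, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols \
                        and dem[rr][cc] < best:
                    best, target = dem[rr][cc], (rr, cc)
        if target:                             # pass accumulated flow downhill
            acc[target[0]][target[1]] += acc[r][c]
    return acc

dem = [[3.0, 2.0, 1.0],
       [4.0, 3.0, 0.0],
       [5.0, 4.0, 3.0]]
print(flow_accumulation(dem))  # the pit at (1, 2) collects all nine cells
```

Processing cells from highest to lowest guarantees every upstream contribution is final before it is passed on; an MFD variant would instead split each cell's flow among all lower neighbours.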

  12. The Evolution of Enterprise Organization Designs

    OpenAIRE

    Jay R. Galbraith

    2012-01-01

    This article extends Alfred Chandler's seminal ideas about strategy and organizational structure, and it predicts the next stage of organizational evolution. Chandler described the evolution of vertical integration and diversification strategies for which the functional and multidivisional structures are appropriate. He also explained how the dominant structure at any point in time is a concatenation or accumulation of all previous strategies and structures. I extend Chandler's ideas by descr...

  13. Impact of weed control strategies on resistance evolution in Alopecurus myosuroides – a long-term field trial

    Directory of Open Access Journals (Sweden)

    Ulber, Lena

    2016-02-01

    Full Text Available The impact of various herbicide strategies on populations of Alopecurus myosuroides has been investigated in a long-term field trial situated in Wendhausen (Germany) since 2009. In the initial years of the field experiment, resistant populations were selected by means of repeated application of the same herbicide active ingredients. For the selection of different resistance profiles, herbicides with actives from different HRAC groups were used. The herbicide actives flupyrsulfuron, isoproturon and fenoxaprop-P were applied for two years on large plots. In a succeeding field trial starting in 2011, it was investigated whether the now-resistant field populations could be controlled by various herbicide strategies. Eight different strategies consisting of various herbicide combinations were tested. Resistance evolution was monitored by means of plant counts and molecular genetic analysis.

  14. [The evolution of nursing shortage and strategies to face it: a longitudinal study in 11 hospitals].

    Science.gov (United States)

    Stringhetta, Francesca; Dal Ponte, Adriana; Palese, Alvisa

    2012-01-01

    To describe the perception of the evolution of the nursing shortage from 2000 to 2009 according to Nursing Coordinators, and the strategies adopted to face it. Nursing coordinators of 11 hospitals or districts of the Friuli Venezia Giulia, Trentino Alto Adige and Veneto regions were interviewed in 2000, 2004 and 2009 to collect data and assess their perception of the nurses' shortage. In the first interview the mean gap between the staff planned and in service was -5.4%; in 2004 it was -9.4% and in 2009 -3.3%. The shortage, once seasonal, is now constant and observed in all the wards. In 2000 and 2004 on average 5 strategies to face the shortage were implemented; in 2009, 7. No systematic strategies were used, with the exception of the unification of wards, mainly during summer to let staff go on holidays. According to Nursing Coordinators the effects of the shortage are already observable (although not quantified) on patients and nurses. The nurses' shortage has been one of the challenges of the last 10 years. Its causes have changed, but not the strategies implemented.

  15. Adaptive co-evolution of strategies and network leading to optimal cooperation level in spatial prisoner's dilemma game

    International Nuclear Information System (INIS)

    Han-Shuang, Chen; Zhong-Huai, Hou; Hou-Wen, Xin; Ji-Qian, Zhang

    2010-01-01

    We study evolutionary prisoner's dilemma game on adaptive networks where a population of players co-evolves with their interaction networks. During the co-evolution process, interacted players with opposite strategies either rewire the link between them with probability p or update their strategies with probability 1 – p depending on their payoffs. Numerical simulation shows that the final network is either split into some disconnected communities whose players share the same strategy within each community or forms a single connected network in which all nodes are in the same strategy. Interestingly, the density of cooperators in the final state can be maximised in an intermediate range of p via the competition between time scale of the network dynamics and that of the node dynamics. Finally, the mean-field analysis helps to understand the results of numerical simulation. Our results may provide some insight into understanding the emergence of cooperation in the real situation where the individuals' behaviour and their relationship adaptively co-evolve. (general)
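The co-evolution rule described above (rewire a mixed link with probability p, otherwise let the lower-payoff player imitate the higher) can be sketched as follows. The graph construction, payoff matrix, and update details are our simplifying assumptions, not the authors' exact model.

```python
import random

random.seed(0)

N, p, b, steps = 60, 0.5, 1.2, 20000
# Random graph with ~2N links (duplicate links are merged by the set)
edges = set()
while len(edges) < 2 * N:
    a, c = random.sample(range(N), 2)
    edges.add((min(a, c), max(a, c)))
strategy = [random.choice("CD") for _ in range(N)]

def payoff(i):
    # Weak prisoner's dilemma: C-C gives 1 each, D exploiting C gives b
    total = 0.0
    for a, c in edges:
        if i in (a, c):
            j = c if i == a else a
            if strategy[i] == "C" and strategy[j] == "C":
                total += 1
            elif strategy[i] == "D" and strategy[j] == "C":
                total += b
    return total

for _ in range(steps):
    mixed = [e for e in edges if strategy[e[0]] != strategy[e[1]]]
    if not mixed:
        break                     # frozen: all links join same-strategy players
    a, c = random.choice(mixed)
    if random.random() < p:       # rewire: the cooperator escapes the defector
        keep = a if strategy[a] == "C" else c
        new = random.choice([x for x in range(N) if x not in (a, c)])
        edges.discard((a, c))
        edges.add((min(keep, new), max(keep, new)))
    else:                         # imitation: lower payoff copies the higher
        loser, winner = (a, c) if payoff(a) < payoff(c) else (c, a)
        strategy[loser] = strategy[winner]

print(strategy.count("C") / N)    # final cooperator density
```

The competition the abstract describes is visible here as the balance between how often links are rewired (network time scale) and how often strategies are imitated (node time scale).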

  16. CHURN PREDICTION AND CUSTOMER SEGMENTATION USING AN EVOLUTION STRATEGIES-BASED BACKPROPAGATION NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Junta Zeniarja

    2015-05-01

    Full Text Available Customers are an essential part of ensuring a company's competitiveness and survival. It is therefore necessary to have a management system that keeps customers loyal and prevents them from moving to competitors, known as churn management. Customer churn prediction is part of churn management: it predicts customer behaviour by classifying which customers are loyal and which tend to move to other competitors. Accurate prediction is essential given the high rate of customer migration to competing companies; this matters because the cost of acquiring new customers is far higher than that of retaining the loyalty of existing ones. Although many studies on customer churn prediction have been conducted, further research is still needed to improve prediction accuracy. This study discusses the use of the Backpropagation Neural Network (BPNN) data mining technique hybridized with Evolution Strategies (ES) for attribute weighting. Model validation was performed using 10-fold cross validation, and evaluation was measured using a confusion matrix and the Area Under the ROC Curve (AUC). The experimental results show that the hybrid of BPNN with ES achieves better performance than basic BPNN. Keywords: data mining, churn, prediction, backpropagation neural network, evolution strategies.
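The Evolution Strategies component used above for attribute weighting can be illustrated generically. Below is a minimal (μ+λ)-ES with self-adaptive step size minimizing a toy sphere function; it is a sketch of the ES family, not the paper's BPNN hybrid, and all parameter values are our choices.

```python
import math, random

random.seed(42)

def sphere(x):                      # toy fitness: minimise the sum of squares
    return sum(v * v for v in x)

dim, mu, lam, gens = 5, 5, 20, 100
tau = 1 / math.sqrt(2 * dim)        # learning rate for step-size adaptation
# Each individual is (solution vector, step size sigma)
parents = [([random.uniform(-5, 5) for _ in range(dim)], 1.0) for _ in range(mu)]

for _ in range(gens):
    offspring = []
    for _ in range(lam):
        x, sigma = random.choice(parents)
        s = sigma * math.exp(tau * random.gauss(0, 1))   # mutate step size
        y = [v + s * random.gauss(0, 1) for v in x]      # mutate solution
        offspring.append((y, s))
    # (mu + lambda) selection: best of parents and offspring survive
    parents = sorted(parents + offspring, key=lambda ind: sphere(ind[0]))[:mu]

print(sphere(parents[0][0]))        # best fitness, should shrink toward 0
```

In the churn-prediction setting, the "solution vector" would instead be a vector of attribute weights and the fitness would be the cross-validated accuracy or AUC of the BPNN.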

  17. Genetic algorithm with small population size for search feasible control parameters for parallel hybrid electric vehicles

    Directory of Open Access Journals (Sweden)

    Yu-Huei Cheng

    2017-11-01

    Full Text Available The control strategy is a major unit in hybrid electric vehicles (HEVs). In order to provide suitable control parameters for reducing fuel consumption and engine emissions while maintaining vehicle performance requirements, a genetic algorithm (GA) with small population size is applied to search for feasible control parameters in parallel HEVs. The electric assist control strategy (EACS) is used as the fundamental control strategy of parallel HEVs. The dynamic performance requirements stipulated in the Partnership for a New Generation of Vehicles (PNGV) are considered to maintain vehicle performance. The well-known ADvanced VehIcle SimulatOR (ADVISOR) is used to simulate a specific parallel HEV with the urban dynamometer driving schedule (UDDS). Five population sizes (5, 10, 15, 20, and 25) are used in the GA. The experimental results show that the GA with a population size of 25 is the best for selecting feasible control parameters in parallel HEVs.

  18. Study on parallel-channel asymmetry in supercritical flow instability experiment

    International Nuclear Information System (INIS)

    Xiong Ting; Yu Junchong; Yan Xiao; Huang Yanping; Xiao Zejun; Huang Shanfang

    2013-01-01

    Due to the urgent need for experimental study of supercritical water flow instability, the parallel-channel asymmetry which determines the feasibility of such experiments was studied using experimental and numerical results for a parallel dual channel. The evolution of flow rates in the experiments was analyzed, and the steady-state and transient characteristics of the system were obtained with a self-developed numerical code. The results show that the asymmetry of the parallel dual channel reduces the feasibility of the experiments. The asymmetry of flow rates is caused by geometrical asymmetry. Due to the property variations of supercritical water, the flow-rate asymmetry is enlarged when rising beyond the pseudo-critical point. The extent of the flow-rate asymmetry is affected by the bulk temperature and the total flow rate; therefore the experimental feasibility can be enhanced by reducing the total flow rate. (authors)

  19. Nonlinear interaction of a parallel-flow relativistic electron beam with a plasma

    International Nuclear Information System (INIS)

    Jungwirth, K.; Koerbel, S.; Simon, P.; Vrba, P.

    1975-01-01

    Nonlinear evolution of single-mode high-frequency instabilities (ω ≈ k∥v_b) excited by a parallel-flow high-current relativistic electron beam in a magnetized plasma is investigated. Fairly general dimensionless equations are derived. They describe both the temporal and the spatial evolution of amplitude and phase of the fundamental wave. Numerically, the special case of excitation of the linearly most unstable mode is solved in detail assuming that the wave energy dissipation is negligible. Then the strength of interaction and the relativistic properties of the beam are fully respected by a single parameter λ. The value of λ ensuring the optimum efficiency of the wave excitation as well as the efficiency of the self-acceleration of some beam electrons at higher values of λ > 1 are determined in the case of a fully compensated relativistic beam. Finally, the effect of the return current dissipation is also included (phenomenologically) into the theoretical model, its role for the beam-plasma interaction being checked numerically. (J.U.)

  20. Temporal fringe pattern analysis with parallel computing

    International Nuclear Information System (INIS)

    Tuck Wah Ng; Kar Tien Ang; Argentini, Gianluca

    2005-01-01

    Temporal fringe pattern analysis is invaluable in transient phenomena studies but necessitates long processing times. Here we describe a parallel computing strategy based on the single-program multiple-data model and hyperthreading processor technology to reduce the execution time. In a two-node cluster workstation configuration we found that execution periods were reduced by 1.6 times when four virtual processors were used. To allow even lower execution times with an increasing number of processors, the time allocated for data transfer, data read, and waiting should be minimized. Parallel computing is found here to present a feasible approach to reduce execution times in temporal fringe pattern analysis

  1. Parallel simulation of radio-frequency plasma discharges

    International Nuclear Information System (INIS)

    Fivaz, M.; Howling, A.; Ruegsegger, L.; Schwarzenbach, W.; Baeumle, B.

    1994-01-01

    The 1D Particle-In-Cell and Monte Carlo collision code XPDP1 is used to model radio-frequency argon plasma discharges. The code runs faster on a single-user parallel system called MUSIC than on a CRAY-YMP. The low cost of the MUSIC system allows a 24-hours-per-day use and the simulation results are available one to two orders of magnitude quicker than with a super computer shared with other users. The parallelization strategy and its implementation are discussed. Very good agreement is found between simulation results and measurements done in an experimental argon discharge. (author) 2 figs., 3 refs

  2. Art as A Playground for Evolution

    DEFF Research Database (Denmark)

    Beloff, Laura

    2016-01-01

    Art works which engage with the topic of human enhancement and evolution have begun appearing parallel to increased awareness about anthropogenic changes to our environment and acceleration of the speed of technological developments that impact us and our biological environment. The article...... and related topics is proposed as play activity for adults, which simultaneously experiments directly with ideas concerning evolution and human development. The author proposes that these kinds of experimental art projects support our mental adaptation to evolutionary changes....

  3. Parallel heat transport in integrable and chaotic magnetic fields

    Energy Technology Data Exchange (ETDEWEB)

    Castillo-Negrete, D. del; Chacon, L. [Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-8071 (United States)

    2012-05-15

    The study of transport in magnetized plasmas is a problem of fundamental interest in controlled fusion, space plasmas, and astrophysics research. Three issues make this problem particularly challenging: (i) the extreme anisotropy between the parallel (i.e., along the magnetic field) conductivity χ∥ and the perpendicular conductivity χ⊥ (χ∥/χ⊥ may exceed 10^10 in fusion plasmas); (ii) nonlocal parallel transport in the limit of small collisionality; and (iii) magnetic field line chaos, which in general complicates (and may preclude) the construction of magnetic field line coordinates. Motivated by these issues, we present a Lagrangian Green's function method to solve the local and non-local parallel transport equation applicable to integrable and chaotic magnetic fields in arbitrary geometry. The method avoids by construction the numerical pollution issues of grid-based algorithms. The potential of the approach is demonstrated with nontrivial applications to integrable (magnetic island), weakly chaotic (Devil's staircase), and fully chaotic magnetic field configurations. For the latter, numerical solutions of the parallel heat transport equation show that the effective radial transport, with local and non-local parallel closures, is non-diffusive, thus casting doubts on the applicability of quasilinear diffusion descriptions. General conditions for the existence of non-diffusive, multivalued flux-gradient relations in the temperature evolution are derived.

  4. Power-balancing instantaneous optimization energy management for a novel series-parallel hybrid electric bus

    Science.gov (United States)

    Sun, Dongye; Lin, Xinyou; Qin, Datong; Deng, Tao

    2012-11-01

    Energy management (EM) is a core technique for optimizing the fuel economy of a hybrid electric bus (HEB) and is specific to the corresponding powertrain configuration. Existing control strategies seldom consider battery power management jointly with internal combustion engine power management. In this paper, a power-balancing instantaneous optimization (PBIO) energy management control strategy is proposed for a novel series-parallel hybrid electric bus. Based on the characteristics of the novel series-parallel architecture, the switching boundary condition between series and parallel mode as well as the control rules of the power-balancing strategy are developed. An equivalent fuel model of the battery is implemented and combined with the engine fuel consumption to form an objective function that minimizes the fuel consumption at each sampling instant and coordinates the power distribution between the engine and battery in real time. To validate the proposed strategy, a forward model is built in Matlab/Simulink for simulation, and a dSPACE AutoBox is applied as a controller for hardware-in-the-loop bench testing. Both the simulation and hardware-in-the-loop results demonstrate that the proposed strategy not only sustains the battery SOC within its operational range and keeps the engine operating point in its peak efficiency region, but also improves the fuel economy of the series-parallel hybrid electric bus (SPHEB) by up to 30.73% compared with the prototype bus; relative to a rule-based strategy, the PBIO strategy reduces fuel consumption by up to 12.38%. The proposed research shows that the PBIO algorithm is applicable in real time, improves the efficiency of the SPHEB system, and suits the complicated configuration well.

  5. Selectivity of Nanocrystalline IrO2-Based Catalysts in Parallel Chlorine and Oxygen Evolution

    Czech Academy of Sciences Publication Activity Database

    Kuznetsova, Elizaveta; Petrykin, Valery; Sunde, S.; Krtil, Petr

    2015-01-01

    Vol. 6, No. 2 (2015), pp. 198-210. ISSN 1868-2529. EU Projects: European Commission (XE) 214936. Institutional support: RVO:61388955. Keywords: iridium dioxide * oxygen evolution * chlorine evolution. Subject RIV: CG - Electrochemistry. Impact factor: 2.347, year: 2015

  6. Transformation and diversification in early mammal evolution.

    Science.gov (United States)

    Luo, Zhe-Xi

    2007-12-13

    Evolution of the earliest mammals shows successive episodes of diversification. Lineage-splitting in Mesozoic mammals is coupled with many independent evolutionary experiments and ecological specializations. Classic scenarios of mammalian morphological evolution tend to posit an orderly acquisition of key evolutionary innovations leading to adaptive diversification, but newly discovered fossils show that evolution of such key characters as the middle ear and the tribosphenic teeth is far more labile among Mesozoic mammals. Successive diversifications of Mesozoic mammal groups multiplied the opportunities for many dead-end lineages to iteratively evolve developmental homoplasies and convergent ecological specializations, parallel to those in modern mammal groups.

  7. Population genomics of parallel adaptation in threespine stickleback using sequenced RAD tags.

    Directory of Open Access Journals (Sweden)

    Paul A Hohenlohe

    2010-02-01

    Next-generation sequencing technology provides novel opportunities for gathering genome-scale sequence data in natural populations, laying the empirical foundation for the evolving field of population genomics. Here we conducted a genome scan of nucleotide diversity and differentiation in natural populations of threespine stickleback (Gasterosteus aculeatus). We used Illumina-sequenced RAD tags to identify and type over 45,000 single nucleotide polymorphisms (SNPs) in each of 100 individuals from two oceanic and three freshwater populations. Overall estimates of genetic diversity and differentiation among populations confirm the biogeographic hypothesis that large panmictic oceanic populations have repeatedly given rise to phenotypically divergent freshwater populations. Genomic regions exhibiting signatures of both balancing and divergent selection were remarkably consistent across multiple, independently derived populations, indicating that replicate parallel phenotypic evolution in stickleback may be occurring through extensive, parallel genetic evolution at a genome-wide scale. Some of these genomic regions co-localize with previously identified QTL for stickleback phenotypic variation identified using laboratory mapping crosses. In addition, we have identified several novel regions showing parallel differentiation across independent populations. Annotation of these regions revealed numerous genes that are candidates for stickleback phenotypic evolution and will form the basis of future genetic analyses in this and other organisms. This study represents the first high-density SNP-based genome scan of genetic diversity and differentiation for populations of threespine stickleback in the wild. These data illustrate the complementary nature of laboratory crosses and population genomic scans by confirming the adaptive significance of previously identified genomic regions, elucidating the particular evolutionary and demographic history of such

  8. A Unified Differential Evolution Algorithm for Global Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Qiang, Ji; Mitchell, Chad

    2014-06-24

    In this paper, we propose a new unified differential evolution (uDE) algorithm for single-objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility to explore a broader range of mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.
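    The paper's unified mutation equation is not reproduced in the abstract; as background, the classic DE/rand/1/bin strategy that uDE generalizes over can be sketched as follows (an illustrative sketch, not the paper's uDE formulation; the function and parameter names are ours):

```python
import numpy as np

def de_rand_1_bin(pop, i, F=0.8, CR=0.9, rng=None):
    """Generate a DE/rand/1/bin trial vector for individual i.

    Illustrative sketch only: the uDE paper replaces the choice among
    such strategies with a single unified mutation equation.
    """
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    # Pick three mutually distinct donors, all different from i.
    candidates = [j for j in range(n) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])   # differential mutation
    cross = rng.random(d) < CR                   # binomial crossover mask
    cross[rng.integers(d)] = True                # ensure at least one mutant gene
    return np.where(cross, mutant, pop[i])
```

    The trial vector is then kept only if it improves the objective value of individual i, which is the greedy selection step common to all DE variants.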

  9. Time-dependent density-functional theory in massively parallel computer architectures: the OCTOPUS project.

    Science.gov (United States)

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A; Oliveira, Micael J T; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A L

    2012-06-13

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.

  11. Time-dependent density-functional theory in massively parallel computer architectures: the octopus project

    International Nuclear Information System (INIS)

    Andrade, Xavier; Aspuru-Guzik, Alán; Alberdi-Rodriguez, Joseba; Rubio, Angel; Strubbe, David A; Louie, Steven G; Oliveira, Micael J T; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Marques, Miguel A L

    2012-01-01

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures. (topical review)

  12. Fundamental Dimensions of Environmental Risk: The Impact of Harsh versus Unpredictable Environments on the Evolution and Development of Life History Strategies.

    Science.gov (United States)

    Ellis, Bruce J; Figueredo, Aurelio José; Brumbach, Barbara H; Schlomer, Gabriel L

    2009-06-01

    The current paper synthesizes theory and data from the field of life history (LH) evolution to advance a new developmental theory of variation in human LH strategies. The theory posits that clusters of correlated LH traits (e.g., timing of puberty, age at sexual debut and first birth, parental investment strategies) lie on a slow-to-fast continuum; that harshness (externally caused levels of morbidity-mortality) and unpredictability (spatial-temporal variation in harshness) are the most fundamental environmental influences on the evolution and development of LH strategies; and that these influences depend on population densities and related levels of intraspecific competition and resource scarcity, on age schedules of mortality, on the sensitivity of morbidity-mortality to the organism's resource-allocation decisions, and on the extent to which environmental fluctuations affect individuals versus populations over short versus long timescales. These interrelated factors operate at evolutionary and developmental levels and should be distinguished because they exert distinctive effects on LH traits and are hierarchically operative in terms of primacy of influence. Although converging lines of evidence support core assumptions of the theory, many questions remain unanswered. This review demonstrates the value of applying a multilevel evolutionary-developmental approach to the analysis of a central feature of human phenotypic variation: LH strategy.

  13. Embodied Evolution in Collective Robotics: A Review

    Directory of Open Access Journals (Sweden)

    Nicolas Bredeche

    2018-02-01

    This article provides an overview of evolutionary robotics techniques applied to online distributed evolution for robot collectives, namely, embodied evolution. It provides a definition of embodied evolution as well as a thorough description of the underlying concepts and mechanisms. This article also presents a comprehensive summary of research published in the field since its inception around the year 2000, providing various perspectives to identify the major trends. In particular, we identify a shift from considering embodied evolution as a parallel search method within small robot collectives (fewer than 10 robots to embodied evolution as an online distributed learning method for designing collective behaviors in swarm-like collectives. This article concludes with a discussion of applications and open questions, providing a milestone for past and an inspiration for future research.

  14. Influence of equilibrium shear flow in the parallel magnetic direction on edge localized mode crash

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y.; Xiong, Y. Y. [College of Physical Science and Technology, Sichuan University, 610064 Chengdu (China); Chen, S. Y., E-mail: sychen531@163.com [College of Physical Science and Technology, Sichuan University, 610064 Chengdu (China); Key Laboratory of High Energy Density Physics and Technology of Ministry of Education, Sichuan University, Chengdu 610064 (China); Southwestern Institute of Physics, Chengdu 610041 (China); Huang, J.; Tang, C. J. [College of Physical Science and Technology, Sichuan University, 610064 Chengdu (China); Key Laboratory of High Energy Density Physics and Technology of Ministry of Education, Sichuan University, Chengdu 610064 (China)

    2016-04-15

    The influence of parallel shear flow on the evolution of peeling-ballooning (P-B) modes is studied with the BOUT++ four-field code in this paper. The parallel shear flow has different effects in linear and nonlinear simulations. In the linear simulations, the growth rate of the edge localized mode (ELM) can be increased by the Kelvin-Helmholtz term, which can be caused by the parallel shear flow. In the nonlinear simulations, the results accord with the linear simulations during the linear phase. However, the ELM size is reduced by the parallel shear flow at the beginning of the turbulence phase, which is recognized as the P-B filament structure, and during the turbulence phase the ELM size is further decreased by the shear flow.

  15. Cloud computing task scheduling strategy based on differential evolution and ant colony optimization

    Science.gov (United States)

    Ge, Junwei; Cai, Yu; Fang, Yiqiu

    2018-05-01

    This paper proposes a task scheduling strategy, DEACO, based on a combination of Differential Evolution (DE) and Ant Colony Optimization (ACO). To address the problem that cloud computing task scheduling is usually driven by a single optimization objective, it jointly considers the shortest task completion time, cost, and load balancing. DEACO uses the solution found by DE to initialize the pheromone of ACO, which reduces the time ACO spends accumulating pheromone in the early phase, and it improves the pheromone updating rule through a load factor. The proposed algorithm is simulated on CloudSim and compared with min-min and ACO. The experimental results show that DEACO is superior in terms of time, cost, and load.
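    The DE-to-ACO hand-off described above can be sketched as seeding the pheromone matrix from a DE-found task-to-VM assignment (a minimal illustrative sketch; the matrix layout, constants, and function name are assumptions, not the paper's code):

```python
def seed_pheromone(de_assignment, n_tasks, n_vms, base=1.0, boost=5.0):
    """Initialize an ACO pheromone matrix from a DE solution.

    Sketch of the DEACO idea: pheromone[t][v] is the desirability of
    placing task t on VM v.  Edges used by the DE solution start with
    extra pheromone, so ants converge faster than with a flat start.
    base/boost values are illustrative assumptions.
    """
    pheromone = [[base] * n_vms for _ in range(n_tasks)]
    for task, vm in enumerate(de_assignment):
        pheromone[task][vm] += boost
    return pheromone
```

    From this initialization, the usual ACO loop (probabilistic task placement followed by pheromone evaporation and reinforcement) proceeds unchanged.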

  16. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell, Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  17. Evaluating the performance of the particle finite element method in parallel architectures

    Science.gov (United States)

    Gimenez, Juan M.; Nigro, Norberto M.; Idelsohn, Sergio R.

    2014-05-01

    This paper presents a high-performance implementation of the particle-mesh based method called the particle finite element method two (PFEM-2). It consists of a material-derivative based formulation of the equations with a hybrid spatial discretization which uses an Eulerian mesh and Lagrangian particles. The main aim of PFEM-2 is to solve transport equations as fast as possible while keeping some level of accuracy. The method was found to be competitive with classical Eulerian alternatives for these targets, even in their range of optimal application. To evaluate the method on large simulations, the use of parallel environments is imperative. Parallel strategies for the finite element method have been widely studied and many libraries can be used to solve the Eulerian stages of PFEM-2. However, Lagrangian stages, such as streamline integration, must be developed with the selected parallel strategy in mind. The main drawback of PFEM-2 is the large amount of memory needed, which limits its application to large problems on a single computer. Therefore, a distributed-memory implementation is urgently needed. Unlike a shared-memory approach, with domain decomposition the memory is automatically isolated, thus avoiding race conditions; however, new issues appear due to data distribution over the processes. Thus, a domain decomposition strategy for both particles and mesh is adopted, which minimizes the communication between processes. Finally, performance analyses running over multicore and multinode architectures are presented. The Courant-Friedrichs-Lewy number used influences the efficiency of the parallelization and, in some cases, a weighted partitioning can be used to improve the speed-up. However, the total CPU time for the cases presented is lower than that obtained when using classical Eulerian strategies.
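    The particle side of such a domain decomposition amounts to assigning each particle to the process that owns its region of space; a toy one-dimensional version might look like this (an illustrative sketch under assumed names, not the PFEM-2 implementation):

```python
import bisect

def assign_particles(particle_x, x_splits):
    """Map each particle to the rank owning the 1-D slab containing it.

    Toy sketch of the particle side of a domain decomposition; the names
    and the 1-D slab layout are assumptions for illustration.  Rank r owns
    the interval [x_splits[r-1], x_splits[r]); particles that drift across
    a split point would be communicated to the new owner between steps.
    """
    return {pid: bisect.bisect_right(x_splits, x)
            for pid, x in particle_x.items()}
```

    Keeping particles and their surrounding mesh cells on the same rank is what minimizes inter-process communication during the Lagrangian stages.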

  18. Evolution of the heteroharmonic strategy for target-range computation in the echolocation of Mormoopidae.

    Directory of Open Access Journals (Sweden)

    Emanuel C Mora

    2013-06-01

    Echolocating bats use the time elapsed from biosonar pulse emission to the arrival of the echo (defined as echo-delay) to assess target distance. Target distance is represented in the brain by delay-tuned neurons that are classified as either heteroharmonic or homoharmonic. Heteroharmonic neurons respond more strongly to pulse-echo pairs in which the timing of the pulse is given by the fundamental biosonar harmonic while the timing of echoes is provided by one (or several) of the higher-order harmonics. On the other hand, homoharmonic neurons are tuned to the echo delay between similar harmonics in the emitted pulse and echo. It is generally accepted that heteroharmonic computations are advantageous over homoharmonic computations; i.e., heteroharmonic neurons receive information from call and echo in different frequency bands, which helps to avoid jamming between pulse and echo signals. Heteroharmonic neurons have been found in two species of the family Mormoopidae (Pteronotus parnellii and Pteronotus quadridens) and in Rhinolophus rouxi. Recently, it was proposed that heteroharmonic target-range computations are a primitive feature of the genus Pteronotus that was preserved in the evolution of the genus. Here we review recent findings on the evolution of echolocation in Mormoopidae, and try to link those findings to the evolution of the heteroharmonic computation strategy. We stress the hypothesis that the ability to perform heteroharmonic computations evolved separately from the ability to use long constant-frequency echolocation calls, high duty cycle echolocation, and Doppler shift compensation. Also, we present the idea that heteroharmonic computations might have been advantageous for categorizing prey size, hunting eared insects, and living in large conspecific colonies. We make five testable predictions that might help future investigations to clarify the evolution of heteroharmonic echolocation in Mormoopidae and other families.

  19. Monte Carlo calculations on a parallel computer using MORSE-C.G

    International Nuclear Information System (INIS)

    Wood, J.

    1995-01-01

    The general-purpose particle transport Monte Carlo code, MORSE-C.G., is implemented on a parallel computing transputer-based system having MIMD architecture. Example problems are solved which are representative of the three principal types of problem that can be solved by the original serial code, namely, fixed source, eigenvalue (k-eff), and time-dependent. The results from the parallelized version of the code are compared in tables with those from the serial code run on a mainframe serial computer, and with an independent, deterministic transport code. The performance of the parallel computer as the number of processors is varied is shown graphically. For the parallel strategy used, the loss of efficiency as the number of processors is increased is investigated. (author)

  20. An Introduction to Parallelism, Concurrency and Acceleration (1/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Concurrency and parallelism are firm elements of any modern computing infrastructure, made even more prominent by the emergence of accelerators. These lectures offer an introduction to these important concepts. We will begin with a brief refresher of recent hardware offerings to modern-day programmers. We will then open the main discussion with an overview of the laws and practical aspects of scalability. Key parallelism data structures, patterns and algorithms will be shown. The main threats to scalability and mitigation strategies will be discussed in the context of real-life optimization problems.

  1. Darwinism Extended - A survey of how the idea of cultural evolution evolved

    NARCIS (Netherlands)

    Buskes, C.J.J.

    2013-01-01

    In the past 150 years there have been many attempts to draw parallels between cultural and biological evolution. Most of these attempts were flawed due to lack of knowledge and false ideas about evolution. In recent decades these shortcomings have been cleared away, thus triggering a renewed

  3. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    Science.gov (United States)

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of utilizing the advantages of available high-performance computing (HPC) platforms and of modelling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when a greater number of computational cores is used. Simulation results indicate that when the number of cores used is not a multiple of the total number of cores per cluster node, some allocation strategies provide more efficient calculations than others.

  4. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    Science.gov (United States)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization where the objective function evaluations are computationally expensive is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. These approaches are implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.

  5. Parallel and distributed processing in two SGBDS: A case study

    OpenAIRE

    Francisco Javier Moreno; Nataly Castrillón Charari; Camilo Taborda Zuluaga

    2017-01-01

    Context: One of the strategies for managing large volumes of data is distributed and parallel computing. Among the tools that allow applying these characteristics are some Data Base Management Systems (DBMS), such as Oracle, DB2, and SQL Server. Method: In this paper we present a case study where we evaluate the performance of an SQL query in two of these DBMS. The evaluation is done through various forms of data distribution in a computer network with different degrees of parallelism. ...

  6. Parallel Breadth-First Search on Distributed Memory Systems

    Energy Technology Data Exchange (ETDEWEB)

    Computational Research Division; Buluc, Aydin; Madduri, Kamesh

    2011-04-15

    Data-intensive, graph-based computations are pervasive in several scientific applications, and are known to be quite challenging to implement on distributed memory systems. In this work, we explore the design space of parallel algorithms for Breadth-First Search (BFS), a key subroutine in several graph algorithms. We present two highly-tuned parallel approaches for BFS on large parallel systems: a level-synchronous strategy that relies on a simple vertex-based partitioning of the graph, and a two-dimensional sparse-matrix-partitioning-based approach that mitigates parallel communication overhead. For both approaches, we also present hybrid versions with intra-node multithreading. Our novel hybrid two-dimensional algorithm reduces communication times by up to a factor of 3.5, relative to a common vertex-based approach. Our experimental study identifies execution regimes in which these approaches will be competitive, and we demonstrate extremely high performance on leading distributed-memory parallel systems. For instance, for a 40,000-core parallel execution on Hopper, an AMD Magny-Cours based system, we achieve a BFS performance rate of 17.8 billion edge visits per second on an undirected graph of 4.3 billion vertices and 68.7 billion edges with skewed degree distribution.
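    The level-synchronous strategy described above expands the whole frontier one level at a time; a serial sketch conveys the structure (illustrative only; in the paper's distributed setting each process owns a vertex partition and exchanges discovered frontier vertices at every level):

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS: return {vertex: BFS level} from source.

    Serial illustration of the strategy described above.  The swap of
    `frontier` for `next_frontier` is the level barrier that, in the
    distributed version, coincides with an all-to-all exchange of newly
    discovered vertices among the owning processes.
    """
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:               # expand the entire frontier...
            for v in adj.get(u, ()):     # ...one level at a time
                if v not in level:
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier         # level barrier: swap frontiers
    return level
```

    With a vertex-based partitioning, each process runs this loop over its owned vertices only, which is exactly where the communication overhead targeted by the paper's two-dimensional approach arises.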

  7. Parallel community climate model: Description and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Drake, J.B.; Flanery, R.E.; Semeraro, B.D.; Worley, P.H. [and others]

    1996-07-15

    This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.

  8. Design and implementation of parallel video encoding strategies using divisible load analysis

    NARCIS (Netherlands)

    Li, Ping; Veeravalli, Bharadwaj; Kassim, A.A.

    2005-01-01

    The processing time needed for motion estimation usually accounts for a significant part of the overall processing time of the video encoder. To improve the video encoding speed, reducing the execution time for motion estimation process is essential. Parallel implementation of video encoding systems

  9. A PARALLEL MONTE CARLO CODE FOR SIMULATING COLLISIONAL N-BODY SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    Pattabiraman, Bharath; Umbreit, Stefan; Liao, Wei-keng; Choudhary, Alok; Kalogera, Vassiliki; Memik, Gokhan; Rasio, Frederic A., E-mail: bharath@u.northwestern.edu [Center for Interdisciplinary Exploration and Research in Astrophysics, Northwestern University, Evanston, IL (United States)

    2013-02-15

    We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N ≈ 10^7 particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and the introduction of a parallel random number generation scheme as well as a parallel sorting algorithm required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce along with our choice of decomposition scheme minimize communication costs and ensure optimal distribution of data and workload among the processing units. Our implementation uses the Message Passing Interface library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functions up to core collapse with the number of stars, N, spanning three orders of magnitude from 10^5 to 10^7. We find that our results are in good agreement with self-similar core-collapse solutions, and the core-collapse times generally agree with expectations from the literature. Also, we observe good total energy conservation, within ≲0.04% throughout all simulations. We analyze the performance of the code, and demonstrate near-linear scaling of the runtime with the number of processors up to 64 processors for N = 10^5, 128 for N = 10^6, and 256 for N = 10^7. The runtime reaches saturation with the addition of processors beyond these limits, which is a characteristic of the parallel sorting algorithm. The resulting maximum speedups we achieve are approximately 60×, 100×, and 220×, respectively.

  10. An Extended Flexible Job Shop Scheduling Model for Flight Deck Scheduling with Priority, Parallel Operations, and Sequence Flexibility

    Directory of Open Access Journals (Sweden)

    Lianfei Yu

    2017-01-01

Full Text Available Efficient scheduling of the supporting operations of aircraft on the flight deck is critical to an aircraft carrier, where even a few seconds' improvement may decide the outcome of a battle. In this paper, we model the supporting operations of carrier-based aircraft and investigate three simultaneous operation relationships during the supporting process: precedence constraints, parallel operations, and sequence flexibility. Furthermore, multifunctional aircraft have to take off synergistically and participate in combat cooperatively, and their takeoff order must be prioritized during the scheduling period according to operational regulations. To efficiently prioritize the takeoff order while minimizing the total takeoff duration, we propose a novel mixed integer linear programming (MILP) formulation for the flight deck scheduling problem. Motivated by the hardness of the MILP, we design an improved differential evolution algorithm combined with typical local search strategies to improve computational efficiency. We numerically compare the performance of our algorithm with the classical genetic algorithm and a standard differential evolution algorithm; the results show that our algorithm obtains better scheduling schemes that meet both the operation relations and the takeoff priority requirements.
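The hybrid metaheuristic can be pictured as plain differential evolution with a greedy local search applied to the incumbent best. A generic sketch on a continuous test function (the paper's version operates on discrete schedules and adds problem-specific moves; all names and parameters here are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def de_with_local_search(f, bounds, pop_size=20, iters=100, F=0.7, CR=0.9):
    """DE/rand/1/bin differential evolution, hybridized with a simple
    local search that perturbs the current best solution each generation."""
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True    # guarantee at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fit[i]:                    # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
        best = np.argmin(fit)                  # local search around the incumbent
        for _ in range(5):
            cand = np.clip(pop[best] + rng.normal(0.0, 0.1, dim), lo, hi)
            fc = f(cand)
            if fc < fit[best]:
                pop[best], fit[best] = cand, fc
    best = np.argmin(fit)
    return pop[best], fit[best]
```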

  11. Modern spandrels: the roles of genetic drift, gene flow and natural selection in the evolution of parallel clines.

    Science.gov (United States)

    Santangelo, James S; Johnson, Marc T J; Ness, Rob W

    2018-05-16

Urban environments offer the opportunity to study the role of adaptive and non-adaptive evolutionary processes on an unprecedented scale. While the presence of parallel clines in heritable phenotypic traits is often considered strong evidence for the role of natural selection, non-adaptive evolutionary processes can also generate clines, and this may be more likely when traits have a non-additive genetic basis due to epistasis. In this paper, we use spatially explicit simulations modelled according to the cyanogenesis (hydrogen cyanide, HCN) polymorphism in white clover (Trifolium repens) to examine the formation of phenotypic clines along urbanization gradients under varying levels of drift, gene flow and selection. HCN results from an epistatic interaction between two Mendelian-inherited loci. Our results demonstrate that the genetic architecture of this trait makes natural populations susceptible to decreases in HCN frequencies via drift. Gradients in the strength of drift across a landscape resulted in phenotypic clines with lower frequencies of HCN in strongly drifting populations, giving the misleading appearance of deterministic adaptive changes in the phenotype. Studies of heritable phenotypic change in urban populations should generate null models of phenotypic evolution based on the genetic architecture underlying focal traits prior to invoking selection's role in generating adaptive differentiation. © 2018 The Author(s).
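The paper's central point, that an epistatic two-locus trait is unusually easy to lose by drift alone, can be illustrated with a bare Wright-Fisher null model. This is a sketch with hypothetical parameter names, not the study's spatially explicit simulation:

```python
import numpy as np

rng = np.random.default_rng(42)

def hcn_frequency_under_drift(N, p_ac, p_li, generations):
    """Wright-Fisher drift at two unlinked Mendelian loci (call them Ac
    and Li); HCN is produced only when a dominant allele is present at
    BOTH loci (epistasis). Returns the expected HCN phenotype frequency
    after drift, assuming Hardy-Weinberg proportions."""
    ac = int(2 * N * p_ac)            # dominant-allele copies among 2N genes
    li = int(2 * N * p_li)
    for _ in range(generations):
        ac = rng.binomial(2 * N, ac / (2 * N))   # resample each generation
        li = rng.binomial(2 * N, li / (2 * N))
        if ac == 0 or li == 0:                   # one locus fixed for recessive:
            break                                # HCN is permanently lost
    p, q = ac / (2 * N), li / (2 * N)
    # P(at least one dominant copy at a locus) = 1 - (1 - p)^2 under HWE
    return (1 - (1 - p) ** 2) * (1 - (1 - q) ** 2)
```

Because loss at either locus eliminates the phenotype, small populations (strong drift) lose HCN faster than a single-locus trait would, which is the null pattern the authors warn can mimic selection.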

  12. High-speed detection of emergent market clustering via an unsupervised parallel genetic algorithm

    Directory of Open Access Journals (Sweden)

    Dieter Hendricks

    2016-02-01

    Full Text Available We implement a master-slave parallel genetic algorithm with a bespoke log-likelihood fitness function to identify emergent clusters within price evolutions. We use graphics processing units (GPUs to implement a parallel genetic algorithm and visualise the results using disjoint minimal spanning trees. We demonstrate that our GPU parallel genetic algorithm, implemented on a commercially available general purpose GPU, is able to recover stock clusters in sub-second speed, based on a subset of stocks in the South African market. This approach represents a pragmatic choice for low-cost, scalable parallel computing and is significantly faster than a prototype serial implementation in an optimised C-based fourth-generation programming language, although the results are not directly comparable because of compiler differences. Combined with fast online intraday correlation matrix estimation from high frequency data for cluster identification, the proposed implementation offers cost-effective, near-real-time risk assessment for financial practitioners.
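The master-slave pattern behind such genetic algorithms is independent of the hardware: a master holds the population and farms fitness evaluations out to workers (GPU threads in the paper). A CPU process-pool sketch with a placeholder fitness (all names are ours; the paper's fitness is a clustering log-likelihood):

```python
from multiprocessing import Pool

import numpy as np

def fitness(chromosome):
    # Placeholder objective; a real application would plug the bespoke
    # log-likelihood of a candidate clustering in here.
    return -float(np.sum(np.asarray(chromosome, dtype=float) ** 2))

def evaluate_population(population, workers=4):
    """Master-slave step: distribute individuals to worker processes and
    gather their fitnesses back in population order."""
    with Pool(workers) as pool:
        return pool.map(fitness, population)
```

Since fitness evaluations are independent, this step scales with the number of workers; the GPU version simply replaces the process pool with thousands of lightweight threads.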

  13. Parallel adaptation of a vectorised quantumchemical program system

    International Nuclear Information System (INIS)

    Van Corler, L.C.H.; Van Lenthe, J.H.

    1987-01-01

Supercomputers, like the CRAY 1 or the Cyber 205, have had, and still have, a marked influence on Quantum Chemistry. Vectorization has led to a considerable increase in the performance of Quantum Chemistry programs. However, clock-cycle times more than a factor of 10 smaller than those of the present supercomputers are not to be expected. Therefore future supercomputers will have to depend on parallel structures. Recently, the first examples of such supercomputers have been installed. To be prepared for this new generation of (parallel) supercomputers, one should consider the concepts one wants to use and the kind of problems one will encounter during implementation of existing vectorized programs on those parallel systems. The authors implemented four important parts of a large quantumchemical program system (ATMOL), i.e. integrals, SCF, 4-index and Direct-CI, in the parallel environment at ECSEC (Rome, Italy). This system offers simulated parallelism on the host computer (IBM 4381) and real parallelism on at most 10 attached processors (FPS-164). Quantumchemical programs usually handle large amounts of data and very large, often sparse matrices. Transferring that much data can cause problems concerning communication and overhead, in view of which shared memory and shared disks must be considered. The strategy and the tools that were used to parallelize the programs are shown. Also, some examples are presented to illustrate the effectiveness and performance of the system in Rome for this type of calculation

  14. Evolution of calculation methods taking into account severe accidents

    International Nuclear Information System (INIS)

    L'Homme, A.; Courtaud, J.M.

    1990-12-01

During the first decade of PWR operation in France, the calculation methods used for design and operation improved considerably. This paper gives a general analysis of the evolution of calculation methods in parallel with the evolution of the safety approach concerning PWRs. This is followed by a comprehensive presentation of the principal calculation tools as applied during the past decade. Finally, an effort is made to anticipate the improvements expected in the near future

  15. GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.

    Science.gov (United States)

    Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd

    2018-01-01

    In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.
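Evolution strategies of this kind are embarrassingly parallel across the offspring population, which is what makes large populations on GPUs attractive. A small (μ, λ)-ES sketch with independent offspring evaluations (our own toy objective and parameters; the paper's fitness is the labeled point cloud superposition score):

```python
import numpy as np

rng = np.random.default_rng(1)

def evolution_strategy(f, dim, mu=5, lam=30, sigma=0.3, iters=60):
    """(mu, lambda) evolution strategy: each generation, lam offspring are
    mutated from randomly chosen parents and evaluated independently (the
    parallelizable step); the best mu offspring become the next parents."""
    parents = rng.normal(0.0, 1.0, (mu, dim))
    for _ in range(iters):
        idx = rng.integers(mu, size=lam)
        offspring = parents[idx] + sigma * rng.normal(0.0, 1.0, (lam, dim))
        scores = np.array([f(x) for x in offspring])  # independent evaluations
        parents = offspring[np.argsort(scores)[:mu]]  # comma selection
        sigma *= 0.95                                 # simple step-size decay
    best = min(parents, key=f)
    return best, f(best)
```

A large λ both fills the GPU and, as the abstract notes, lowers the chance of the whole population stagnating in a single local optimum of a multimodal objective.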

  16. Efficient Parallel Strategy Improvement for Parity Games

    OpenAIRE

    Fearnley, John

    2017-01-01

    We study strategy improvement algorithms for solving parity games. While these algorithms are known to solve parity games using a very small number of iterations, experimental studies have found that a high step complexity causes them to perform poorly in practice. In this paper we seek to address this situation. Every iteration of the algorithm must compute a best response, and while the standard way of doing this uses the Bellman-Ford algorithm, we give experimental results that show that o...

  17. Parallel dispatch: a new paradigm of electrical power system dispatch

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jun Jason; Wang, Fei-Yue; Wang, Qiang; Hao, Dazhi; Yang, Xiaojing; Gao, David Wenzhong; Zhao, Xiangyang; Zhang, Yingchen

    2018-01-01

Modern power systems are evolving into sociotechnical systems with massive complexity, whose real-time operation and dispatch go beyond human capability. Thus, the need for developing and applying new intelligent power system dispatch tools is of great practical significance. In this paper, we introduce the overall business model of power system dispatch, the top-level design approach of an intelligent dispatch system, and the parallel intelligent technology with its dispatch applications. We expect that a new dispatch paradigm, namely parallel dispatch, can be established by incorporating various intelligent technologies, especially the parallel intelligent technology, to enable secure operation of complex power grids, extend system operators' capabilities, suggest optimal dispatch strategies, and provide decision-making recommendations according to power system operational goals.

  18. Cache-aware data structure model for parallelism and dynamic load balancing

    International Nuclear Information System (INIS)

    Sridi, Marwa

    2016-01-01

This PhD thesis is dedicated to the implementation of innovative parallel methods in the framework of fast transient fluid-structure dynamics. It improves existing methods within the EUROPLEXUS software in order to optimize the shared-memory parallel strategy, complementary to the original distributed-memory approach, bringing the two together into a global hybrid strategy for clusters of multi-core nodes. Starting from a sound analysis of the state of the art concerning data structuring techniques correlated to the hierarchical memory organization of current multi-processor architectures, the proposed work introduces an approach suitable for explicit time integration (i.e. with no linear system to solve at each step). A data structure of type 'Structure of arrays' is retained for global data storage, providing flexibility and efficiency for common operations on kinematics fields (displacement, velocity and acceleration). On the contrary, in the particular case of elementary operations (generic internal force computations, as well as flux computations between cell faces for fluid models), which are particularly time consuming but localized in the program, a temporary data structure of type 'Array of structures' is used instead, to force an efficient filling of the cache memory and increase the performance of the resolution, for both serial and shared-memory parallel processing. Switching from the global structure to the temporary one is based on a cell-grouping strategy, following classic cache-blocking principles but with specific handling, developed for this work, of the neighboring data necessary for the efficient treatment of ALE fluxes for cells on the group boundaries. The proposed approach is extensively tested, from the points of view of both computation time and cache-memory access failures, weighing the gains obtained within the elementary operations against the potential overhead generated by the data structure switch. Obtained results are very satisfactory, especially
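The switch between the two layouts can be sketched with NumPy: plain per-field arrays for the global 'Structure of arrays' storage, and a structured (record) array as the temporary 'Array of structures' for one cell group. Field names are hypothetical; EUROPLEXUS itself is not a Python code:

```python
import numpy as np

n = 1000
# 'Structure of arrays': one contiguous array per field -- efficient for
# global sweeps over kinematics fields (displacement, velocity, acceleration).
soa = {f: np.zeros(n) for f in ("ux", "vx", "ax")}

# 'Array of structures' record layout: all fields of one element are adjacent
# in memory, which improves cache locality for localized element-wise kernels.
group_dtype = np.dtype([("ux", "f8"), ("vx", "f8"), ("ax", "f8")])

def gather_group(soa, idx):
    """Copy one cell group from the global SoA into a temporary AoS,
    mimicking the cache-blocking switch described above (sketch only)."""
    tmp = np.zeros(len(idx), dtype=group_dtype)
    for f in ("ux", "vx", "ax"):
        tmp[f] = soa[f][idx]
    return tmp
```

The gather costs one extra copy per group, which is the overhead the thesis weighs against the cache gains inside the elementary loops.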

  19. The R package "sperrorest" : Parallelized spatial error estimation and variable importance assessment for geospatial machine learning

    Science.gov (United States)

    Schratz, Patrick; Herrmann, Tobias; Brenning, Alexander

    2017-04-01

Computational and statistical prediction methods such as the support vector machine have gained popularity in remote-sensing applications in recent years and are often compared to more traditional approaches like maximum-likelihood classification. However, the accuracy assessment of such predictive models in a spatial context needs to account for the presence of spatial autocorrelation in geospatial data by using spatial cross-validation and bootstrap strategies instead of their now more widely used non-spatial equivalent. The R package sperrorest by A. Brenning [IEEE International Geoscience and Remote Sensing Symposium, 1, 374 (2012)] provides a generic interface for performing (spatial) cross-validation of any statistical or machine-learning technique available in R. Since spatial statistical models as well as flexible machine-learning algorithms can be computationally expensive, parallel computing strategies are required to perform cross-validation efficiently. The most recent major release of sperrorest therefore comes with two new features (aside from improved documentation): The first one is the parallelized version of sperrorest(), parsperrorest(). This function features two parallel modes to greatly speed up cross-validation runs. Both parallel modes are platform independent and provide progress information. par.mode = 1 relies on the pbapply package and internally calls (depending on the platform) parallel::mclapply() or parallel::parApply() in the background. While forking is used on Unix-Systems, Windows systems use a cluster approach for parallel execution. par.mode = 2 uses the foreach package to perform parallelization. This method uses a different way of cluster parallelization than the parallel package does. In summary, the robustness of parsperrorest() is increased with the implementation of two independent parallel modes. A new way of partitioning the data in sperrorest is provided by partition.factor.cv(). This function gives the user the

  20. Evolution of the carabid ground beetles.

    Science.gov (United States)

    Osawa, S; Su, Z H; Kim, C G; Okamoto, M; Tominaga, O; Imura, Y

    1999-01-01

    The phylogenetic relationships of the carabid ground beetles have been estimated by analysing a large part of the ND5 gene sequences of more than 1,000 specimens consisting of the representative species and geographic races covering most of the genera and subgenera known in the world. From the phylogenetic analyses in conjunction with the mtDNA-based dating, a scenario of the establishment of the present habitats of the respective Japanese carabids has been constructed. The carabid diversification took place ca. 40 MYA as an explosive radiation of the major genera. During evolution, occasional small or single bangs also took place, sometimes accompanied by parallel morphological evolution in phylogenetically remote as well as close lineages. The existence of silent periods, in which few morphological changes took place, has been recognized during evolution. Thus, the carabid evolution is discontinuous, alternatively having a phase of rapid morphological change and a silent phase.

  1. Parallel particle swarm optimization algorithm in nuclear problems

    International Nuclear Information System (INIS)

    Waintraub, Marcel; Pereira, Claudio M.N.A.; Schirru, Roberto

    2009-01-01

Particle Swarm Optimization (PSO) is a population-based metaheuristic (PBM) in which solution candidates evolve through simulation of a simplified social adaptation model. Combining robustness, efficiency and simplicity, PSO has gained great popularity. Many successful applications of PSO are reported, in which PSO has demonstrated advantages over other well-established PBMs. However, computational cost is still a great constraint for PSO, as for all other PBMs, especially in optimization problems with time-consuming objective functions. To overcome this difficulty, parallel computation has been used. The default advantage of parallel PSO (PPSO) is the reduction of computational time, and master-slave approaches exploring this characteristic are the most investigated. However, much more should be expected: it is known that PSO may be improved by more elaborate neighborhood topologies. Hence, in this work, we develop several different PPSO algorithms exploring the advantages of enhanced neighborhood topologies implemented by communication strategies in multiprocessor architectures. The proposed PPSOs have been applied to two complex and time-consuming nuclear engineering problems: reactor core design and fuel reload optimization. After exhaustive experiments, it has been concluded that PPSO still improves solutions after many thousands of iterations, making serial (non-parallel) PSO prohibitively inefficient for this kind of real-world problem, and that PPSO with more elaborate communication strategies is more efficient and robust than the master-slave model. Advantages and peculiarities of each model are carefully discussed in this work. (author)
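The point about neighborhood topologies can be made concrete with a ring topology, where each particle follows the best of its immediate ring neighbors rather than a single global best; in a parallel setting only these neighbor bests need to cross process boundaries. A serial sketch with our own toy objective and parameters (not the paper's nuclear-engineering objectives):

```python
import numpy as np

rng = np.random.default_rng(7)

def pso_ring(f, dim, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Particle swarm with a ring neighborhood: particle i is guided by the
    best personal best among {i-1, i, i+1} on the ring."""
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pbest, pfit = x.copy(), np.array([f(p) for p in x])
    for _ in range(iters):
        # rows: fitness of ring neighbors i-1, i, i+1
        neigh = np.stack([np.roll(pfit, 1), pfit, np.roll(pfit, -1)])
        choice = np.argmin(neigh, axis=0)           # 0 -> i-1, 1 -> i, 2 -> i+1
        lbest = pbest[(np.arange(n) + choice - 1) % n]
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (lbest - x)
        x = x + v
        fit = np.array([f(p) for p in x])
        better = fit < pfit                          # update personal bests
        pbest[better], pfit[better] = x[better], fit[better]
    b = np.argmin(pfit)
    return pbest[b], pfit[b]
```

Restricting communication to ring neighbors slows the spread of information, which tends to preserve diversity, the kind of topology effect the paper exploits.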

  2. Fluorous Parallel Synthesis of A Hydantoin/Thiohydantoin Library

    Science.gov (United States)

    Lu, Yimin; Zhang, Wei

    2007-01-01

    Fluorous tagging strategy is applied to solution-phase parallel synthesis of a library containing hydantoin and thiohydantoin analogs. Two perfluoroalkyl (Rf)-tagged α-amino esters each react with 6 aromatic aldehydes under reductive amination conditions. Twelve amino esters then each react with 10 isocyanates and isothiocyanates in parallel. The resulting 120 ureas and thioureas undergo spontaneous cyclization to form the corresponding hydantoins and thiohydantoins. The intermediate and final product purifications are performed with solid-phase extraction (SPE) over FluoroFlash™ cartridges, no chromatography is required. Using standard instruments and straightforward SPE technique, one chemist accomplished the 120-member library synthesis in less than 5 working days, including starting material synthesis and product analysis. PMID:15789556

  3. Spectral analysis of parallel incomplete factorizations with implicit pseudo­-overlap

    NARCIS (Netherlands)

    Magolu monga Made, Mardochée; Vorst, H.A. van der

    2000-01-01

    Two general parallel incomplete factorization strategies are investigated. The techniques may be interpreted as generalized domain decomposition methods. In contrast to classical domain decomposition methods, adjacent subdomains exchange data during the construction of the incomplete

  4. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    Science.gov (United States)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, particularly, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm consisting of the minimization of a cost function by means of a serial Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, it is a parallel Jacobi-type algorithm with alternating minimizations. This strategy is known as the chessboard type: red pixels can be updated in parallel in the same iteration since they are mutually independent, and black pixels can similarly be updated in parallel in the alternate iteration. We present parallel implementations of our algorithm for different parallel architectures, namely multicore CPUs, the Xeon Phi coprocessor, and Nvidia graphics processing units (GPUs). In all cases, our parallel algorithm performs better than the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
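The chessboard update pattern generalizes beyond phase unwrapping; the textbook example is red-black relaxation of a discrete Poisson equation, where all red cells are mutually independent and can be updated simultaneously, then all black cells. A NumPy sketch of one such solver (illustrative of the update pattern only; the paper minimizes a phase-unwrapping cost, not this Poisson problem):

```python
import numpy as np

def red_black_solve(b, sweeps=2000):
    """Red-black (chessboard) Gauss-Seidel for the 5-point Poisson problem
    laplacian(u) = b with zero boundary values and unit grid spacing.
    Cells of one color depend only on cells of the other color, so each
    half-sweep is a fully parallel update."""
    u = np.zeros_like(b)
    colors = np.indices(u.shape).sum(axis=0) % 2       # 0 = red, 1 = black
    interior = np.zeros(u.shape, dtype=bool)
    interior[1:-1, 1:-1] = True
    for _ in range(sweeps):
        for c in (0, 1):                               # red half, then black half
            new = np.zeros_like(u)
            new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                      + u[1:-1, :-2] + u[1:-1, 2:]
                                      - b[1:-1, 1:-1])
            m = interior & (colors == c)
            u[m] = new[m]
    return u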

  5. Current status of the ParInt package for parallel multivariate integration

    International Nuclear Information System (INIS)

    Doncker, E. de; Kaugars, K.; Cucos, L.; Zanny, R.

    2002-01-01

    The ParInt project focuses on the development of parallel algorithms and software for the computation of multi-variate integrals. We will give an overview of the contents and capabilities of the package. Our objective has been to provide the end-user with state of the art problem solving power. This has required work in a number of areas, including the fundamental numerical techniques, strategies for parallelization, user interfaces for general use and specific applications, and visualization of computations to analyze the mutual influences of problem characteristics and algorithm behavior. Furthermore, the integration of all the above into a versatile set of tools is aimed toward an efficient use of the available parallel or distributed computer resources. (author)

  6. A parallel domain decomposition-based implicit method for the Cahn-Hilliard-Cook phase-field equation in 3D

    KAUST Repository

    Zheng, Xiang

    2015-03-01

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors. © 2015 Elsevier Inc.

  7. A parallel domain decomposition-based implicit method for the Cahn-Hilliard-Cook phase-field equation in 3D

    Science.gov (United States)

    Zheng, Xiang; Yang, Chao; Cai, Xiao-Chuan; Keyes, David

    2015-03-01

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.

  8. A parallel domain decomposition-based implicit method for the Cahn–Hilliard–Cook phase-field equation in 3D

    International Nuclear Information System (INIS)

    Zheng, Xiang; Yang, Chao; Cai, Xiao-Chuan; Keyes, David

    2015-01-01

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors

  9. Geometric phases for mixed states during cyclic evolutions

    International Nuclear Information System (INIS)

    Fu Libin; Chen Jingling

    2004-01-01

    The geometric phases of cyclic evolutions for mixed states are discussed in the framework of unitary evolution. A canonical 1-form is defined whose line integral gives the geometric phase, which is gauge invariant. It reduces to the Aharonov and Anandan phase in the pure state case. Our definition is consistent with the phase shift in the proposed experiment (Sjoeqvist et al 2000 Phys. Rev. Lett. 85 2845) for a cyclic evolution if the unitary transformation satisfies the parallel transport condition. A comprehensive geometric interpretation is also given. It shows that the geometric phases for mixed states share the same geometric sense with the pure states

  10. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  11. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  12. Research on parallel algorithm for sequential pattern mining

    Science.gov (United States)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences related to time or other orders from the sequence database. Its initial motivation is to discover the laws of customer purchasing in a time section by finding the frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field has not been confined to the business database and has extended to new data sources such as Web and advanced science fields such as DNA analysis. The data of sequential pattern mining has characteristics as follows: mass data amount and distributed storage. Most existing sequential pattern mining algorithms haven't considered the above-mentioned characteristics synthetically. According to the traits mentioned above and combining the parallel theory, this paper puts forward a new distributed parallel algorithm SPP(Sequential Pattern Parallel). The algorithm abides by the principal of pattern reduction and utilizes the divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets applying frequent concept and search space partition theory and the second task is to structure frequent sequences using the depth-first search method at each processor. The algorithm only needs to access the database twice and doesn't generate the candidated sequences, which abates the access time and improves the mining efficiency. Based on the random data generation procedure and different information structure designed, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm. The experiments demonstrate that compared with AprioriAll, the SPP algorithm had excellent speedup factor and efficiency.

  13. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  14. Parallel PDE-Based Simulations Using the Common Component Architecture

    International Nuclear Information System (INIS)

    McInnes, Lois C.; Allan, Benjamin A.; Armstrong, Robert; Benson, Steven J.; Bernholdt, David E.; Dahlgren, Tamara L.; Diachin, Lori; Krishnan, Manoj Kumar; Kohl, James A.; Larson, J. Walter; Lefantzi, Sophia; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G.; Ray, Jaideep; Zhou, Shujia

    2006-01-01

    The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations. This chapter discusses recent work on leveraging these CCA efforts in parallel PDE-based simulations involving accelerator design, climate modeling, combustion, and accidental fires and explosions. We explain how component technology helps to address the different challenges posed by each of these applications, and we highlight how component interfaces built on existing parallel toolkits facilitate the reuse of software for parallel mesh manipulation, discretization, linear algebra, integration, optimization, and parallel data redistribution. We also present performance data to demonstrate the suitability of this approach, and we discuss strategies for applying component technologies to both new and existing applications

  15. Directed evolution strategies for enantiocomplementary haloalkane dehalogenases: from chemical waste to enantiopure building blocks.

    Science.gov (United States)

    van Leeuwen, Jan G E; Wijma, Hein J; Floor, Robert J; van der Laan, Jan-Metske; Janssen, Dick B

    2012-01-02

    We used directed evolution to obtain enantiocomplementary haloalkane dehalogenase variants that convert the toxic waste compound 1,2,3-trichloropropane (TCP) into highly enantioenriched (R)- or (S)-2,3-dichloropropan-1-ol, which can easily be converted into optically active epichlorohydrins, attractive intermediates for the synthesis of enantiopure fine chemicals. A dehalogenase with improved catalytic activity but very low enantioselectivity was used as the starting point. A strategy was developed that made optimal use of the limited capacity of the screening assay, which was based on chiral gas chromatography. We used pair-wise site-saturation mutagenesis (SSM) of all 16 noncatalytic active-site residues during the initial two rounds of evolution. The resulting best R- and S-enantioselective variants were further improved in two rounds of site-restricted mutagenesis (SRM), with incorporation of carefully selected sets of amino acids at a larger number of positions, including sites that are more distant from the active site. Finally, the most promising mutations and positions were promoted to a combinatorial library by using a multi-site mutagenesis protocol with restricted codon sets. To guide the design of partly undefined (ambiguous) codon sets for these restricted libraries we employed structural information, the results of multiple sequence alignments, and knowledge from earlier rounds. After five rounds of evolution with screening of only 5500 clones, we obtained two strongly diverged haloalkane dehalogenase variants that give access to (R)-epichlorohydrin with 90 % ee and to (S)-epichlorohydrin with 97 % ee, containing 13 and 17 mutations, respectively, around their active sites. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Surface spintronics enhanced photo-catalytic hydrogen evolution: Mechanisms, strategies, challenges and future

    Science.gov (United States)

    Zhang, Wenyan; Gao, Wei; Zhang, Xuqiang; Li, Zhen; Lu, Gongxuan

    2018-03-01

    Hydrogen is a green energy carrier with high enthalpy and zero polluting emissions. Photocatalytic hydrogen evolution (HER) is a sustainable and promising way to generate hydrogen. Despite great achievements in photocatalytic HER research, its efficiency is still limited by undesirable electron-transfer losses, high HER over-potentials, and the low stability of some photocatalysts, which lead to unsatisfactory HER performance and anti-photocorrosion properties. In recent years, many spintronics studies have shown enhancing effects on photocatalytic HER. For example, it has been reported that spin-polarized photoelectrons can yield higher photocurrents and HER turnover frequencies (up to 200%) in photocatalytic systems. Two strategies have been developed for polarizing electron spin, resorting to the heavy-atom effect and to magnetic induction, respectively. Both theoretical and experimental studies show that controlling the spin state of OH radicals in photocatalytic reactions can not only decrease the OER over-potential of water splitting (even to 0 eV) but also improve the stability and charge lifetime of photocatalysts. A convenient strategy has been developed for aligning the spin state of OH radicals by utilizing chiral molecules to spin-filter photoelectrons. By chiral-induced spin filtering, electron polarization can approach 74%, significantly higher than in some traditional transition-metal devices. These achievements demonstrate the bright future of spintronics in enhancing photocatalytic HER; nevertheless, little work has systematically reviewed and analyzed this topic. This review focuses on recent achievements of spintronics in photocatalytic HER research, and systematically summarizes the related mechanisms and the important strategies proposed. Besides, the challenges and developing trends of spintronics-enhanced photocatalytic HER research are discussed, with the expectation of helping to comprehend and explore such interdisciplinary research.

  17. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, parallel models of computation, parallel algorithms, and lower bounds on parallel sorting. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another setting where the algorithms apply is shared-memory SIMD (single instruction stream, multiple data stream) computers, in which the whole sequence to be sorted can fit in the
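
    As a concrete taste of the linear-array algorithms such a text covers, odd-even transposition sort is the classic example: n synchronous phases in which every even-indexed (then odd-indexed) pair of neighbours does an independent compare-exchange, so each pair could live on its own processor. A minimal serial simulation of the phases (not taken from the book itself):

```python
def odd_even_transposition_sort(a):
    # n synchronous phases; in phase p, all pairs (i, i+1) with
    # i % 2 == p % 2 compare-exchange independently of one another --
    # the textbook algorithm for an n-processor linear array
    a = list(a)
    n = len(a)
    for phase in range(n):
        for i in range(phase % 2, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 1, 4, 2, 3]))  # → [1, 2, 3, 4, 5]
```

    On a real linear array all the inner-loop exchanges of one phase happen simultaneously, giving O(n) time with n processors versus O(n log n) serial comparisons.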

  18. Decomposition based parallel processing technique for efficient collaborative optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon

    2000-01-01

    In practical design studies, most designers solve multidisciplinary problems with complex design structure, involving hundreds of analyses and thousands of variables. The sequence in which these problems are processed affects the speed of the total design cycle, so it is very important for the designer to reorder the original design processes to minimize total cost and time. This is accomplished by decomposing the large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems, using a genetic algorithm to raise design efficiency, and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology.

  19. A tool for simulating parallel branch-and-bound methods

    Science.gov (United States)

    Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail

    2016-01-01

    The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution; the design and study of load balancing algorithms is therefore a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, search tree sizes, and supercomputer interconnect characteristics, thereby fostering deep study of load distribution strategies. The process of resolving the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
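
    The abstract's two core modeling ideas, replacing B&B by a stochastic branching process and tracking logical time across data exchanges, can be illustrated with a toy simulator. Everything below (the work-stealing rule, the parameters, the function name) is a hypothetical sketch of the approach, not the published tool:

```python
import random

def simulate(n_procs, depth, p_branch, seed=0):
    # toy model: stochastic branching + work stealing + logical clocks
    rng = random.Random(seed)
    queues = [[] for _ in range(n_procs)]
    queues[0].append(depth)      # root subproblem starts on processor 0
    clocks = [0] * n_procs       # logical time per processor
    processed = 0
    while any(queues):
        for p in range(n_procs):
            if queues[p]:
                d = queues[p].pop()          # depth-first local work
                clocks[p] += 1
                processed += 1
                if d > 0 and rng.random() < p_branch:
                    queues[p] += [d - 1, d - 1]   # node branches in two
            else:
                # load balancing: steal the oldest node from the busiest processor
                donor = max(range(n_procs), key=lambda q: len(queues[q]))
                if queues[donor]:
                    queues[p].append(queues[donor].pop(0))
                    # the message exchange advances both logical clocks
                    t = max(clocks[p], clocks[donor]) + 1
                    clocks[p] = clocks[donor] = t
    return processed, max(clocks)

print(simulate(n_procs=4, depth=10, p_branch=0.9))
```

    The returned pair (total nodes processed, final logical makespan) is the kind of metric such a simulator lets one compare across load balancing policies without running a real B&B solve.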

  20. Engineering-Based Thermal CFD Simulations on Massive Parallel Systems

    KAUST Repository

    Frisch, Jérôme

    2015-05-22

    The development of parallel Computational Fluid Dynamics (CFD) codes is a challenging task that entails efficient parallelization concepts and strategies in order to achieve good scalability when running those codes on modern supercomputers with several thousands to millions of cores. In this paper, we present a hierarchical data structure for massive parallel computations that supports the coupling of a Navier–Stokes-based fluid flow code with the Boussinesq approximation in order to address complex thermal scenarios for energy-related assessments. The data structure is specifically designed to support interactive data exploration and visualization during runtime of the simulation code, a major shortcoming of traditional high-performance computing (HPC) simulation codes. We further show and discuss speed-up values obtained on one of Germany’s top-ranked supercomputers with up to 140,000 processes and present simulation results for different engineering-based thermal problems.

  1. Parallel Genetic and Phenotypic Evolution of DNA Superhelicity in Experimental Populations of Escherichia coli

    DEFF Research Database (Denmark)

    Crozat, Estelle; Winkworth, Cynthia; Gaffé, Joël

    2010-01-01

    DNA supercoiling is the master function that interconnects chromosome structure and global gene transcription. This function has recently been shown to be under strong selection in Escherichia coli. During the evolution of 12 initially identical populations propagated in a defined environment […], indicate that changes in DNA superhelicity have been important in the evolution of these populations. Surprisingly, however, most of the evolved alleles we tested had either no detectable or slightly deleterious effects on fitness, despite these signatures of positive selection.

  2. Grasping convergent evolution in syngnathids: a unique tale of tails

    Science.gov (United States)

    Neutens, C; Adriaens, D; Christiaens, J; De Kegel, B; Dierick, M; Boistel, R; Van Hoorebeke, L

    2014-01-01

    Seahorses and pipehorses both possess a prehensile tail, a unique characteristic among teleost fishes, allowing them to grasp and hold onto substrates such as sea grasses. Although studies have focused on tail grasping, the pattern of evolutionary transformations that made this possible is poorly understood. Recent phylogenetic studies show that the prehensile tail evolved independently in different syngnathid lineages, including seahorses, Haliichthys taeniophorus and several types of so-called pipehorses. This study explores the pattern that characterizes this convergent evolution towards a prehensile tail, by comparing the caudal musculoskeletal organization, as well as passive bending capacities in pipefish (representing the ancestral state), pipehorse, seahorse and H. taeniophorus. To study the complex musculoskeletal morphology, histological sectioning, μCT-scanning and phase contrast synchrotron scanning were combined with virtual 3D-reconstructions. Results suggest that the independent evolution towards tail grasping in syngnathids reflects at least two quite different strategies in which the ancestral condition of a heavy plated and rigid system became modified into a highly flexible one. Intermediate skeletal morphologies (between the ancestral condition and seahorses) could be found in the pygmy pipehorses and H. taeniophorus, which are phylogenetically closely affiliated with seahorses. This study suggests that the characteristic parallel myoseptal organization as already described in seahorse (compared with a conical organization in pipefish and pipehorse) may not be a necessity for grasping, but represents an apomorphy for seahorses, as this pattern is not found in other syngnathid species possessing a prehensile tail. One could suggest that the functionality of grasping evolved before the specialized, parallel myoseptal organization seen in seahorses. However, as the grasping system in pipehorses is a totally different one, this cannot be

  3. Partial fourier and parallel MR image reconstruction with integrated gradient nonlinearity correction.

    Science.gov (United States)

    Tao, Shengzhen; Trzasko, Joshua D; Shu, Yunhong; Weavers, Paul T; Huston, John; Gray, Erin M; Bernstein, Matt A

    2016-06-01

    To describe how integrated gradient nonlinearity (GNL) correction can be used within noniterative partial Fourier (homodyne) and parallel (SENSE and GRAPPA) MR image reconstruction strategies, and demonstrate that performing GNL correction during, rather than after, these routines mitigates the image blurring and resolution loss caused by postreconstruction image domain based GNL correction. Starting from partial Fourier and parallel magnetic resonance imaging signal models that explicitly account for GNL, noniterative image reconstruction strategies for each accelerated acquisition technique are derived under the same core mathematical assumptions as their standard counterparts. A series of phantom and in vivo experiments on retrospectively undersampled data were performed to investigate the spatial resolution benefit of integrated GNL correction over conventional postreconstruction correction. Phantom and in vivo results demonstrate that the integrated GNL correction reduces the image blurring introduced by the conventional GNL correction, while still correcting GNL-induced coarse-scale geometrical distortion. Images generated from undersampled data using the proposed integrated GNL strategies offer superior depiction of fine image detail, for example, phantom resolution inserts and anatomical tissue boundaries. Noniterative partial Fourier and parallel imaging reconstruction methods with integrated GNL correction reduce the resolution loss that occurs during conventional postreconstruction GNL correction while preserving the computational efficiency of standard reconstruction techniques. Magn Reson Med 75:2534-2544, 2016. © 2015 Wiley Periodicals, Inc.

  4. A Parallel Restoration for Black Start of Microgrids Considering Characteristics of Distributed Generations

    Directory of Open Access Journals (Sweden)

    Jing Wang

    2017-12-01

    Black start capability is vital for microgrids and can potentially improve the reliability of the power grid. This paper proposes a black start strategy for microgrids based on parallel restoration. Considering the characteristics of distributed generations (DGs), an evaluation model for assessing the black start capability of DGs is established by adopting the variation coefficient method. The DGs with good black start capability, selected by a diversity sequence method, are restored first in parallel under the constraints of the DGs and the network. During the selection of recovery paths, line weight and node importance degree are proposed, taking into account the topological importance of nodes, the importance of loads, and the backbone network restoration time; the overall optimization of the reconstructed network is thereby realized. Finally, simulation results verify the feasibility and effectiveness of the strategy.
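
    The variation coefficient method mentioned in the abstract weights each evaluation indicator by its relative dispersion across the candidate DGs: indicators that discriminate more strongly between units receive larger weights. A minimal sketch of that weighting; the indicator names, DG types, and numbers are invented for illustration:

```python
def variation_coefficient_weights(matrix):
    # matrix[i][j]: value of indicator j for candidate DG i (all positive);
    # an indicator's weight grows with its relative dispersion sigma / mu
    n, m = len(matrix), len(matrix[0])
    v = []
    for j in range(m):
        col = [matrix[i][j] for i in range(n)]
        mu = sum(col) / n
        sigma = (sum((x - mu) ** 2 for x in col) / n) ** 0.5
        v.append(sigma / mu)
    total = sum(v)
    return [x / total for x in v]      # weights normalized to sum to 1

def black_start_scores(matrix):
    # weighted sum of indicators scores each DG's black start capability
    w = variation_coefficient_weights(matrix)
    return [sum(wi * xi for wi, xi in zip(w, row)) for row in matrix]

# invented, normalized indicators: ramp rate, capacity, storage duration
dgs = [[0.9, 0.6, 0.8],   # e.g. a diesel unit
       [0.4, 0.9, 0.3],   # e.g. PV with small storage
       [0.7, 0.7, 0.6]]   # e.g. a micro-turbine
print(black_start_scores(dgs))
```

    With these toy numbers the first unit receives the highest composite score and would be restored first.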

  5. Results of Evolution Supervised by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Lorentz JÄNTSCHI

    2010-09-01

    The efficiency of a genetic algorithm is frequently assessed using a series of evolution operators, such as crossover and mutation operators, or other dynamic parameters. The present paper reviews the main results of evolution supervised by genetic algorithms used to identify solutions to hard agricultural and horticultural problems, and discusses the results of applying a genetic algorithm to structure-activity relationships in terms of the behavior of the supervised evolution. A genetic algorithm was developed and implemented in order to identify the optimal solution, in terms of estimation power, of a multiple linear regression approach to structure-activity relationships. Three survival and three selection strategies (proportional, deterministic, and tournament) were investigated in order to identify the survival-selection strategy best able to lead to the model with the highest estimation power. The Molecular Descriptors Family for structure characterization of a sample of 206 polychlorinated biphenyls with measured octanol-water partition coefficients was used as a case study. Evolution under different selection and survival strategies proved to create populations of genotypes with different diversity and variability in the evolution space. Under a series of comparison criteria these populations proved to be grouped, and the groups were shown to be statistically different from one another. Conclusions about genetic algorithm evolution according to a number of criteria were also highlighted.
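
    Of the three selection strategies the paper compares, tournament selection is the easiest to sketch: draw k genotypes at random and keep the fittest of them. The population, fitness values, and tournament size below are invented for illustration:

```python
import random

def tournament_select(population, fitness, k=3, rng=random.Random(42)):
    # draw k distinct genotypes at random; the fittest contender is selected
    # (fixed default rng only to keep the example deterministic)
    contenders = rng.sample(range(len(population)), k)
    return population[max(contenders, key=lambda i: fitness[i])]

# invented toy population with fitness values
pop = ["g0", "g1", "g2", "g3", "g4"]
fit = [0.1, 0.9, 0.4, 0.7, 0.2]
picks = [tournament_select(pop, fit) for _ in range(1000)]
print(picks.count("g1"), picks.count("g0"))
```

    With k = 3 the least-fit genotype g0 can never win a tournament (two fitter contenders are always present), while g1 wins every tournament it enters; selection pressure is tuned through k, which is why tournament selection behaves so differently from proportional selection in studies like this one.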

  6. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
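
    The SENSE-type reconstruction described above can be reduced to its essence for two-fold (R = 2) undersampling with two coils: each aliased pixel is a coil-sensitivity-weighted sum of two true pixel values, and unaliasing means solving a small linear system per pixel. A minimal sketch with invented sensitivity and pixel values (real data are complex-valued and use more coils than fold-over factor, solved by least squares):

```python
def sense_unfold(aliased, sens):
    # aliased[c]: aliased pixel value measured by coil c (two coils, R = 2)
    # sens[c][k]: coil c sensitivity at the two superimposed pixel locations
    # unaliasing = solving the 2x2 system  aliased = S @ rho  for rho
    (s00, s01), (s10, s11) = sens
    det = s00 * s11 - s01 * s10
    rho0 = (aliased[0] * s11 - aliased[1] * s01) / det
    rho1 = (aliased[1] * s00 - aliased[0] * s10) / det
    return rho0, rho1

# invented sensitivities and true pixel values; forward-model the aliasing
sens = [(1.0, 0.4), (0.3, 0.9)]
true = (2.0, 5.0)
aliased = [sens[c][0] * true[0] + sens[c][1] * true[1] for c in range(2)]
print(sense_unfold(aliased, sens))  # recovers approximately (2.0, 5.0)
```

    The need to invert this system per pixel is also why SENSE reconstruction amplifies noise where coil sensitivities are nearly parallel (the g-factor penalty mentioned in the parallel imaging literature).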

  7. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…

  8. Mutations in AtPS1 (Arabidopsis thaliana parallel spindle 1 lead to the production of diploid pollen grains.

    Directory of Open Access Journals (Sweden)

    Isabelle d'Erfurth

    2008-11-01

    Polyploidy has had a considerable impact on the evolution of many eukaryotes, especially angiosperms. Indeed, most, if not all, angiosperms have experienced at least one round of polyploidy during the course of their evolution, and many important crop plants are current polyploids. The occurrence of 2n gametes (diplogametes) in diploid populations is widely recognised as the major source of polyploid formation. However, limited information is available on the genetic control of diplogamete production. Here, we describe the isolation and characterisation of the first gene, AtPS1 (Arabidopsis thaliana Parallel Spindle 1), implicated in the formation of a high frequency of diplogametes in plants. Atps1 mutants produce diploid male spores, diploid pollen grains, and spontaneous triploid plants in the next generation. Female meiosis is not affected in the mutant. We demonstrated that abnormal spindle orientation at male meiosis II leads to diplogamete formation. Most of the parent's heterozygosity is therefore conserved in the Atps1 diploid gametes, which is a key issue for plant breeding. The AtPS1 protein is conserved throughout the plant kingdom and carries domains suggestive of a regulatory function. The isolation of a gene involved in diplogamete production opens the way for new strategies in plant breeding programmes and progress in evolutionary studies.

  9. Parallelization of a Quantum-Classic Hybrid Model For Nanoscale Semiconductor Devices

    Directory of Open Access Journals (Sweden)

    Oscar Salas

    2011-07-01

    The expensive reengineering of sequential software and the difficulty of parallel programming are two of the many technical and economic obstacles to the wide use of HPC. We investigate the possibility of rapidly improving the performance of a serial numerical code for simulating the transport of charged carriers in a Double-Gate MOSFET. We introduce the Drift-Diffusion-Schrödinger-Poisson (DDSP) model and study a rapid parallelization strategy for the numerical procedure on shared-memory architectures.

  10. Decoupled Sliding Mode Control for a Novel 3-DOF Parallel Manipulator with Actuation Redundancy

    Directory of Open Access Journals (Sweden)

    Niu Xuemei

    2015-05-01

    This paper presents a decoupled nonsingular terminal sliding mode controller (DNTSMC) for a novel 3-DOF parallel manipulator with actuation redundancy. Based on kinematic analysis, the inverse dynamic model of the redundantly actuated parallel manipulator is formulated in the task space using the Lagrangian formalism and decoupled into three entirely independent subsystems under generalized coordinates, significantly reducing system complexity. On this dynamic model, a decoupled sliding mode control strategy is proposed for the parallel manipulator: a nonsingular terminal sliding mode controller is designed for each subsystem, driving the states of the three subsystems to their equilibrium points simultaneously through two intermediate variables. Additionally, an RBF neural network is used to compensate the cross-coupling force and gravity to enhance control precision. Simulation and experimental results show that the proposed DNTSMC achieves better control performance than a conventional sliding mode controller (SMC) and than the DNTSMC without the compensator.

  11. Fast electrostatic force calculation on parallel computer clusters

    International Nuclear Information System (INIS)

    Kia, Amirali; Kim, Daejoong; Darve, Eric

    2008-01-01

    The fast multipole method (FMM) and smooth particle mesh Ewald (SPME) are well-known fast algorithms for evaluating long-range electrostatic interactions in molecular dynamics and other fields. FMM is a multi-scale method which reduces the computation cost by approximating the potential due to a group of particles at a large distance using a few multipole functions; the algorithm scales like O(N) for N particles. SPME is an O(N ln N) method based on interpolating the Fourier-space part of the Ewald sum and evaluating the resulting convolutions using the fast Fourier transform (FFT). Both algorithms suffer from relatively poor efficiency on large parallel machines, especially for mid-size problems of around hundreds of thousands of atoms. A variation of the FMM, called PWA, based on plane wave expansions is presented in this paper. A new parallelization strategy for PWA, which takes advantage of the specific form of this expansion, is described. Its parallel efficiency is compared with SPME through detailed time measurements on two different computer clusters.
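
    The core FMM idea mentioned above, summarizing a distant group of particles by a few multipole terms, can be seen in one dimension with just the lowest-order (monopole) term. The charges and positions below are invented for the illustration; a real FMM keeps higher-order terms and a hierarchy of clusters:

```python
def direct_potential(target, sources):
    # exact pairwise sum of q / r contributions
    return sum(q / abs(target - x) for x, q in sources)

def monopole_potential(target, sources):
    # lowest-order multipole: the total charge placed at the charge-weighted
    # center stands in for the whole (well-separated) cluster
    total = sum(q for _, q in sources)
    center = sum(x * q for x, q in sources) / total
    return total / abs(target - center)

# 1-D toy example: a tight cluster near the origin, a far-away target
sources = [(-0.2, 1.0), (0.1, 2.0), (0.3, 1.5)]
exact = direct_potential(100.0, sources)
approx = monopole_potential(100.0, sources)
print(exact, approx)
```

    Because the cluster diameter is tiny relative to the target distance, one division replaces three, and the error shrinks rapidly with separation; this is the trade that lets FMM reach O(N) cost.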

  12. Multi Scale Finite Element Analyses By Using SEM-EBSD Crystallographic Modeling and Parallel Computing

    International Nuclear Information System (INIS)

    Nakamachi, Eiji

    2005-01-01

    A crystallographic homogenization procedure is introduced into the conventional static-explicit and dynamic-explicit finite element formulations to develop a multi-scale (double-scale) analysis code that predicts the plastic-strain-induced texture evolution, yield loci, and formability of sheet metal. The double-scale structure consists of a crystal aggregation (the micro-structure) and a macroscopic elastic-plastic continuum. First, crystal morphologies are measured using SEM-EBSD apparatus, and a unit cell of the micro-structure is defined that satisfies the periodicity condition at the real scale of the polycrystal. Next, this crystallographic homogenization FE code is applied to 3N pure-iron and 'benchmark' aluminum A6022 polycrystal sheets. It reveals that the initial crystal orientation distribution (the texture) strongly affects the plastic-strain-induced texture, the anisotropic hardening evolution, and the sheet deformation. Since the multi-scale finite element analysis requires a large computation time, a parallel computing technique using a PC cluster is developed for quick calculation. In this parallelization scheme, a dynamic workload balancing technique is introduced for quick and efficient calculations.

  13. Bacteria vs. bacteriophages: parallel evolution of immune arsenals

    Directory of Open Access Journals (Sweden)

    Muhammad Abu Bakr Shabbir

    2016-08-01

    Bacteriophages are the most common entities on earth and represent a constant challenge to bacterial populations. To fend off bacteriophage infection, bacteria have evolved immune systems that avert phage adsorption and block entry of invader DNA. They developed restriction-modification systems and mechanisms to abort infection and interfere with virion assembly, as well as the more recently recognized clustered regularly interspaced short palindromic repeats (CRISPR). In response to bacterial immune systems, bacteriophages evolved resistance mechanisms in parallel, such as anti-CRISPR systems to counterattack bacterial CRISPR-Cas systems, in a continuing evolutionary arms race between virus and host. In turn, it is fundamental to the survival of the bacterial cell to evolve systems that combat these bacteriophage counter-strategies.

  14. Synchronous parallel kinetic Monte Carlo for continuum diffusion-reaction systems

    International Nuclear Information System (INIS)

    Martinez, E.; Marian, J.; Kalos, M.H.; Perlado, J.M.

    2008-01-01

    A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm is intended as a generalization of the standard n-fold kMC method, and is trivially implemented in parallel architectures. In its present form, the algorithm is not rigorous in the sense that boundary conflicts are ignored. We demonstrate, however, that, in their absence, or if they were correctly accounted for, our algorithm solves the same master equation as the serial method. We test the validity and parallel performance of the method by solving several pure diffusion problems (i.e. with no particle interactions) with known analytical solution. We also study diffusion-reaction systems with known asymptotic behavior and find that, for large systems with interaction radii smaller than the typical diffusion length, boundary conflicts are negligible and do not affect the global kinetic evolution, which is seen to agree with the expected analytical behavior. Our method is a controlled approximation in the sense that the error incurred by ignoring boundary conflicts can be quantified intrinsically, during the course of a simulation, and decreased arbitrarily (controlled) by modifying a few problem-dependent simulation parameters
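
    The synchronicity idea in the abstract, all domains sharing a common clock by padding lower-rate domains with null events, can be sketched in a toy form. The rates and function name are invented; like the basic algorithm described above, this sketch ignores boundary conflicts between domains:

```python
import random

def synchronous_pkmc(domain_rates, n_steps, seed=1):
    # the maximum total rate R_max defines a common clock; at each step,
    # domain k fires a real event with probability R_k / R_max and a null
    # event otherwise, so all domains pass through identical time instants
    rng = random.Random(seed)
    r_max = max(domain_rates)
    t = 0.0
    real_events = [0] * len(domain_rates)
    for _ in range(n_steps):
        t += rng.expovariate(r_max)             # shared time increment
        for k, r_k in enumerate(domain_rates):
            if rng.random() < r_k / r_max:      # real event vs. null event
                real_events[k] += 1
    return t, real_events

t, events = synchronous_pkmc([2.0, 1.0, 0.5], n_steps=10000)
print(round(t, 1), events)
```

    Each domain's count of real events is proportional to its rate, while all domains agree on the elapsed time, which is the property that makes the per-domain work trivially parallelizable.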

  15. A tool for simulating parallel branch-and-bound methods

    Directory of Open Access Journals (Sweden)

    Golubeva Yana

    2016-01-01

    The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution; the design and study of load balancing algorithms is therefore a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, search tree sizes, and supercomputer interconnect characteristics, thereby fostering deep study of load distribution strategies. The process of resolving the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.

  16. Load-balancing techniques for a parallel electromagnetic particle-in-cell code

    Energy Technology Data Exchange (ETDEWEB)

    PLIMPTON,STEVEN J.; SEIDEL,DAVID B.; PASIK,MICHAEL F.; COATS,REBECCA S.

    2000-01-01

    QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.

  17. Load-balancing techniques for a parallel electromagnetic particle-in-cell code

    International Nuclear Information System (INIS)

    Plimpton, Steven J.; Seidel, David B.; Pasik, Michael F.; Coats, Rebecca S.

    2000-01-01

    QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER

  18. Time evolution in quantum cosmology

    International Nuclear Information System (INIS)

    Lawrie, Ian D.

    2011-01-01

    A commonly adopted relational account of time evolution in generally covariant systems, and more specifically in quantum cosmology, is argued to be unsatisfactory, insofar as it describes evolution relative to observed readings of a clock that does not exist as a bona fide observable object. A modified strategy is proposed, in which evolution relative to the proper time that elapses along the worldline of a specific observer can be described through the introduction of a "test clock," regarded as internal to, and hence unobservable by, that observer. This strategy is worked out in detail in the case of a homogeneous cosmology, in the context of both a conventional Schroedinger quantization scheme, and a 'polymer' quantization scheme of the kind inspired by loop quantum gravity. Particular attention is given to limitations placed on the observability of time evolution by the requirement that a test clock should contribute only a negligible energy to the Hamiltonian constraint. It is found that suitable compromises are available, in which the clock energy is reasonably small, while Dirac observables are reasonably sharply defined.

  19. Parallel processing based decomposition technique for efficient collaborative optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon

    2001-01-01

    In practical design studies, most designers solve multidisciplinary problems within large and complex design systems. These multidisciplinary problems involve hundreds of analyses and thousands of variables. The sequence of processes used to solve these problems affects the speed of the total design cycle. It is therefore very important for designers to reorder the original design processes to minimize total computational cost. This is accomplished by decomposing a large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems that raises design efficiency by using a genetic algorithm, and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology

  20. A SPECT reconstruction method for extending parallel to non-parallel geometries

    International Nuclear Information System (INIS)

    Wen Junhai; Liang Zhengrong

    2010-01-01

    Due to its simplicity, parallel-beam geometry is usually assumed for the development of image reconstruction algorithms. The established reconstruction methodologies are then extended to fan-beam, cone-beam and other non-parallel geometries for practical application. This situation occurs for quantitative SPECT (single photon emission computed tomography) imaging in inverting the attenuated Radon transform. Novikov reported an explicit parallel-beam formula for the inversion of the attenuated Radon transform in 2000. Thereafter, a formula for fan-beam geometry was reported by Bukhgeim and Kazantsev (2002 Preprint N. 99 Sobolev Institute of Mathematics). At the same time, we presented a formula for varying focal-length fan-beam geometry. In some non-parallel geometries, however, the reconstruction formula is so implicit that no explicit formula can be obtained. In this work, we propose a unified reconstruction framework for extending parallel-beam geometry to any non-parallel geometry using ray-driven techniques. Studies by computer simulations demonstrated the accuracy of the presented unified reconstruction framework for extending parallel-beam to non-parallel geometries in inverting the attenuated Radon transform.
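The ray-driven technique at the heart of such a framework can be sketched for the simple parallel-beam case: for each detector bin, sample the image along the ray and sum. This is a generic nearest-neighbour projector, not the paper's attenuated-transform inversion:

```python
import math

def project(img, theta, n_samples=64):
    """Ray-driven parallel-beam projector: for each detector bin,
    march along the ray through a square image with nearest-neighbour
    sampling and accumulate the line integral."""
    n = len(img)
    c, s = math.cos(theta), math.sin(theta)
    sino = []
    for d in range(n):
        t = d - (n - 1) / 2.0  # detector coordinate
        acc = 0.0
        for k in range(n_samples):
            u = (k - (n_samples - 1) / 2.0) * n / n_samples  # along-ray coord
            x = t * c - u * s + (n - 1) / 2.0
            y = t * s + u * c + (n - 1) / 2.0
            i, j = int(round(y)), int(round(x))
            if 0 <= i < n and 0 <= j < n:
                acc += img[i][j] * (n / n_samples)  # sample * step length
        sino.append(acc)
    return sino
```

Because the sampling is defined per ray rather than per image axis, the same loop structure carries over to fan-beam or cone-beam geometries by changing only how (x, y) depends on the detector and ray parameters.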

  1. Parallel and orthogonal stimulus in ultradiluted neural networks

    International Nuclear Information System (INIS)

    Sobral, G. A. Jr.; Vieira, V. M.; Lyra, M. L.; Silva, C. R. da

    2006-01-01

    Extending a model due to Derrida, Gardner, and Zippelius, we have studied the recognition ability of an extreme and asymmetrically diluted version of the Hopfield model for associative memory by including the effect of a stimulus in the dynamics of the system. We obtain exact results for the dynamic evolution of the average network superposition. The stimulus field was considered as proportional to the overlapping of the state of the system with a particular stimulated pattern. Two situations were analyzed, namely, the external stimulus acting on the initialization pattern (parallel stimulus) and the external stimulus acting on a pattern orthogonal to the initialization one (orthogonal stimulus). In both cases, we obtained the complete phase diagram in the parameter space composed of the stimulus field, thermal noise, and network capacity. Our results show that the system improves its recognition ability for parallel stimulus. For orthogonal stimulus two recognition phases emerge with the system locking at the initialization or stimulated pattern. We confront our analytical results with numerical simulations for the noiseless case T=0
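The dynamic evolution of the average network overlap under a stimulus field can be illustrated with a toy mean-field iteration. This map is a schematic stand-in for the idea of a stimulated overlap dynamics, not the paper's exact equations for the diluted model:

```python
import math

def overlap_trajectory(m0, h, beta, steps=50):
    """Iterate a toy mean-field map for the overlap m of a network with
    a stimulated pattern: the stimulus field h biases the dynamics
    toward that pattern, and beta plays the role of inverse thermal
    noise (illustrative parameterization)."""
    m = m0
    traj = [m]
    for _ in range(steps):
        m = math.tanh(beta * (m + h))
        traj.append(m)
    return traj
```

For a parallel stimulus (h aligned with the initialization pattern) the overlap is driven upward; setting h = 0 recovers the unstimulated retrieval dynamics.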

  2. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  3. Costly advertising and the evolution of cooperation.

    Directory of Open Access Journals (Sweden)

    Markus Brede

    Full Text Available In this paper, I investigate the co-evolution of fast and slow strategy spread and game strategies in populations of spatially distributed agents engaged in a one-off evolutionary dilemma game. Agents are characterized by a pair of traits, a game strategy (cooperate or defect) and a binary 'advertising' strategy (advertise or don't advertise). Advertising, which comes at a cost [Formula: see text], allows investment into faster propagation of the agents' traits to adjacent individuals. Importantly, game strategy and advertising strategy are subject to the same evolutionary mechanism. Via analytical reasoning and numerical simulations I demonstrate that a range of advertising costs exists, such that the prevalence of cooperation is significantly enhanced through co-evolution. Linking costly replication to the success of cooperators exposes a novel co-evolutionary mechanism that might contribute towards a better understanding of the origins of cooperation-supporting heterogeneity in agent populations.
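The model's mechanics can be sketched as a toy 1-D simulation: agents carry a (strategy, advertise) pair, advertisers pay a cost but are imitated preferentially. All parameter names and values below are illustrative assumptions, not the paper's:

```python
import random

def simulate(n=100, cost=0.2, b=1.5, steps=200, seed=3):
    """Toy 1-D co-evolution of game strategy and advertising: agents
    play a one-off dilemma with their two neighbours; an agent imitates
    a better-scoring neighbour, and advertising neighbours spread their
    traits at a higher rate. Returns the final fraction of cooperators."""
    rng = random.Random(seed)
    strat = [rng.choice("CD") for _ in range(n)]
    adv = [rng.random() < 0.5 for _ in range(n)]

    def payoff(i):
        p = 0.0
        for j in ((i - 1) % n, (i + 1) % n):
            if strat[i] == "C":
                p += 1.0 if strat[j] == "C" else 0.0
            else:
                p += b if strat[j] == "C" else 0.1
        return p - (cost if adv[i] else 0.0)  # advertising is costly

    for _ in range(steps):
        i = rng.randrange(n)
        j = rng.choice(((i - 1) % n, (i + 1) % n))
        rate = 1.0 if adv[j] else 0.5  # advertisers propagate faster
        if payoff(j) > payoff(i) and rng.random() < rate:
            strat[i], adv[i] = strat[j], adv[j]  # both traits copied together
    return strat.count("C") / n
```

The key feature mirrored from the abstract is that both traits are copied under the same evolutionary mechanism, so the advertising cost couples back onto the spread of the game strategy.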

  4. Costly advertising and the evolution of cooperation.

    Science.gov (United States)

    Brede, Markus

    2013-01-01

    In this paper, I investigate the co-evolution of fast and slow strategy spread and game strategies in populations of spatially distributed agents engaged in a one-off evolutionary dilemma game. Agents are characterized by a pair of traits, a game strategy (cooperate or defect) and a binary 'advertising' strategy (advertise or don't advertise). Advertising, which comes at a cost [Formula: see text], allows investment into faster propagation of the agents' traits to adjacent individuals. Importantly, game strategy and advertising strategy are subject to the same evolutionary mechanism. Via analytical reasoning and numerical simulations I demonstrate that a range of advertising costs exists, such that the prevalence of cooperation is significantly enhanced through co-evolution. Linking costly replication to the success of cooperators exposes a novel co-evolutionary mechanism that might contribute towards a better understanding of the origins of cooperation-supporting heterogeneity in agent populations.

  5. Costly Advertising and the Evolution of Cooperation

    Science.gov (United States)

    Brede, Markus

    2013-01-01

    In this paper, I investigate the co-evolution of fast and slow strategy spread and game strategies in populations of spatially distributed agents engaged in a one-off evolutionary dilemma game. Agents are characterized by a pair of traits, a game strategy (cooperate or defect) and a binary ‘advertising’ strategy (advertise or don’t advertise). Advertising, which comes at a cost, allows investment into faster propagation of the agents’ traits to adjacent individuals. Importantly, game strategy and advertising strategy are subject to the same evolutionary mechanism. Via analytical reasoning and numerical simulations I demonstrate that a range of advertising costs exists, such that the prevalence of cooperation is significantly enhanced through co-evolution. Linking costly replication to the success of cooperators exposes a novel co-evolutionary mechanism that might contribute towards a better understanding of the origins of cooperation-supporting heterogeneity in agent populations. PMID:23861752

  6. Parallel constraint satisfaction in memory-based decisions.

    Science.gov (United States)

    Glöckner, Andreas; Hodges, Sara D

    2011-01-01

    Three studies sought to investigate decision strategies in memory-based decisions and to test the predictions of the parallel constraint satisfaction (PCS) model for decision making (Glöckner & Betsch, 2008). Time pressure was manipulated and the model was compared against simple heuristics (take the best and equal weight) and a weighted additive strategy. From PCS we predicted that fast intuitive decision making is based on compensatory information integration and that decision time increases and confidence decreases with increasing inconsistency in the decision task. In line with these predictions we observed a predominant usage of compensatory strategies under all time-pressure conditions and even with decision times as short as 1.7 s. For a substantial number of participants, choices and decision times were best explained by PCS, but there was also evidence for use of simple heuristics. The time-pressure manipulation did not significantly affect decision strategies. Overall, the results highlight intuitive, automatic processes in decision making and support the idea that human information-processing capabilities are less severely bounded than often assumed.
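The spirit of a parallel-constraint-satisfaction account — activations spreading over a weighted network until they stabilize, with iteration count as a proxy for decision time — can be sketched as follows. The update rule, weights and parameters are generic illustrations, not the published PCS parameterization:

```python
def pcs(w, source, decay=0.9, eps=1e-4, max_iter=1000):
    """Minimal constraint-satisfaction iteration: node activations are
    updated in parallel from a clamped source node over symmetric
    weights w (positive = support, negative = inhibition) until the
    largest change drops below eps. Returns (activations, iterations)."""
    n = len(w)
    a = [0.0] * n
    a[source] = 1.0  # the clamped 'general validity' source node
    for it in range(1, max_iter + 1):
        new = []
        for i in range(n):
            if i == source:
                new.append(1.0)
                continue
            net = sum(w[i][j] * a[j] for j in range(n))
            # excitatory input pushes toward +1, inhibitory toward -1
            x = decay * a[i] + net * ((1 - a[i]) if net > 0 else (1 + a[i]))
            new.append(max(-1.0, min(1.0, x)))
        if max(abs(x - y) for x, y in zip(new, a)) < eps:
            return new, it
        a = new
    return a, max_iter
```

Consistent evidence settles quickly (few iterations, large activation gap between options), while inconsistent evidence yields longer settling times and smaller gaps — the qualitative predictions tested in the studies above.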

  7. The Evolution of Enterprise Organization Designs

    Directory of Open Access Journals (Sweden)

    Jay R. Galbraith

    2012-08-01

    Full Text Available This article extends Alfred Chandler's seminal ideas about strategy and organizational structure, and it predicts the next stage of organizational evolution. Chandler described the evolution of vertical integration and diversification strategies for which the functional and multidivisional structures are appropriate. He also explained how the dominant structure at any point in time is a concatenation or accumulation of all previous strategies and structures. I extend Chandler's ideas by describing how early "structures" became "organizations" (people, rewards, management processes, etc.) and by discussing the more recent strategies of international expansion and customer focus. International expansion leads to organizations of three dimensions: functions, business units, and countries. Customer-focused strategies lead to four-dimensional organizations currently found in global firms such as IBM, Nike, and Procter & Gamble. I argue that the next major dimension along which organizations will evolve is emerging in firms which are experimenting with the use of "Big Data."

  8. Selective maintenance for multi-state series–parallel systems under economic dependence

    International Nuclear Information System (INIS)

    Dao, Cuong D.; Zuo, Ming J.; Pandey, Mayank

    2014-01-01

    This paper presents a study on selective maintenance for multi-state series–parallel systems with economically dependent components. In the selective maintenance problem, the maintenance manager has to decide which components should receive maintenance activities within a finite break between missions. All the system reliabilities in the next operating mission, the available budget and the maintenance time for each component from its current state to a higher state are taken into account in the optimization models. In addition, the components in series–parallel systems are considered to be economically dependent. Time and cost savings will be achieved when several components are simultaneously repaired in a selective maintenance strategy. As the number of repaired components increases, the saved time and cost will also increase, due to the sharing of set-up work between components and an additional reduction resulting from the repair of multiple identical components. Different optimization models are derived to find the best maintenance strategy for multi-state series–parallel systems. A genetic algorithm is used to solve the optimization models. The decision makers may select different components to be repaired to different working states based on the maintenance objective, resource availabilities, and the degree to which the repair times and costs of the components are dependent.
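The economic-dependence effect — one shared set-up charge plus a growing discount as more components are repaired together — can be made concrete with a small illustrative cost model (the functional form and values are assumptions, not the paper's formulation):

```python
def total_cost(costs, setup=5.0, extra_discount=0.05):
    """Cost of repairing a selected set of components that share one
    set-up: a single set-up charge instead of one per component, plus
    a further per-component discount that grows with the number of
    components repaired together."""
    if not costs:
        return 0.0
    k = len(costs)
    return setup + sum(costs) * (1 - extra_discount * (k - 1))
```

A genetic algorithm over subsets of components would call a function like this inside its fitness evaluation, which is why the dependence structure changes which selections are optimal.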

  9. Parallel Quasi Newton Algorithms for Large Scale Non Linear Unconstrained Optimization

    International Nuclear Information System (INIS)

    Rahman, M. A.; Basarudin, T.

    1997-01-01

    This paper discusses the Quasi-Newton (QN) method for solving non-linear unconstrained minimization problems. One important aspect of the QN method is the choice of the matrix Hk, which must be positive definite and satisfy the QN condition. Our interest here is in parallel QN methods suited to the solution of large-scale optimization problems. QN methods become less attractive for large-scale problems because of their storage and computational requirements; however, it is often the case that the Hessian is a sparse matrix. In this paper we include a mechanism for reducing the Hessian update while preserving the Hessian's properties. One major motivation for our research is that a QN method may be good at solving certain types of minimization problems, but its efficiency degenerates when it is applied to other categories of problems. For this reason, we use an algorithm containing several direction strategies which are processed in parallel. We attempt to parallelize the algorithm by exploring different search directions generated by various QN updates during the minimization process. Different line search strategies are employed simultaneously in the process of locating the minimum along each direction. The code of the algorithm is written in the Occam 2 language and runs on a transputer machine.
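The multi-direction idea can be sketched in miniature: explore several candidate directions "in parallel" (serially here), run a line search along each, and keep the best point found. This is a sketch of the strategy using steepest-descent and quasi-Newton directions, not the paper's algorithm:

```python
def multi_direction_step(f, grad, x, Hinv):
    """One iteration of a multi-direction strategy: build the steepest
    descent direction -g and the quasi-Newton direction -Hinv*g, run a
    backtracking line search along each (these searches are independent
    and could run on separate processors), and return the best point."""
    g = grad(x)
    dirs = [
        [-gi for gi in g],  # steepest descent
        [-sum(Hinv[i][j] * g[j] for j in range(len(g)))
         for i in range(len(g))],  # quasi-Newton direction
    ]
    best = x
    for d in dirs:
        t = 1.0
        while t > 1e-6:  # backtracking line search
            y = [xi + t * di for xi, di in zip(x, d)]
            if f(y) < f(best):
                best = y
                break
            t *= 0.5
    return best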

  10. “Word upon a Word”: Parallelism, Meaning, and Emergent Structure in Kalevala-meter Poetry

    Directory of Open Access Journals (Sweden)

    Lotte Tarkka

    2017-10-01

    Full Text Available This essay treats parallelism as a means for articulating and communicating meaning in performance. Rather than a merely stylistic and structural marker, parallelism is discussed as an expressive and cognitive strategy for the elaboration of notions and cognitive categories that are vital in the culture and central for the individual performers. The essay is based on an analysis of short forms of Kalevala-meter poetry from Viena Karelia: proverbs, aphorisms, and lyric poetry. In the complex system of genres using the same poetic meter parallelism transformed genres and contributed to the emergence of cohesive and finalized performances.

  11. Parallel family trees for transfer matrices in the Potts model

    Science.gov (United States)

    Navarro, Cristobal A.; Canfora, Fabrizio; Hitschfeld, Nancy; Navarro, Gonzalo

    2015-02-01

    The computational cost of transfer matrix methods for the Potts model is related to the question in how many ways can two layers of a lattice be connected? Answering the question leads to the generation of a combinatorial set of lattice configurations. This set defines the configuration space of the problem, and the smaller it is, the faster the transfer matrix can be computed. The configuration space of generic (q , v) transfer matrix methods for strips is in the order of the Catalan numbers, which grows asymptotically as O(4m) where m is the width of the strip. Other transfer matrix methods with a smaller configuration space indeed exist but they make assumptions on the temperature, number of spin states, or restrict the structure of the lattice. In this paper we propose a parallel algorithm that uses a sub-Catalan configuration space of O(3m) to build the generic (q , v) transfer matrix in a compressed form. The improvement is achieved by grouping the original set of Catalan configurations into a forest of family trees, in such a way that the solution to the problem is now computed by solving the root node of each family. As a result, the algorithm becomes exponentially faster than the Catalan approach while still highly parallel. The resulting matrix is stored in a compressed form using O(3m ×4m) of space, making numerical evaluation and decompression to be faster than evaluating the matrix in its O(4m ×4m) uncompressed form. Experimental results for different sizes of strip lattices show that the parallel family trees (PFT) strategy indeed runs exponentially faster than the Catalan Parallel Method (CPM), especially when dealing with dense transfer matrices. In terms of parallel performance, we report strong-scaling speedups of up to 5.7 × when running on an 8-core shared memory machine and 28 × for a 32-core cluster. The best balance of speedup and efficiency for the multi-core machine was achieved when using p = 4 processors, while for the cluster

  12. Extension parallel to the rift zone during segmented fault growth: application to the evolution of the NE Atlantic

    Directory of Open Access Journals (Sweden)

    A. Bubeck

    2017-11-01

    Full Text Available The mechanical interaction of propagating normal faults is known to influence the linkage geometry of first-order faults, and the development of second-order faults and fractures, which transfer displacement within relay zones. Here we use natural examples of growth faults from two active volcanic rift zones (Koa`e, island of Hawai`i, and Krafla, northern Iceland to illustrate the importance of horizontal-plane extension (heave gradients, and associated vertical axis rotations, in evolving continental rift systems. Second-order extension and extensional-shear faults within the relay zones variably resolve components of regional extension, and components of extension and/or shortening parallel to the rift zone, to accommodate the inherently three-dimensional (3-D strains associated with relay zone development and rotation. Such a configuration involves volume increase, which is accommodated at the surface by open fractures; in the subsurface this may be accommodated by veins or dikes oriented obliquely and normal to the rift axis. To consider the scalability of the effects of relay zone rotations, we compare the geometry and kinematics of fault and fracture sets in the Koa`e and Krafla rift zones with data from exhumed contemporaneous fault and dike systems developed within a > 5×104 km2 relay system that developed during formation of the NE Atlantic margins. Based on the findings presented here we propose a new conceptual model for the evolution of segmented continental rift basins on the NE Atlantic margins.

  13. Parallel processing and non-uniform grids in global air quality modeling

    NARCIS (Netherlands)

    Berkvens, P.J.F.; Bochev, Mikhail A.

    2002-01-01

    A large-scale global air quality model, running efficiently on a single vector processor, is enhanced to make more realistic and more long-term simulations feasible. Two strategies are combined: non-uniform grids and parallel processing. The communication through the hierarchy of non-uniform grids

  14. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    Science.gov (United States)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-05-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.

  15. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    Science.gov (United States)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-01-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.

  16. Vacuum Large Current Parallel Transfer Numerical Analysis

    Directory of Open Access Journals (Sweden)

    Enyuan Dong

    2014-01-01

    Full Text Available The stable operation and reliable breaking of large generator current are a difficult problem in power system. It can be solved successfully by the parallel interrupters and proper timing sequence with phase-control technology, in which the strategy of breaker’s control is decided by the time of both the first-opening phase and second-opening phase. The precise transfer current’s model can provide the proper timing sequence to break the generator circuit breaker. By analysis of the transfer current’s experiments and data, the real vacuum arc resistance and precise correctional model in the large transfer current’s process are obtained in this paper. The transfer time calculated by the correctional model of transfer current is very close to the actual transfer time. It can provide guidance for planning proper timing sequence and breaking the vacuum generator circuit breaker with the parallel interrupters.

  17. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.

  18. Population genomic scans suggest novel genes underlie convergent flowering time evolution in the introduced range of Arabidopsis thaliana.

    Science.gov (United States)

    Gould, Billie A; Stinchcombe, John R

    2017-01-01

    A long-standing question in evolutionary biology is whether the evolution of convergent phenotypes results from selection on the same heritable genetic components. Using whole-genome sequencing and genome scans, we tested whether the evolution of parallel longitudinal flowering time clines in the native and introduced ranges of Arabidopsis thaliana has a similar genetic basis. We found that common variants of large effect on flowering time in the native range do not appear to have been under recent strong selection in the introduced range. We identified a set of 38 new candidate genes that are putatively linked to the evolution of flowering time. A high degree of conditional neutrality of flowering time variants between the native and introduced range may preclude parallel evolution at the level of genes. Overall, neither gene pleiotropy nor available standing genetic variation appears to have restricted the evolution of flowering time to high-frequency variants from the native range or to known flowering time pathway genes. © 2016 John Wiley & Sons Ltd.

  19. Parallel and distributed processing in two SGBDS: A case study

    Directory of Open Access Journals (Sweden)

    Francisco Javier Moreno

    2017-04-01

    Full Text Available Context: One of the strategies for managing large volumes of data is distributed and parallel computing. Among the tools that allow applying these characteristics are some Data Base Management Systems (DBMS, such as Oracle, DB2, and SQL Server. Method: In this paper we present a case study where we evaluate the performance of an SQL query in two of these DBMS. The evaluation is done through various forms of data distribution in a computer network with different degrees of parallelism. Results: The tests of the SQL query evidenced the performance differences between the two DBMS analyzed. However, more thorough testing and a wider variety of queries are needed. Conclusions: The differences in performance between the two DBMSs analyzed show that when evaluating this aspect, it is necessary to consider the particularities of each DBMS and the degree of parallelism of the queries.

  20. Implementation of PHENIX trigger algorithms on massively parallel computers

    International Nuclear Information System (INIS)

    Petridis, A.N.; Wohn, F.K.

    1995-01-01

    The event selection requirements of contemporary high energy and nuclear physics experiments are met by the introduction of on-line trigger algorithms which identify potentially interesting events and reduce the data acquisition rate to levels that are manageable by the electronics. Such algorithms being parallel in nature can be simulated off-line using massively parallel computers. The PHENIX experiment intends to investigate the possible existence of a new phase of matter called the quark gluon plasma which has been theorized to have existed in very early stages of the evolution of the universe by studying collisions of heavy nuclei at ultra-relativistic energies. Such interactions can also reveal important information regarding the structure of the nucleus and mandate a thorough investigation of the simpler proton-nucleus collisions at the same energies. The complexity of PHENIX events and the need to analyze and also simulate them at rates similar to the data collection ones imposes enormous computation demands. This work is a first effort to implement PHENIX trigger algorithms on parallel computers and to study the feasibility of using such machines to run the complex programs necessary for the simulation of the PHENIX detector response. Fine and coarse grain approaches have been studied and evaluated. Depending on the application the performance of a massively parallel computer can be much better or much worse than that of a serial workstation. A comparison between single instruction and multiple instruction computers is also made and possible applications of the single instruction machines to high energy and nuclear physics experiments are outlined. copyright 1995 American Institute of Physics

  1. High performance parallel computing of flows in complex geometries: I. Methods

    International Nuclear Information System (INIS)

    Gourdain, N; Gicquel, L; Montagnac, M; Vermorel, O; Staffelbach, G; Garcia, M; Boussuge, J-F; Gazaix, M; Poinsot, T

    2009-01-01

    Efficient numerical tools coupled with high-performance computers, have become a key element of the design process in the fields of energy supply and transportation. However flow phenomena that occur in complex systems such as gas turbines and aircrafts are still not understood mainly because of the models that are needed. In fact, most computational fluid dynamics (CFD) predictions as found today in industry focus on a reduced or simplified version of the real system (such as a periodic sector) and are usually solved with a steady-state assumption. This paper shows how to overcome such barriers and how such a new challenge can be addressed by developing flow solvers running on high-end computing platforms, using thousands of computing cores. Parallel strategies used by modern flow solvers are discussed with particular emphases on mesh-partitioning, load balancing and communication. Two examples are used to illustrate these concepts: a multi-block structured code and an unstructured code. Parallel computing strategies used with both flow solvers are detailed and compared. This comparison indicates that mesh-partitioning and load balancing are more straightforward with unstructured grids than with multi-block structured meshes. However, the mesh-partitioning stage can be challenging for unstructured grids, mainly due to memory limitations of the newly developed massively parallel architectures. Finally, detailed investigations show that the impact of mesh-partitioning on the numerical CFD solutions, due to rounding errors and block splitting, may be of importance and should be accurately addressed before qualifying massively parallel CFD tools for a routine industrial use.

  2. Cosmological evolution of p-brane networks

    International Nuclear Information System (INIS)

    Sousa, L.; Avelino, P. P.

    2011-01-01

    In this paper we derive, directly from the Nambu-Goto action, the relevant components of the acceleration of cosmological featureless p-branes, extending previous analysis based on the field theory equations in the thin-brane limit. The component of the acceleration parallel to the velocity is at the core of the velocity-dependent one-scale model for the evolution of p-brane networks. We use this model to show that, in a decelerating expanding universe in which the p-branes are relevant cosmologically, interactions cannot lead to frustration, except for fine-tuned nonrelativistic networks with a dimensionless curvature parameter k<<1. We discuss the implications of our findings for the cosmological evolution of p-brane networks.

  3. MEvoLib v1.0: the first molecular evolution library for Python.

    Science.gov (United States)

    Álvarez-Jarreta, Jorge; Ruiz-Pesini, Eduardo

    2016-10-28

    Molecular evolution studies involve many different hard computational problems that are solved, in most cases, with heuristic algorithms providing a nearly optimal solution. Hence, diverse software tools exist for the different stages involved in a molecular evolution workflow. We present MEvoLib, the first molecular evolution library for Python, providing a framework to work with different tools and methods involved in the common tasks of molecular evolution workflows. In contrast with existing bioinformatics libraries, MEvoLib is focused on the stages involved in molecular evolution studies, enclosing the set of tools with a common purpose in a single high-level interface with fast access to their frequent parameterizations. Gene clustering from partial or complete sequences has been improved with a new method that integrates accessible external information (e.g. GenBank's features data). Moreover, MEvoLib adjusts the fetching process from NCBI databases to optimize the download bandwidth usage. In addition, it has been implemented using parallelization techniques to cope with even large-case scenarios. MEvoLib is the first library for Python designed to facilitate molecular evolution research for both expert and novice users. Its unique interface for each common task comprises several tools with their most used parameterizations. It also includes a method to take advantage of biological knowledge to improve the gene partition of sequence datasets. Additionally, its implementation incorporates parallelization techniques to reduce computational costs when handling very large input datasets.

  4. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with the efforts made. This paper aims to make a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  5. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  6. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.

  7. Fusing enacted and expected mimicry generates a winning strategy that promotes the evolution of cooperation.

    Science.gov (United States)

    Fischer, Ilan; Frid, Alex; Goerg, Sebastian J; Levin, Simon A; Rubenstein, Daniel I; Selten, Reinhard

    2013-06-18

    Although cooperation and trust are essential features for the development of prosperous populations, they also put cooperating individuals at risk for exploitation and abuse. Empirical and theoretical evidence suggests that the solution to the problem resides in the practice of mimicry and imitation, the expectation of the opponent's mimicry and the reliance on similarity indices. Here we fuse the principles of enacted and expected mimicry and condition their application on two similarity indices to produce a model of mimicry and relative similarity. Testing the model in computer simulations of behavioral niches, populated with agents that enact various strategies and learning algorithms, shows how mimicry and relative similarity outperforms all the opponent strategies it was tested against, pushes noncooperative opponents toward extinction, and promotes the development of cooperative populations. The proposed model sheds light on the evolution of cooperation and provides a blueprint for intentional induction of cooperation within and among populations. It is suggested that reducing conflict intensities among human populations necessitates (i) the instigation of social initiatives that increase the perception of similarity among opponents and (ii) an efficient lowering of the similarity threshold of the interaction, the minimal level of similarity that makes cooperation advisable.
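The interplay of enacted mimicry, expected mimicry and a similarity threshold can be sketched loosely in a repeated prisoner's dilemma. The payoff values, the 0.5 threshold and the decision rule below are illustrative assumptions, not the authors' exact model:

```python
def play(mimic_threshold, opponent_moves):
    """A toy 'mimicry and relative similarity' player: it opens with
    cooperation, then copies the opponent's previous move whenever the
    observed similarity (fraction of past rounds in which both sides
    chose the same move) reaches the threshold, and defects otherwise.
    Returns this player's total payoff.  Assumed payoffs: T=5, R=3,
    P=1, S=0 (standard prisoner's dilemma ordering)."""
    payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    matches, total, score = 0, 0, 0
    my_move = "C"
    for opp_move in opponent_moves:
        score += payoff[(my_move, opp_move)]
        matches += my_move == opp_move
        total += 1
        similarity = matches / total          # the relative-similarity index
        my_move = opp_move if similarity >= mimic_threshold else "D"
    return score

print(play(0.5, "C" * 10))  # mutual cooperation: 3 points per round -> 30
print(play(0.5, "D" * 10))  # exploited once, then mutual defection -> 9
```

Against a cooperator the player locks into cooperation; against a defector it loses only the opening round, which is the qualitative behavior the abstract attributes to the fused strategy.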

  8. The Evolution of Exploitation Strategies by Myrmecophiles

    DEFF Research Database (Denmark)

    Schär, Sämi

    than outside ant nests (M. rubra) as well. This fungus can kill ant associated lycaenid larvae, justifying the assumption that these benefit from the entomopathogen poor environment of ant nests. This could explain why natural selection may act in favour of this strategy. In the third chapter I......Myrmecophiles are animals which have evolved to live in the nests of ants. This life history strategy appears in animals as different as insects, spiders, snails, crustaceans and even snakes. Myrmecophiles are very speciose with estimates of up to 100'000 species, which raises the question why...... this strategy has evolved so frequently and is maintained by natural selection. The type of association between Myrmecophiles and ants ranges from mutualistic through to parasitic. These types of symbioses can also be found between and within species of ants. Ant associations can therefore be broadly...

  9. CBCT-guided evolutive library for cervical adaptive IMRT.

    Science.gov (United States)

    Rigaud, Bastien; Simon, Antoine; Gobeli, Maxime; Lafond, Caroline; Leseur, Julie; Barateau, Anais; Jaksic, Nicolas; Castelli, Joël; Williaume, Danièle; Haigron, Pascal; De Crevoisier, Renaud

    2018-04-01

    In the context of adaptive radiation therapy (ART) for locally advanced cervical carcinoma (LACC), this study proposed an original cone-beam computed tomography (CBCT)-guided "Evolutive library" and evaluated it against four other known radiotherapy (RT) strategies. For 20 patients who underwent intensity-modulated radiation therapy (IMRT) for LACC, three planning CTs [with empty (EB), intermediate (IB), and full (FB) bladder volumes], a CT scan at 20 Gy and bi-weekly CBCTs for 5 weeks were performed. Five RT strategies were simulated for each patient: "Standard RT" was based on one IB planning CT; "internal target volume (ITV)-based RT" was an ITV built from the three planning CTs; "RT with one mid-treatment replanning (MidTtReplan)" corresponded to the standard RT with a replanning at 20 Gy; "Pretreatment library ART" using a planning library based on the three planning CTs; and the "Evolutive library ART", which was the "Pretreatment library ART" strategy enriched by including some CBCT anatomies into the library when the daily clinical target volume (CTV) shape differed from the ones in the library. Two planning target volume (PTV) margins of 7 and 10 mm were evaluated. All the strategies were geometrically compared in terms of the percentage of coverage by the PTV, for the CTV and the organs at risk (OAR) delineated on the CBCT. Inadequate coverage of the CTV and OARs by the PTV was also assessed using deformable image registration. The cumulated dose distributions of each strategy were likewise estimated and compared for one patient. The "Evolutive library ART" strategy involved a number of added CBCTs: 0 for 55%; 1 for 30%; 2 for 5%; and 3 for 10% of patients. Compared with the other four, this strategy provided the highest CTV geometric coverage by the PTV, with a mean (min-max) coverage of 98.5% (96.4-100) for 10 mm margins and 96.2% (93.0-99.7) for 7 mm margins (P < 0.05). Moreover, this strategy significantly decreased the geometric coverage of the bowel

  10. Microstructure and microtexture evolutions of deformed oxide layers on a hot-rolled microalloyed steel

    International Nuclear Information System (INIS)

    Yu, Xianglong; Jiang, Zhengyi; Zhao, Jingwei; Wei, Dongbin; Zhou, Cunlong; Huang, Qingxue

    2015-01-01

    Highlights: • Microtexture development of deformed oxide layers is investigated. • Magnetite shares the {0 0 1} fibre texture with wustite. • Hematite develops the {0 0 0 1} basal fibre parallel to the oxide growth. • Stress relief and ion vacancy diffusion mechanism for magnetite seam. - Abstract: Electron backscatter diffraction (EBSD) analysis has been presented to investigate the microstructure and microtexture evolutions of the deformed oxide scale formed on a microalloyed steel during hot rolling and accelerated cooling. Magnetite and wustite in the oxide layers share a strong {0 0 1} and a weak {1 1 0} fibre texture parallel to the oxide growth direction. Trigonal hematite develops the {0 0 0 1} basal fibre parallel to the crystallographic plane {1 1 1} in magnetite. Taylor factor estimates have been conducted to elucidate the microtexture evolution. The fine-grained magnetite seam adjacent to the substrate is governed by a stress relief and ion vacancy diffusion mechanism

  11. High performance computing of density matrix renormalization group method for 2-dimensional model. Parallelization strategy toward peta computing

    International Nuclear Information System (INIS)

    Yamada, Susumu; Igarashi, Ryo; Machida, Masahiko; Imamura, Toshiyuki; Okumura, Masahiko; Onishi, Hiroaki

    2010-01-01

    We parallelize the density matrix renormalization group (DMRG) method, which is a ground-state solver for one-dimensional quantum lattice systems. The parallelization allows us to extend the applicable range of the DMRG to n-leg ladders, i.e., quasi-two-dimensional cases. Such an extension is expected to bring about breakthroughs in, e.g., quantum physics, chemistry, and nano-engineering. However, the straightforward parallelization requires all-to-all communications between all processes, which are unsuitable for multi-core systems, the mainstream of current parallel computers. Therefore, we optimize the all-to-all communications in two steps. The first is the elimination of communications between all processes by rearranging the data distribution while keeping the amount of communicated data unchanged. The second is the avoidance of communication conflicts by rescheduling the calculation and the communication. We evaluate the performance of the DMRG method on multi-core supercomputers and confirm that our two-step tuning is quite effective. (author)
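The second tuning step, avoiding communication conflicts by rescheduling, can be illustrated generically: a monolithic all-to-all among p processes can be organized as p-1 conflict-free rounds of disjoint pairs using the classic round-robin "circle" method. This is a textbook scheduling sketch, not the authors' actual implementation:

```python
def pairwise_rounds(p):
    """Schedule an all-to-all exchange among p processes (p even) as
    p-1 rounds of disjoint pairs (round-robin 'circle' method), so that
    no process takes part in two exchanges within the same round."""
    procs = list(range(p))
    rounds = []
    for _ in range(p - 1):
        rounds.append([tuple(sorted((procs[i], procs[p - 1 - i])))
                       for i in range(p // 2)])
        # rotate every position except the first (the 'fixed' process)
        procs = [procs[0], procs[-1]] + procs[1:-1]
    return rounds

for rnd in pairwise_rounds(4):
    print(rnd)
```

Each pair of processes meets exactly once across the rounds, so the total data volume is unchanged while the contention of a simultaneous all-to-all is removed.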

  12. Development and benchmark verification of a parallelized Monte Carlo burnup calculation program MCBMPI

    International Nuclear Information System (INIS)

    Yang Wankui; Liu Yaoguang; Ma Jimin; Yang Xin; Wang Guanbo

    2014-01-01

    MCBMPI, a parallelized burnup calculation program, was developed. The program is modularized. The neutron transport calculation module employs the parallelized MCNP5 program MCNP5MPI, and the burnup calculation module employs ORIGEN2, with an MPI parallel zone decomposition strategy. The program system only consists of MCNP5MPI and an interface subroutine. The interface subroutine achieves three main functions, i.e. zone decomposition, nuclide transferring and decaying, and data exchange with MCNP5MPI. Also, the program was verified with the Pressurized Water Reactor (PWR) cell burnup benchmark; the results showed that the program is applicable to burnup calculations of multiple zones, and that the computational efficiency can be significantly improved with the development of computer hardware. (authors)
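The zone decomposition idea rests on the fact that, between transport solutions, each zone's depletion step is independent of the others. The sketch below illustrates this with a two-nuclide analytic decay step distributed over a thread pool standing in for MPI ranks; the decay constants and zone inventories are invented, and the function names are not MCBMPI's:

```python
import math
from concurrent.futures import ThreadPoolExecutor

L1, L2 = 0.05, 0.01   # decay constants (1/s), assumed values

def decay_zone(zone, dt):
    """Analytic two-nuclide Bateman step for the chain N1 -> N2 -> (out)."""
    n1, n2 = zone
    n1_new = n1 * math.exp(-L1 * dt)
    n2_new = (n1 * L1 / (L2 - L1) * (math.exp(-L1 * dt) - math.exp(-L2 * dt))
              + n2 * math.exp(-L2 * dt))
    return n1_new, n2_new

def burnup_step(zones, dt, nworkers=4):
    """Zone decomposition: every zone evolves independently, so the work
    can be scattered across workers (threads here, MPI ranks in a real
    code) and gathered afterwards in the original order."""
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        return list(pool.map(lambda z: decay_zone(z, dt), zones))

zones = [(1000.0, 0.0), (500.0, 20.0), (0.0, 50.0)]
parallel = burnup_step(zones, dt=10.0)
serial = [decay_zone(z, 10.0) for z in zones]
print(parallel == serial)  # same physics regardless of decomposition -> True
```

In the real program the transport module recomputes zone fluxes between such depletion steps, which is what couples the zones and forces the data exchange step described above.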

  13. Sympatric parallel diversification of major oak clades in the Americas and the origins of Mexican species diversity.

    Science.gov (United States)

    Hipp, Andrew L; Manos, Paul S; González-Rodríguez, Antonio; Hahn, Marlene; Kaproth, Matthew; McVay, John D; Avalos, Susana Valencia; Cavender-Bares, Jeannine

    2018-01-01

    Oaks (Quercus, Fagaceae) are the dominant tree genus of North America in species number and biomass, and Mexico is a global center of oak diversity. Understanding the origins of oak diversity is key to understanding biodiversity of northern temperate forests. A phylogenetic study of biogeography, niche evolution and diversification patterns in Quercus was performed using 300 samples, 146 species. Next-generation sequencing data were generated using the restriction-site associated DNA (RAD-seq) method. A time-calibrated maximum likelihood phylogeny was inferred and analyzed with bioclimatic, soils, and leaf habit data to reconstruct the biogeographic and evolutionary history of the American oaks. Our highly resolved phylogeny demonstrates sympatric parallel diversification in climatic niche, leaf habit, and diversification rates. The two major American oak clades arose in what is now the boreal zone and radiated, in parallel, from eastern North America into Mexico and Central America. Oaks adapted rapidly to niche transitions. The Mexican oaks are particularly numerous, not because Mexico is a center of origin, but because of high rates of lineage diversification associated with high rates of evolution along moisture gradients and between the evergreen and deciduous leaf habits. Sympatric parallel diversification in the oaks has shaped the diversity of North American forests. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  14. Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation

    Science.gov (United States)

    Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.

    1996-01-01

    We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.
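The overlapping Schwarz idea at the heart of the preconditioner can be shown in its simplest setting: alternating (multiplicative) Schwarz for the 1D Poisson problem with two overlapping subdomains, each solved directly by the Thomas algorithm. This is a toy model of the Schwarz component only, not of the full Newton-Krylov-Schwarz algorithm; the grid size and overlap are invented:

```python
def solve_tridiag(lo, hi, h, f, u):
    """Direct solve of -u'' = f on grid points lo..hi, taking Dirichlet
    data at lo-1 and hi+1 from the current iterate u (Thomas algorithm)."""
    n = hi - lo + 1
    a, b, c = -1.0 / h**2, 2.0 / h**2, -1.0 / h**2
    d = [f] * n
    d[0] += u[lo - 1] / h**2          # known left boundary value
    d[-1] += u[hi + 1] / h**2         # known right boundary value
    cp, dp = [0.0] * n, [0.0] * n     # forward elimination
    cp[0], dp[0] = c / b, d[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (d[i] - a * dp[i - 1]) / m
    x = [0.0] * n                     # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    for i in range(n):
        u[lo + i] = x[i]

def schwarz(nsweeps=60, n=19):
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)               # u[0] = u[n+1] = 0 (physical BCs)
    for _ in range(nsweeps):          # alternate over overlapping subdomains
        solve_tridiag(1, 12, h, 2.0, u)   # left subdomain, x in (0, 0.65)
        solve_tridiag(8, 19, h, 2.0, u)   # right subdomain, x in (0.35, 1)
    return u, h

u, h = schwarz()
# -u'' = 2 with u(0) = u(1) = 0 has exact solution u = x(1 - x), which
# the 3-point stencil reproduces exactly, so the Schwarz error is visible:
err = max(abs(u[i] - (i * h) * (1 - i * h)) for i in range(len(u)))
print(err)
```

Each sweep contracts the error by a factor set by the overlap, which is the same mechanism that makes overlap size a tuning parameter in the study above.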

  15. Cell verification of parallel burnup calculation program MCBMPI based on MPI

    International Nuclear Information System (INIS)

    Yang Wankui; Liu Yaoguang; Ma Jimin; Wang Guanbo; Yang Xin; She Ding

    2014-01-01

    The parallel burnup calculation program MCBMPI was developed. The program was modularized. The parallel MCNP5 program MCNP5MPI was employed as the neutron transport calculation module, and a composite of three solution methods was used to solve the burnup equation, i.e. the matrix exponential technique, the TTA analytical solution, and Gauss-Seidel iteration. An MPI parallel zone decomposition strategy was adopted in the program. The program system only consists of MCNP5MPI and a burnup subroutine. The latter achieves three main functions, i.e. zone decomposition, nuclide transferring and decaying, and data exchange with MCNP5MPI. Also, the program was verified with the pressurized water reactor (PWR) cell burnup benchmark. The results show that the program is applicable to burnup calculations of multiple zones, and that the computational efficiency can be significantly improved with the development of computer hardware. (authors)

  16. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  17. Evolution 2.0

    DEFF Research Database (Denmark)

    Andersen, Casper; Bek-Thomsen, Jakob; Clasen, Mathias

    2013-01-01

    Studies in the history of science and education have documented that the reception and understanding of evolutionary theory is highly contingent on local factors such as school systems, cultural traditions, religious beliefs, and language. This has important implications for teaching evolution...... audiences readily available. As more and more schools require teachers to use low cost or free web-based materials, in the research community we need to take seriously how to facilitate that demand in communication strategies on evolution. This article addresses this challenge by presenting the learning...

  18. DC-Analyzer-facilitated combinatorial strategy for rapid directed evolution of functional enzymes with multiple mutagenesis sites.

    Science.gov (United States)

    Wang, Xiong; Zheng, Kai; Zheng, Huayu; Nie, Hongli; Yang, Zujun; Tang, Lixia

    2014-12-20

    Iterative saturation mutagenesis (ISM) has been shown to be a powerful method for directed evolution. In this study, the approach was modified (termed M-ISM) by combining the single-site saturation mutagenesis method with a DC-Analyzer-facilitated combinatorial strategy, aiming to evolve novel biocatalysts efficiently in cases where multiple sites are targeted simultaneously. Initially, all target sites were explored individually by constructing single-site saturation mutagenesis libraries. Next, the top two to four variants in each library were selected and combined using the DC-Analyzer-facilitated combinatorial strategy. In addition to site-saturation mutagenesis, iterative saturation mutagenesis also needed to be performed. The advantages of M-ISM over ISM are that the screening effort is greatly reduced and the entire M-ISM procedure is less time-consuming. The M-ISM strategy was successfully applied to the randomization of halohydrin dehalogenase from Agrobacterium radiobacter AD1 (HheC) when five interesting sites were targeted simultaneously. After screening 900 clones in total, six positive mutants were obtained. These mutants exhibited 4.0- to 9.3-fold higher k(cat) values than did the wild-type HheC toward 1,3-dichloro-2-propanol. In contrast, with the ISM strategy, the best hit showed a 5.9-fold higher k(cat) value toward 1,3-DCP than the wild-type HheC, and was obtained only after screening 4000 clones over four rounds of mutagenesis. Therefore, M-ISM could serve as a simple and efficient version of ISM for the randomization of target genes with multiple positions of interest.
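The combinatorial step of M-ISM, screening only combinations of the top variants from each single-site library, can be sketched numerically. The per-site scores and the multiplicative fitness model below are invented for illustration; only the shortlist-and-combine logic mirrors the strategy described above:

```python
from itertools import product

def combine_top_variants(site_libraries, fitness, top=3):
    """M-ISM-style combinatorial step: keep only the best few variants
    from each single-site saturation library, then screen all of their
    combinations instead of the full sequence space."""
    shortlists = [sorted(lib, key=lib.get, reverse=True)[:top]
                  for lib in site_libraries]
    best = max(product(*shortlists), key=fitness)
    return best, fitness(best)

# Hypothetical per-site single-mutant scores (fold improvement over wild type):
site_libraries = [
    {"A": 1.0, "S": 2.1, "T": 1.8, "G": 0.4},
    {"L": 1.0, "F": 1.9, "W": 1.5, "P": 0.2},
    {"D": 1.0, "E": 1.3, "N": 2.4, "Q": 0.9},
]

def fitness(combo):
    # toy multiplicative model of combined mutational effects (no epistasis)
    score = 1.0
    for site, residue in enumerate(combo):
        score *= site_libraries[site][residue]
    return score

best, score = combine_top_variants(site_libraries, fitness)
print(best, round(score, 2))  # ('S', 'F', 'N') 9.58
```

With three sites of four variants each, the shortlist reduces the screen from 64 to 27 combinations; with five sites and twenty residues the reduction is far larger, which is the source of the smaller screening effort reported above.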

  19. Agent-based models of strategies for the emergence and evolution of grammatical agreement.

    Directory of Open Access Journals (Sweden)

    Katrien Beuls

    Full Text Available Grammatical agreement means that features associated with one linguistic unit (for example, number or gender) become associated with another unit and then possibly overtly expressed, typically with morphological markers. It is one of the key mechanisms used in many languages to show that certain linguistic units within an utterance grammatically depend on each other. Agreement systems are puzzling because they can be highly complex in terms of what features they use and how they are expressed. Moreover, agreement systems have undergone considerable change in the historical evolution of languages. This article presents language game models with populations of agents in order to find out for what reasons and by what cultural processes and cognitive strategies agreement systems arise. It demonstrates that agreement systems are motivated by the need to minimize combinatorial search and semantic ambiguity, and it shows, for the first time, that once a population of agents adopts a strategy to invent, acquire and coordinate meaningful markers through social learning, linguistic self-organization leads to the spontaneous emergence and cultural transmission of an agreement system. The article also demonstrates how attested grammaticalization phenomena, such as phonetic reduction and conventionalized use of agreement markers, happen as a side effect of additional economizing principles, in particular minimization of articulatory effort and reduction of the marker inventory. More generally, the article illustrates a novel approach for studying how key features of human languages might emerge.

  20. Flexibility and Performance of Parallel File Systems

    Science.gov (United States)

    Kotz, David; Nieuwejaar, Nils

    1996-01-01

    As we gain experience with parallel file systems, it becomes increasingly clear that a single solution does not suit all applications. For example, it appears to be impossible to find a single appropriate interface, caching policy, file structure, or disk-management strategy. Furthermore, the proliferation of file-system interfaces and abstractions makes applications difficult to port. We propose that the traditional functionality of parallel file systems be separated into two components: a fixed core that is standard on all platforms, encapsulating only primitive abstractions and interfaces, and a set of high-level libraries to provide a variety of abstractions and application-programmer interfaces (APIs). We present our current and next-generation file systems as examples of this structure. Their features, such as a three-dimensional file structure, strided read and write interfaces, and I/O-node programs, are specifically designed with the flexibility and performance necessary to support a wide range of applications.

  1. Parallel force assay for protein-protein interactions.

    Science.gov (United States)

    Aschenbrenner, Daniela; Pippig, Diana A; Klamecka, Kamila; Limmer, Katja; Leonhardt, Heinrich; Gaub, Hermann E

    2014-01-01

    Quantitative proteome research is greatly promoted by high-resolution parallel format assays. A characterization of protein complexes based on binding forces offers an unparalleled dynamic range and allows for the effective discrimination of non-specific interactions. Here we present a DNA-based Molecular Force Assay to quantify protein-protein interactions, namely the bond between different variants of GFP and GFP-binding nanobodies. We present different strategies to adjust the maximum sensitivity window of the assay by influencing the binding strength of the DNA reference duplexes. The binding of the nanobody Enhancer to the different GFP constructs is compared at high sensitivity of the assay. Whereas the binding strength to wild type and enhanced GFP are equal within experimental error, stronger binding to superfolder GFP is observed. This difference in binding strength is attributed to alterations in the amino acids that form contacts according to the crystal structure of the initial wild type GFP-Enhancer complex. Moreover, we outline the potential for large-scale parallelization of the assay.

  2. Chromosomal Evolution in Chiroptera.

    Science.gov (United States)

    Sotero-Caio, Cibele G; Baker, Robert J; Volleth, Marianne

    2017-10-13

    Chiroptera is the second largest order among mammals, with over 1300 species in 21 extant families. The group is extremely diverse in several aspects of its natural history, including dietary strategies, ecology, behavior and morphology. Bat genomes show ample chromosome diversity (from 2n = 14 to 62). As with other mammalian orders, Chiroptera is characterized by clades with low, moderate and extreme chromosomal change. In this article, we will discuss trends of karyotypic evolution within distinct bat lineages (especially Phyllostomidae, Hipposideridae and Rhinolophidae), focusing on two perspectives: the evolution of genome architecture and modes of chromosomal evolution, and the use of chromosome data to resolve taxonomic problems.

  3. Chromosomal Evolution in Chiroptera

    Directory of Open Access Journals (Sweden)

    Cibele G. Sotero-Caio

    2017-10-01

    Full Text Available Chiroptera is the second largest order among mammals, with over 1300 species in 21 extant families. The group is extremely diverse in several aspects of its natural history, including dietary strategies, ecology, behavior and morphology. Bat genomes show ample chromosome diversity (from 2n = 14 to 62). As with other mammalian orders, Chiroptera is characterized by clades with low, moderate and extreme chromosomal change. In this article, we will discuss trends of karyotypic evolution within distinct bat lineages (especially Phyllostomidae, Hipposideridae and Rhinolophidae), focusing on two perspectives: the evolution of genome architecture and modes of chromosomal evolution, and the use of chromosome data to resolve taxonomic problems.

  4. Modeling SOL evolution during disruptions

    International Nuclear Information System (INIS)

    Rognlien, T.D.; Cohen, R.H.; Crotinger, J.A.

    1996-01-01

    We present the status of our models and transport simulations of the 2-D evolution of the scrape-off layer (SOL) during tokamak disruptions. This evolution is important for several reasons: it determines how the power from the core plasma is distributed on material surfaces, how impurities from those surfaces or from gas injection migrate back to the core region, and what properties the SOL has for carrying halo currents. We simulate this plasma in a time-dependent fashion using the SOL transport code UEDGE. This code models the SOL plasma using fluid equations for plasma density, parallel momentum (along the magnetic field), electron energy, ion energy, and neutral gas density. A multispecies model is used to follow the densities of the different charge states of impurities. The parallel transport is classical but with kinetic modifications; these are presently treated by flux limits, but we have initiated more sophisticated models giving the correct long mean-free-path limit. The cross-field transport is anomalous, and one of the results of this work is to determine reasonable values to characterize disruptions. Our primary focus is on the initial thermal quench phase, when most of the core energy is lost but the total current is maintained. The impact of edge currents on the MHD equilibrium will be discussed

  5. Improving matrix-vector product performance and multi-level preconditioning for the parallel PCG package

    Energy Technology Data Exchange (ETDEWEB)

    McLay, R.T.; Carey, G.F.

    1996-12-31

    In this study we consider the parallel solution of sparse linear systems arising from discretized PDEs. As part of our continuing work on our parallel PCG Solver package, we have made improvements in two areas. The first is improving the performance of the matrix-vector product. Here, on regular finite-difference grids, we are able to use the cache memory more efficiently for smaller domains or where there are multiple degrees of freedom. The second problem of interest in the present work is the construction of preconditioners in the context of the parallel PCG solver we are developing. Here the problem is partitioned over a set of processor subdomains and the matrix-vector product for PCG is carried out in parallel for overlapping grid subblocks. For problems of scaled speedup, the actual rate of convergence of the unpreconditioned system deteriorates as the mesh is refined. Multigrid and subdomain strategies provide a logical approach to resolving the problem. We consider the parallel trade-offs between communication and computation and provide a complexity analysis of a representative algorithm. Some preliminary calculations using the parallel package and comparisons with other preconditioners are provided together with parallel performance results.
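The PCG iteration itself, with the simplest (Jacobi) preconditioner, can be written compactly; in the parallel package the matrix-vector product and the inner products would be the distributed operations. A minimal serial sketch, with a toy 1D Laplacian standing in for a discretized PDE:

```python
def pcg(A, b, tol=1e-10, maxiter=200):
    """Preconditioned conjugate gradients with a Jacobi (diagonal)
    preconditioner.  A is a dense SPD matrix as a list of rows; in a
    parallel code the matvec and the dot products are the operations
    carried out across processor subdomains."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                                  # residual of the zero guess
    z = [r[i] / A[i][i] for i in range(n)]    # apply M^{-1} = diag(A)^{-1}
    p = z[:]
    rz = dot(r, z)
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rz / dot(p, Ap)
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = dot(r, z)
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# 1D Laplacian (SPD, tridiagonal) with a unit load:
n = 8
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0 for j in range(n)]
     for i in range(n)]
b = [1.0] * n
x = pcg(A, b)
residual = max(abs(b[i] - sum(A[i][j] * x[j] for j in range(n))) for i in range(n))
print(residual)  # tiny; the iteration stops once ||r|| drops below tol
```

The two inner products and the matvec per iteration are exactly the communication points whose trade-offs the complexity analysis above is concerned with.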

  6. Genome evolution in an ancient bacteria-ant symbiosis: parallel gene loss among Blochmannia spanning the origin of the ant tribe Camponotini

    Directory of Open Access Journals (Sweden)

    Laura E. Williams

    2015-04-01

    Full Text Available Stable associations between bacterial endosymbionts and insect hosts provide opportunities to explore genome evolution in the context of established mutualisms and assess the roles of selection and genetic drift across host lineages and habitats. Blochmannia, obligate endosymbionts of ants of the tribe Camponotini, have coevolved with their ant hosts for ∼40 MY. To investigate early events in Blochmannia genome evolution across this ant host tribe, we sequenced Blochmannia from two divergent host lineages, Colobopsis obliquus and Polyrhachis turneri, and compared them with four published genomes from Blochmannia of Camponotus sensu stricto. Reconstructed gene content of the last common ancestor (LCA) of these six Blochmannia genomes is reduced (690 protein-coding genes), consistent with rapid gene loss soon after establishment of the symbiosis. Differential gene loss among Blochmannia lineages has affected cellular functions and metabolic pathways, including DNA replication and repair, vitamin biosynthesis and membrane proteins. Blochmannia of P. turneri (i.e., B. turneri) encodes an intact DnaA chromosomal replication initiation protein, demonstrating that loss of dnaA was not essential for establishment of the symbiosis. Based on gene content, B. obliquus and B. turneri are unable to provision hosts with riboflavin. Of the six sequenced Blochmannia, B. obliquus is the earliest diverging lineage (i.e., the sister group of the other Blochmannia sampled) and encodes the fewest protein-coding genes and the most pseudogenes. We identified 55 genes involved in parallel gene loss, including glutamine synthetase, which may participate in nitrogen recycling. Pathways for biosynthesis of coenzyme A, terpenoids and riboflavin were lost in multiple lineages, suggesting relaxed selection on the pathway after inactivation of one component. Analysis of Illumina read datasets did not detect evidence of plasmids encoding missing functions, nor the presence of

  7. Multi-objective based on parallel vector evaluated particle swarm optimization for optimal steady-state performance of power systems

    DEFF Research Database (Denmark)

    Vlachogiannis, Ioannis (John); Lee, K Y

    2009-01-01

In this paper the state-of-the-art extended particle swarm optimization (PSO) methods for solving multi-objective optimization problems are presented. Among these, we emphasize the co-evolution technique of the parallel vector evaluated PSO (VEPSO), analysed and applied in a multi-objective problem...

  8. Time-dependent deterministic transport on parallel architectures using PARTISN

    International Nuclear Information System (INIS)

    Alcouffe, R.E.; Baker, R.S.

    1998-01-01

In addition to the ability to solve the static transport equation, the authors have also incorporated time dependence into the parallel SN code PARTISN. Using a semi-implicit scheme, PARTISN is capable of performing time-dependent calculations for both fissioning and pure source-driven problems. They have applied this to various types of problems such as shielding and prompt fission experiments. This paper describes the form of the time-dependent equations implemented, their solution strategies in PARTISN including iteration acceleration, and the strategies used for time-step control. Results are presented for an iron-water shielding calculation and a criticality excursion in a uranium solution configuration
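The semi-implicit time stepping described in this record can be illustrated on a small stand-in problem. The sketch below is not PARTISN's discretization; it applies a backward-Euler step to a generic linear, source-driven system dy/dt = A y + q (the matrix A and source q are made up for illustration), which shows why implicit updates remain stable at large time steps and relax to the steady state.

```python
import numpy as np

def implicit_euler_step(A, q, y, dt):
    """One backward-Euler step for dy/dt = A y + q:
    solve (I - dt*A) y_new = y + dt*q."""
    n = len(y)
    return np.linalg.solve(np.eye(n) - dt * A, y + dt * q)

# Toy source-driven problem that decays toward the steady state y* = -A^{-1} q.
A = np.array([[-1.0, 0.2],
              [0.1, -0.5]])
q = np.array([1.0, 0.3])
y = np.zeros(2)
for _ in range(200):
    y = implicit_euler_step(A, q, y, dt=0.1)
steady = np.linalg.solve(-A, q)
```

Each step requires a linear solve, which is where iteration acceleration and parallel solvers matter in a production transport code.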

  9. Flexible operation of parallel grid-connecting converters under unbalanced grid voltage

    DEFF Research Database (Denmark)

    Lu, Jinghang; Savaghebi, Mehdi; Guerrero, Josep M.

    2017-01-01

    -link voltage ripple, and overloading. Moreover, under grid voltage unbalance, the active power delivery ability is decreased due to the converter's current rating limitation. In this paper, a thorough study on the current limitation of the grid-connecting converter under grid voltage unbalance is conducted....... In addition, based on the principle that total output active power should be oscillation free, a coordinated control strategy is proposed for the parallel grid-connecting converters. The case study has been conducted to demonstrate the effectiveness of this proposed control strategy....

  10. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  11. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  12. Cultural evolution as a nonstationary stochastic process

    DEFF Research Database (Denmark)

    Nicholson, Arwen; Sibani, Paolo

    2016-01-01

We present an individual-based model of cultural evolution, where interacting agents are coded by binary strings standing for strategies for action, blueprints for products, or attitudes and beliefs. The model is patterned on an established model of biological evolution, the Tangled Nature Model...... (TNM), where a “tangle” of interactions between agents determines their reproductive success. In addition, our agents also have the ability to copy part of each other's strategy, a feature inspired by the Axelrod model of cultural diversity. Unlike the latter, but similarly to the TNM, the model...
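The strategy-copying move borrowed from the Axelrod model can be sketched in a few lines. This is an illustrative reduction, not the TNM implementation: agents are binary strings, and one agent copies a random contiguous segment of another's string; the segment length and encoding are assumptions.

```python
import random

def copy_segment(learner, teacher, seg_len, rng):
    """Copy a random contiguous segment of the teacher's binary
    strategy string into the learner's (both the same length)."""
    assert len(learner) == len(teacher)
    start = rng.randrange(len(learner) - seg_len + 1)
    return learner[:start] + teacher[start:start + seg_len] + learner[start + seg_len:]

rng = random.Random(42)
a = "0000000000"   # learner's strategy
b = "1111111111"   # teacher's strategy
c = copy_segment(a, b, seg_len=4, rng=rng)
```

In a full model this move would be interleaved with reproduction and selection driven by the interaction tangle.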

  13. Energy and fuel efficient parallel mild hybrids for urban roads

    International Nuclear Information System (INIS)

    Babu, Ajay; Ashok, S.

    2016-01-01

Highlights: • Energy and fuel savings depend on battery charge variations and the vehicle speed parameters. • Indian urban conditions provide a lot of scope for energy and fuel savings in mild hybrids. • The energy saving strategy has lower payback periods than the fuel saving one in mild hybrids. • Sensitivity to parameter variations is the least for the energy saving strategy in a mild hybrid. - Abstract: Fuel economy improvements and battery energy savings can promote the adoption of parallel mild hybrids for urban driving conditions. The aim of this study is to establish these benefits through two operating modes: an energy saving mode and a fuel saving mode. The performances of a typical parallel mild hybrid using these modes were analysed over urban driving cycles in the US, Europe, and India, with a particular focus on the Indian urban conditions. The energy pack available from the proposed energy-saving operating mode, in addition to the energy already available from the conventional mode, was observed to be the highest for the representative urban driving cycle of the US. The extra energy pack available was found to be approximately 21.9 times that available from the conventional mode. By employing the proposed fuel saving operating mode, the fuel economy improvement achievable in New York City was observed to be approximately 22.69% of the fuel economy with the conventional strategy. The energy saving strategy was found to possess the lowest payback periods and the highest immunity to variations in various cost parameters.

  14. GRAPES: a software for parallel searching on biological graphs targeting multi-core architectures.

    Directory of Open Access Journals (Sweden)

    Rosalba Giugno

Full Text Available Biological applications, from genomics to ecology, deal with graphs that represent the structure of interactions. Analyzing such data requires searching for subgraphs in collections of graphs. This task is computationally expensive. Even though multicore architectures, from commodity computers to more advanced symmetric multiprocessing (SMP) systems, offer scalable computing power, currently published software implementations for indexing and graph matching are fundamentally sequential. As a consequence, such software implementations (i) do not fully exploit available parallel computing power and (ii) do not scale with respect to the size of graphs in the database. We present GRAPES, software for parallel searching on databases of large biological graphs. GRAPES implements a parallel version of well-established graph searching algorithms, and introduces new strategies which naturally lead to a faster parallel searching system especially for large graphs. GRAPES decomposes graphs into subcomponents that can be efficiently searched in parallel. We show the performance of GRAPES on representative biological datasets containing antiviral chemical compounds, DNA, RNA, proteins, protein contact maps and protein interaction networks.

  15. Adaptive neighbor connection for PRMs: A natural fit for heterogeneous environments and parallelism

    KAUST Repository

    Ekenna, Chinwe

    2013-11-01

    Probabilistic Roadmap Methods (PRMs) are widely used motion planning methods that sample robot configurations (nodes) and connect them to form a graph (roadmap) containing feasible trajectories. Many PRM variants propose different strategies for each of the steps and choosing among them is problem dependent. Planning in heterogeneous environments and/or on parallel machines necessitates dividing the problem into regions where these choices have to be made for each one. Hand-selecting the best method for each region becomes infeasible. In particular, there are many ways to select connection candidates, and choosing the appropriate strategy is input dependent. In this paper, we present a general connection framework that adaptively selects a neighbor finding strategy from a candidate set of options. Our framework learns which strategy to use by examining their success rates and costs. It frees the user of the burden of selecting the best strategy and allows the selection to change over time. We perform experiments on rigid bodies of varying geometry and articulated linkages up to 37 degrees of freedom. Our results show that strategy performance is indeed problem/region dependent, and our adaptive method harnesses their strengths. Over all problems studied, our method differs the least from manual selection of the best method, and if one were to manually select a single method across all problems, the performance can be quite poor. Our method is able to adapt to changing sampling density and learns different strategies for each region when the problem is partitioned for parallelism. © 2013 IEEE.
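The abstract does not give the exact learning rule, so the sketch below is a generic cost-aware probability-matching selector over hypothetical connection strategies ("k-closest" and "random-neighbors" are made-up names): each strategy is chosen with probability proportional to its observed success rate divided by its observed average cost.

```python
import random

class AdaptiveSelector:
    """Pick among candidate strategies with probability proportional
    to (observed success rate) / (observed average cost)."""
    def __init__(self, names, rng):
        self.rng = rng
        # Optimistic initialization so every strategy gets tried.
        self.stats = {n: {"tries": 1, "wins": 1, "cost": 1.0} for n in names}

    def weight(self, n):
        s = self.stats[n]
        return (s["wins"] / s["tries"]) / (s["cost"] / s["tries"])

    def choose(self):
        names = list(self.stats)
        weights = [self.weight(n) for n in names]
        return self.rng.choices(names, weights=weights, k=1)[0]

    def update(self, n, success, cost):
        s = self.stats[n]
        s["tries"] += 1
        s["wins"] += int(success)
        s["cost"] += cost

rng = random.Random(0)
sel = AdaptiveSelector(["k-closest", "random-neighbors"], rng)
for _ in range(500):
    n = sel.choose()
    # Simulated outcomes: one strategy succeeds far more often.
    success = rng.random() < (0.9 if n == "k-closest" else 0.3)
    sel.update(n, success, cost=1.0)
counts = {n: sel.stats[n]["tries"] for n in sel.stats}
```

Over many rounds the selector concentrates its choices on the strategy with the better success/cost trade-off while still occasionally re-sampling the alternative.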

  16. Adaptive neighbor connection for PRMs: A natural fit for heterogeneous environments and parallelism

    KAUST Repository

    Ekenna, Chinwe; Jacobs, Sam Ade; Thomas, Shawna; Amato, Nancy M.

    2013-01-01

    Probabilistic Roadmap Methods (PRMs) are widely used motion planning methods that sample robot configurations (nodes) and connect them to form a graph (roadmap) containing feasible trajectories. Many PRM variants propose different strategies for each of the steps and choosing among them is problem dependent. Planning in heterogeneous environments and/or on parallel machines necessitates dividing the problem into regions where these choices have to be made for each one. Hand-selecting the best method for each region becomes infeasible. In particular, there are many ways to select connection candidates, and choosing the appropriate strategy is input dependent. In this paper, we present a general connection framework that adaptively selects a neighbor finding strategy from a candidate set of options. Our framework learns which strategy to use by examining their success rates and costs. It frees the user of the burden of selecting the best strategy and allows the selection to change over time. We perform experiments on rigid bodies of varying geometry and articulated linkages up to 37 degrees of freedom. Our results show that strategy performance is indeed problem/region dependent, and our adaptive method harnesses their strengths. Over all problems studied, our method differs the least from manual selection of the best method, and if one were to manually select a single method across all problems, the performance can be quite poor. Our method is able to adapt to changing sampling density and learns different strategies for each region when the problem is partitioned for parallelism. © 2013 IEEE.

  17. Ising ferromagnet: zero-temperature dynamic evolution

    International Nuclear Information System (INIS)

    Oliveira, P M C de; Newman, C M; Sidoravicious, V; Stein, D L

    2006-01-01

    The dynamic evolution at zero temperature of a uniform Ising ferromagnet on a square lattice is followed by Monte Carlo computer simulations. The system always eventually reaches a final, absorbing state, which sometimes coincides with a ground state (all spins parallel), and sometimes does not (parallel stripes of spins up and down). We initiate here the numerical study of 'chaotic time dependence' (CTD) by seeing how much information about the final state is predictable from the randomly generated quenched initial state. CTD was originally proposed to explain how nonequilibrium spin glasses could manifest an equilibrium pure state structure, but in simpler systems such as homogeneous ferromagnets it is closely related to long-term predictability and our results suggest that CTD might indeed occur in the infinite volume limit

  18. Progress in strategies for sequence diversity library creation for ...

    African Journals Online (AJOL)

    As the simplest technique of protein engineering, directed evolution has been ... An experiment of directed evolution comprises mutant libraries creation and ... evolution, sequence diversity creation, novel strategy, computational design, ...

  19. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  20. Patterns of gene flow and selection across multiple species of Acrocephalus warblers: footprints of parallel selection on the Z chromosome

    Czech Academy of Sciences Publication Activity Database

    Reifová, R.; Majerová, V.; Reif, J.; Ahola, M.; Lindholm, A.; Procházka, Petr

    2016-01-01

    Roč. 16, č. 130 (2016), s. 130 ISSN 1471-2148 Institutional support: RVO:68081766 Keywords : Adaptive radiation * Speciation * Gene flow * Parallel adaptive evolution * Z chromosome * Acrocephalus warblers Subject RIV: EG - Zoology Impact factor: 3.221, year: 2016

  1. Cognitive Function, Origin, and Evolution of Musical Emotions

    Directory of Open Access Journals (Sweden)

    Leonid Perlovsky

    2013-12-01

Full Text Available The cognitive function of music, and its origin and evolution, have been a mystery until recently. Here we discuss a theory of a fundamental function of music in cognition and culture. Music evolved in parallel with language. The evolution of language toward a semantically powerful tool required freeing from uncontrolled emotions. Knowledge evolved fast along with language. This created cognitive dissonances, contradictions between knowledge and instincts, which differentiated consciousness. To sustain the evolution of language and culture, these contradictions had to be unified. Music was the mechanism of unification. Differentiated emotions are needed for resolving cognitive dissonances. As knowledge accumulated, contradictions multiplied and correspondingly more varied emotions had to evolve. While language differentiated the psyche, music unified it. Thus the need for refined musical emotions in the process of cultural evolution is grounded in fundamental mechanisms of cognition. This is why today's human mind and cultures cannot exist without today's music.

  2. Vectorization and parallelization of the finite strip method for dynamic Mindlin plate problems

    Science.gov (United States)

    Chen, Hsin-Chu; He, Ai-Fang

    1993-01-01

    The finite strip method is a semi-analytical finite element process which allows for a discrete analysis of certain types of physical problems by discretizing the domain of the problem into finite strips. This method decomposes a single large problem into m smaller independent subproblems when m harmonic functions are employed, thus yielding natural parallelism at a very high level. In this paper we address vectorization and parallelization strategies for the dynamic analysis of simply-supported Mindlin plate bending problems and show how to prevent potential conflicts in memory access during the assemblage process. The vector and parallel implementations of this method and the performance results of a test problem under scalar, vector, and vector-concurrent execution modes on the Alliant FX/80 are also presented.

  3. Parallelized preconditioned BiCGStab solution of sparse linear system equations in F-COBRA-TF

    International Nuclear Information System (INIS)

    Geemert, Rene van; Glück, Markus; Riedmann, Michael; Gabriel, Harry

    2011-01-01

    Recently, the in-house development of a preconditioned and parallelized BiCGStab solver has been pursued successfully in AREVA’s advanced sub-channel code F-COBRA-TF. This solver can be run either in a sequential computation mode on a single CPU, or in a parallel computation mode on multiple parallel CPUs. The developed procedure enables the computation of several thousands of successive sparse linear system solutions in F-COBRA-TF with acceptable wall clock run times. The current paper provides general information about F-COBRA-TF in terms of modeling capabilities and application areas, and points out where the relevance arises for the efficient iterative solution of sparse linear systems. Furthermore, the preconditioning and parallelization strategies in the developed BiCGStab iterative solution approach are discussed. The paper is concluded with a number of verification examples. (author)
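F-COBRA-TF's solver is in-house, but the same ingredients (a sparse nonsymmetric system, an incomplete-factorization preconditioner, and the BiCGStab iteration) can be sketched with SciPy. The tridiagonal test matrix below stands in for a sub-channel system and is an assumption, not an F-COBRA-TF matrix.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

# Sparse, nonsymmetric test system (a 1D convection-diffusion-like stencil).
n = 200
main = 2.1 * np.ones(n)
lower = -1.2 * np.ones(n - 1)
upper = -0.9 * np.ones(n - 1)
A = sp.diags([lower, main, upper], [-1, 0, 1], format="csc")
b = np.ones(n)

# Incomplete LU factorization wrapped as a preconditioner operator.
ilu = spilu(A)
M = LinearOperator(A.shape, ilu.solve)

x, info = bicgstab(A, b, M=M, atol=1e-10)  # info == 0 signals convergence
residual = np.linalg.norm(b - A @ x)
```

In a production code the preconditioner setup and the matrix-vector products inside the iteration are the natural targets for parallelization across CPUs.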

  4. Convergent adaptive evolution in marginal environments: unloading transposable elements as a common strategy among mangrove genomes.

    Science.gov (United States)

    Lyu, Haomin; He, Ziwen; Wu, Chung-I; Shi, Suhua

    2018-01-01

Several clades of mangrove trees independently invade the interface between land and sea at the margin of woody plant distribution. As phenotypic convergence among mangroves is common, the possibility of convergent adaptation in their genomes is quite intriguing. To study this molecular convergence, we sequenced multiple mangrove genomes. In this study, we focused on the evolution of transposable elements (TEs) in relation to the genome size evolution. TEs, generally considered genomic parasites, are the most common components of woody plant genomes. Analyzing the long terminal repeat-retrotransposon (LTR-RT) type of TE, we estimated their death rates by counting solo-LTRs and truncated elements. We found that all lineages of mangroves massively and convergently reduce TE loads in comparison to their nonmangrove relatives; as a consequence, genome size reduction happens independently in all six mangrove lineages; TE load reduction in mangroves can be attributed to the paucity of young elements; the rarity of young LTR-RTs is a consequence of fewer births rather than excess deaths. In conclusion, mangrove genomes employ a convergent strategy of TE load reduction by suppressing element origination in their independent adaptation to a new environment. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  5. What is adaptive about adaptive decision making? A parallel constraint satisfaction account.

    Science.gov (United States)

    Glöckner, Andreas; Hilbig, Benjamin E; Jekel, Marc

    2014-12-01

There is broad consensus that human cognition is adaptive. However, the vital question of how exactly this adaptivity is achieved has remained largely open. Herein, we contrast two frameworks which account for adaptive decision making, namely broad and general single-mechanism accounts vs. multi-strategy accounts. We propose and fully specify a single-mechanism model for decision making based on parallel constraint satisfaction processes (PCS-DM) and contrast it theoretically and empirically against a multi-strategy account. To achieve sufficiently sensitive tests, we rely on a multiple-measure methodology including choice, reaction time, and confidence data as well as eye-tracking. Results show that manipulating the environmental structure produces clear adaptive shifts in choice patterns - as both frameworks would predict. However, results on the process level (reaction time, confidence), in information acquisition (eye-tracking), and from cross-predicting choice consistently corroborate single-mechanism accounts in general, and the proposed parallel constraint satisfaction model for decision making in particular. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Sequential and parallel image restoration: neural network implementations.

    Science.gov (United States)

    Figueiredo, M T; Leitao, J N

    1994-01-01

    Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high dimension convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem into the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm supported on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
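The record notes that MAP estimation with a quadratic prior reduces to a convex optimization problem. A minimal sketch, assuming an identity regularizer and a random matrix H standing in for the blur operator, minimizes ||Hx - y||^2 + lam*||x||^2 by plain gradient descent (in place of the neural-network schemes) and checks the iterate against the closed-form ridge solution.

```python
import numpy as np

def restore(H, y, lam, steps=500, lr=None):
    """Minimize ||Hx - y||^2 + lam*||x||^2 by gradient descent
    (identity regularizer D = I for simplicity)."""
    n = H.shape[1]
    # The gradient's Lipschitz constant bounds a safe step size.
    L = 2 * (np.linalg.norm(H, 2) ** 2 + lam)
    lr = lr or 1.0 / L
    x = np.zeros(n)
    for _ in range(steps):
        grad = 2 * H.T @ (H @ x - y) + 2 * lam * x
        x -= lr * grad
    return x

rng = np.random.default_rng(0)
H = rng.normal(size=(30, 20)) / np.sqrt(30)   # stand-in for a blur operator
x_true = rng.normal(size=20)
y = H @ x_true + 0.01 * rng.normal(size=30)   # blurred, noisy observation
lam = 0.1
x_hat = restore(H, y, lam)
# Closed-form solution of the same convex problem, for comparison.
x_closed = np.linalg.solve(H.T @ H + lam * np.eye(20), H.T @ y)
```

The update is embarrassingly parallel across pixels, which is what makes network implementations with parallel updating schedules attractive.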

  7. Tuning HDF5 subfiling performance on parallel file systems

    Energy Technology Data Exchange (ETDEWEB)

    Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chaarawi, Mohamad [Intel Corp. (United States); Koziol, Quincey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mainzer, John [The HDF Group (United States); Willmore, Frank [The HDF Group (United States)

    2017-05-12

Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single-shared-file approach that instigates the lock contention problems on parallel file systems and having one file per process, which results in generating a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of the recently implemented subfiling feature in HDF5. Specifically, we explain the implementation strategy of the subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune parallel I/O performance of this feature with parallel file systems of the Cray XC40 system at NERSC (Cori) that include a burst buffer storage and a Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show a 1.2X to 6X performance advantage with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets for storing files, as optimization parameters to obtain superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations of using the subfiling feature.

  8. A New Distribution Strategy : The Omnichannel Strategy

    Directory of Open Access Journals (Sweden)

    Mihaela Gabriela Belu

    2014-06-01

Full Text Available In an increasingly globalized world, dependent on information technology, distribution companies are searching for new marketing models meant to enrich the consumer’s experience. Therefore, the evolution of new technologies and the changes in the consumer’s behaviour are the main factors that determine changes in the business model in the distribution field. The following article presents different forms of distribution, focusing on the omnichannel strategy. In the last part of the paper, the authors analyse the Romanian retail market, namely, the evolution of the market, its key competitors and the new distribution models adopted by retailers in our country.

  9. A Circulating Current Suppression Method for Parallel Connected Voltage-Source-Inverters (VSI) with Common DC and AC Buses

    DEFF Research Database (Denmark)

    Wei, Baoze; Guerrero, Josep M.; Quintero, Juan Carlos Vasquez

    2016-01-01

This paper describes a theoretical and experimental study of a control strategy for the parallel operation of three-phase voltage source inverters (VSI), to be applied to uninterruptible power systems (UPS). A circulating current suppression strategy for parallel VSIs is proposed in this paper based...... on circulating current control loops used to modify the reference currents by compensating the error currents among parallel inverters. Both the cross- and zero-sequence circulating currents are considered. The proposed method is coordinated together with droop and virtual impedance control. In this paper......, droop control is used to generate the reference voltage of each inverter, and the virtual impedance is used to fix the output impedance of the inverters. In addition, a secondary control is used in order to recover the voltage deviation caused by the virtual impedance. And the auxiliary current control...
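The droop control mentioned in this record follows the standard P-f and Q-V droop laws, in which an inverter lowers its frequency setpoint as it supplies more active power and its voltage setpoint as it supplies more reactive power. The gains and nominal values below are illustrative, not taken from the paper.

```python
def droop_setpoints(P, Q, f0=50.0, V0=230.0, kp=1e-4, kq=1e-3):
    """Standard droop laws: frequency falls linearly with active power P,
    voltage amplitude falls linearly with reactive power Q.
    f0/V0 are nominal setpoints; kp/kq are illustrative droop gains."""
    f = f0 - kp * P
    V = V0 - kq * Q
    return f, V

# An inverter supplying 5 kW / 1 kvar lowers its setpoints accordingly.
f, V = droop_setpoints(P=5000.0, Q=1000.0)
```

Because all parallel inverters follow the same droop curves, they share load without communication; the virtual impedance and secondary control described in the abstract then shape the output impedance and restore the voltage deviation.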

  10. Crossing the mesoscale no-mans land via parallel kinetic Monte Carlo.

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Cardona, Cristina (San Diego State University); Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander (U. S. Department of Energy, NNSA); Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan

    2009-10-01

    The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
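SPPARKS itself is not shown here, but the serial building block of any kinetic Monte Carlo code is the rejection-free (Gillespie-style) event step: pick an event with probability proportional to its rate, then advance time by an exponentially distributed increment. The event rates below are illustrative, not from any SPPARKS model.

```python
import math
import random

def kmc_step(rates, rng):
    """Rejection-free KMC step: select event i with probability
    rate_i / total, and advance time by Exp(mean = 1/total)."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(rng.random()) / total
    return i, dt

rng = random.Random(1)
rates = [0.1, 1.0, 5.0]           # three event channels, fast to slow
t, counts = 0.0, [0, 0, 0]
for _ in range(3000):
    i, dt = kmc_step(rates, rng)
    counts[i] += 1
    t += dt
```

Parallelizing this loop is hard precisely because the global clock advances one event at a time; parallel KMC schemes partition the domain and resolve boundary conflicts between subdomains.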

  11. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm sup 3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm sup 2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.

Parallelization Issues and Particle-In-Cell Codes.

    Science.gov (United States)

    Elster, Anne Cathrine

    1994-01-01

    "Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field, show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks, have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load-balance with respect to particles. However, our results demonstrate it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies, becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid-points within the same cache -line by reordering the grid indexing. This alignment produces a 25% savings in cache-hits for a 4-by-4 cache. 
A consideration of the input data's effect on
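The dual-pointer partial-sorting scheme itself is not reproduced in the abstract. As a hedged stand-in, the sketch below bins a 1D particle array by grid cell with a stable sort so that particles in the same cell are contiguous in memory, which is the locality effect the record attributes to keeping particle locations partially sorted.

```python
import numpy as np

def sort_particles_by_cell(positions, ncells, domain=1.0):
    """Sort 1D particle positions by grid-cell index so that particles
    sharing a cell are contiguous in memory (better cache behavior)."""
    cells = np.minimum((positions / domain * ncells).astype(int), ncells - 1)
    order = np.argsort(cells, kind="stable")
    return positions[order], cells[order]

rng = np.random.default_rng(7)
pos = rng.random(1000)                     # particles in [0, 1)
sorted_pos, sorted_cells = sort_particles_by_cell(pos, ncells=16)
```

A cell-sorted layout also makes domain decomposition straightforward: each processor takes a contiguous block of cells together with the particles currently inside them.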

  13. A CS1 pedagogical approach to parallel thinking

    Science.gov (United States)

    Rague, Brian William

    Almost all collegiate programs in Computer Science offer an introductory course in programming primarily devoted to communicating the foundational principles of software design and development. The ACM designates this introduction to computer programming course for first-year students as CS1, during which methodologies for solving problems within a discrete computational context are presented. Logical thinking is highlighted, guided primarily by a sequential approach to algorithm development and made manifest by typically using the latest, commercially successful programming language. In response to the most recent developments in accessible multicore computers, instructors of these introductory classes may wish to include training on how to design workable parallel code. Novel issues arise when programming concurrent applications which can make teaching these concepts to beginning programmers a seemingly formidable task. Student comprehension of design strategies related to parallel systems should be monitored to ensure an effective classroom experience. This research investigated the feasibility of integrating parallel computing concepts into the first-year CS classroom. To quantitatively assess student comprehension of parallel computing, an experimental educational study using a two-factor mixed group design was conducted to evaluate two instructional interventions in addition to a control group: (1) topic lecture only, and (2) topic lecture with laboratory work using a software visualization Parallel Analysis Tool (PAT) specifically designed for this project. A new evaluation instrument developed for this study, the Perceptions of Parallelism Survey (PoPS), was used to measure student learning regarding parallel systems. 
The results from this educational study show a statistically significant main effect among the repeated measures, implying that student comprehension levels of parallel concepts as measured by the PoPS improve immediately after the delivery of

  14. Frame-Based and Subpicture-Based Parallelization Approaches of the HEVC Video Encoder

    Directory of Open Access Journals (Sweden)

    Héctor Migallón

    2018-05-01

    Full Text Available The most recent video coding standard, High Efficiency Video Coding (HEVC), is able to significantly improve the compression performance at the expense of a huge computational complexity increase with respect to its predecessor, H.264/AVC. Parallel versions of the HEVC encoder may help to reduce the overall encoding time in order to make it more suitable for practical applications. In this work, we study two parallelization strategies. One of them follows a coarse-grain approach, where parallelization is based on frames, and the other one follows a fine-grain approach, where parallelization is performed at the subpicture level. Two different frame-based approaches have been developed. The first one only uses MPI and the second one is a hybrid MPI/OpenMP algorithm. An exhaustive experimental test was carried out to study the performance of both approaches in order to find the best setup in terms of parallel efficiency and coding performance. Both frame-based and subpicture-based approaches are compared under the same hardware platform. Although subpicture-based schemes provide an excellent performance with high-resolution video sequences, scalability is limited by resolution, and the coding performance worsens as the number of processes increases. Conversely, the proposed frame-based approaches provide the best results with respect to both parallel performance (increasing scalability) and coding performance (not degrading the rate/distortion behavior).
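The coarse-grain, frame-based strategy in this record can be sketched as follows. This is a toy stand-in, not the authors' encoder: a worker pool replaces the MPI processes of the paper (a real encoder, being CPU-bound, would use processes or MPI ranks rather than threads), and `encode_frame` is a hypothetical placeholder, not an HEVC codec.

```python
from concurrent.futures import ThreadPoolExecutor

def encode_frame(frame):
    # Placeholder "encoding": a trivial checksum of the raw samples.
    return sum(frame) % 256

def encode_parallel(frames, workers=4):
    # Whole frames are encoded independently (coarse granularity).
    # map() preserves frame order, so output chunks line up with the
    # input frames even though encoding finishes out of order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encode_frame, frames))

frames = [[i, i + 1, i + 2] for i in range(8)]
assert encode_parallel(frames) == [encode_frame(f) for f in frames]
```

The key property illustrated is that frame-level parallelism needs no coordination inside a frame, which is why its scalability is not capped by picture resolution the way subpicture partitioning is.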

  15. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: ``Can parallel computers be used to do large-scale scientific computations?'' As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  16. Discontinuous interleaving of parallel inverters for efficiency improvement

    DEFF Research Database (Denmark)

    Rannestad, Bjørn; Munk-Nielsen, Stig; Gadgaard, Kristian

    2017-01-01

    Interleaved switching of parallel inverters has previously been proposed for efficiency/size improvements of grid-connected three-phase inverters. This paper proposes a novel interleaving method which practically eliminates insulated gate bipolar transistor (IGBT) turn-on losses and drastically...... overall power module losses are reduced. The modulation strategy is suited for converters with doubly fed induction generators (DFIG) for wind turbines, but is not limited hereto. Improvements in switching performance are measured and operational efficiency improvements are calculated and verified...

  17. Experimental evolution and the adjustment of metabolic strategies in lactic acid bacteria

    NARCIS (Netherlands)

    Bachmann, Herwig; Molenaar, Douwe; Branco dos Santos, Filipe; Teusink, Bas

    2017-01-01

    Experimental evolution of microbes has gained lots of interest in recent years, mainly due to the ease of strain characterisation through next-generation sequencing. While evolutionary and systems biologists use experimental evolution to address fundamental questions in their respective fields,

  18. Vlasov modelling of parallel transport in a tokamak scrape-off layer

    International Nuclear Information System (INIS)

    Manfredi, G; Hirstoaga, S; Devaux, S

    2011-01-01

    A one-dimensional Vlasov-Poisson model is used to describe the parallel transport in a tokamak scrape-off layer. Thanks to a recently developed 'asymptotic-preserving' numerical scheme, it is possible to lift numerical constraints on the time step and grid spacing, which are no longer limited by, respectively, the electron plasma period and Debye length. The Vlasov approach provides a good velocity-space resolution even in regions of low density. The model is applied to the study of parallel transport during edge-localized modes, with particular emphasis on the particle and energy fluxes on the divertor plates. The numerical results are compared with analytical estimates based on a free-streaming model, with good general agreement. An interesting feature is the observation of an early electron energy flux, due to suprathermal electrons escaping the ions' attraction. In contrast, the long-time evolution is essentially quasi-neutral and dominated by the ion dynamics.
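The free-streaming estimate used as the analytical reference in this record reduces to a very simple picture, sketched below under illustrative assumptions (all numbers are arbitrary, not from the paper): particles released at t = 0 travel ballistically to a plate at distance L, so their arrival time is t = L / v, and the fastest (suprathermal) electrons arrive first, which is the qualitative origin of the early electron energy flux.

```python
def arrival_time(L, v):
    """Ballistic (free-streaming) time of flight to a plate at distance L."""
    return L / v

L = 10.0          # connection length to the divertor plate (arbitrary units)
v_thermal = 1.0   # thermal electron speed (arbitrary units)
v_supra = 5.0     # suprathermal tail speed (arbitrary units)

# Suprathermal electrons reach the plate well before the thermal bulk,
# producing an early burst of electron energy flux.
assert arrival_time(L, v_supra) < arrival_time(L, v_thermal)
```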

  19. Vlasov modelling of parallel transport in a tokamak scrape-off layer

    Energy Technology Data Exchange (ETDEWEB)

    Manfredi, G [Institut de Physique et Chimie des Materiaux, CNRS and Universite de Strasbourg, BP 43, F-67034 Strasbourg (France); Hirstoaga, S [INRIA Nancy Grand-Est and Institut de Recherche en Mathematiques Avancees, 7 rue Rene Descartes, F-67084 Strasbourg (France); Devaux, S, E-mail: Giovanni.Manfredi@ipcms.u-strasbg.f, E-mail: hirstoaga@math.unistra.f, E-mail: Stephane.Devaux@ccfe.ac.u [JET-EFDA, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom)

    2011-01-15

    A one-dimensional Vlasov-Poisson model is used to describe the parallel transport in a tokamak scrape-off layer. Thanks to a recently developed 'asymptotic-preserving' numerical scheme, it is possible to lift numerical constraints on the time step and grid spacing, which are no longer limited by, respectively, the electron plasma period and Debye length. The Vlasov approach provides a good velocity-space resolution even in regions of low density. The model is applied to the study of parallel transport during edge-localized modes, with particular emphasis on the particle and energy fluxes on the divertor plates. The numerical results are compared with analytical estimates based on a free-streaming model, with good general agreement. An interesting feature is the observation of an early electron energy flux, due to suprathermal electrons escaping the ions' attraction. In contrast, the long-time evolution is essentially quasi-neutral and dominated by the ion dynamics.

  20. Cooperative parallel adaptive neighbourhood search for the disjunctively constrained knapsack problem

    Science.gov (United States)

    Quan, Zhe; Wu, Lei

    2017-09-01

    This article investigates the use of parallel computing for solving the disjunctively constrained knapsack problem. The proposed parallel computing model can be viewed as a cooperative algorithm based on a multi-neighbourhood search. The cooperation system is composed of a team manager and a crowd of team members. The team members aim at applying their own search strategies to explore the solution space. The team manager collects the solutions from the members and shares the best one with them. The performance of the proposed method is evaluated on a group of benchmark data sets. The results obtained are compared to those reached by the best methods from the literature. The results show that the proposed method is able to provide the best solutions in most cases. In order to highlight the robustness of the proposed parallel computing model, a new set of large-scale instances is introduced. Encouraging results have been obtained.
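The manager/members cooperation scheme in this record can be illustrated on a toy instance of the disjunctively constrained knapsack. Everything below is illustrative, not the authors' algorithm or data: the "team members" are simple greedy strategies with different item orderings, and the "team manager" collects their solutions and keeps the best feasible one.

```python
profits  = [10, 8, 7, 6, 4]
weights  = [5, 4, 3, 3, 1]
capacity = 8
conflicts = {(0, 1), (2, 3)}   # disjunctive pairs: at most one of each

def feasible(sol):
    if sum(weights[i] for i in sol) > capacity:
        return False
    return not any(a in sol and b in sol for a, b in conflicts)

def value(sol):
    return sum(profits[i] for i in sol)

def greedy(order):
    """One team member: insert items in its own preferred order."""
    sol = set()
    for i in order:
        if feasible(sol | {i}):
            sol.add(i)
    return sol

# Each member explores with a different ordering (its own search strategy);
# the manager keeps the best solution found by the crowd.
members = [
    sorted(range(5), key=lambda i: -profits[i]),               # profit-greedy
    sorted(range(5), key=lambda i: -profits[i] / weights[i]),  # density-greedy
    sorted(range(5), key=lambda i: weights[i]),                # lightest-first
]
best = max((greedy(order) for order in members), key=value)
assert feasible(best)
```

In the full method the members are neighbourhood searches that keep running and periodically receive the manager's incumbent; the toy version only captures the collect-and-share structure.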

  1. Online Evolution for Multi-Action Adversarial Games

    OpenAIRE

    Justesen, Niels; Mahlmann, Tobias; Togelius, Julian

    2016-01-01

    We present Online Evolution, a novel method for playing turn-based multi-action adversarial games. Such games, which include most strategy games, have extremely high branching factors due to each turn having multiple actions. In Online Evolution, an evolutionary algorithm is used to evolve the combination of atomic actions that make up a single move, with a state evaluation function used for fitness. We implement Online Evolution for the turn-based multi-action game Hero Academy and compare i...
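The core loop of Online Evolution can be sketched as below. The game, atomic actions, and evaluation function here are toy stand-ins (not Hero Academy or the authors' evaluator): a fixed-length combination of atomic actions for one turn is evolved, scored by a state evaluation function used as fitness.

```python
import random

random.seed(1)
ACTIONS = list(range(10))   # atomic actions, identified by integers
TURN_LENGTH = 5             # actions per move, as in multi-action games

def evaluate(move):
    # Hypothetical state evaluator: pretend action values add up, with a
    # penalty for repeating an action (e.g. moving the same unit twice).
    return sum(move) - 5 * (len(move) - len(set(move)))

def online_evolution(pop_size=20, generations=30):
    pop = [[random.choice(ACTIONS) for _ in range(TURN_LENGTH)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        survivors = pop[: pop_size // 2]         # truncation selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, TURN_LENGTH)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:                # mutation
                child[random.randrange(TURN_LENGTH)] = random.choice(ACTIONS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=evaluate)

best = online_evolution()
assert evaluate(best) >= evaluate([0, 1, 2, 3, 4])
```

Because the whole population is re-evolved within a single turn's time budget, the approach sidesteps the enormous branching factor that defeats tree search in multi-action games.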

  2. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
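The template-comparison idea in this record has the flavor of an rsync-style delta, sketched below under illustrative assumptions (tiny block size, SHA-256 checksums; the patent does not prescribe these choices): a node's checkpoint is split into small blocks, each block is checksummed, and only blocks whose checksums differ from the stored template are transmitted.

```python
import hashlib

BLOCK = 4  # bytes per block (tiny, for demonstration)

def checksums(data):
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def delta(template, current):
    """Return {block_index: bytes} for blocks that changed vs. the template."""
    t, c = checksums(template), checksums(current)
    return {i: current[i * BLOCK:(i + 1) * BLOCK]
            for i in range(len(c)) if i >= len(t) or c[i] != t[i]}

def restore(template, patch, length):
    """Rebuild the current checkpoint from the template plus the delta."""
    out = bytearray(template[:length].ljust(length, b"\0"))
    for i, blk in patch.items():
        out[i * BLOCK:i * BLOCK + len(blk)] = blk
    return bytes(out)

template = b"AAAABBBBCCCCDDDD"
current  = b"AAAAXXXXCCCCDDDD"   # only the second block changed
patch = delta(template, current)
assert list(patch) == [1]                        # one block to transmit
assert restore(template, patch, len(current)) == current
```

When checkpoints change little between saves, the delta is a small fraction of the full image, which is the source of the bandwidth and storage savings claimed above.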

  3. Phenomenon in the Evolution of Voles (Mammalia, Rodentia, Arvicolidae)

    Directory of Open Access Journals (Sweden)

    Rekovets L. I.

    2017-04-01

    Full Text Available This paper presents analytical results of the study of adaptatiogenesis within the family Arvicolidae (Mammalia, Rodentia) based on morphological changes of the most functional characters of their masticatory apparatus — the dental system — through time. The main directions of the morphological differentiation in parallel evolution of the arvicolid tooth type within the Cricetidae and Arvicolidae during the late Miocene and Pliocene were identified and substantiated. It is shown that such a unique morphological structure as the arvicolid tooth type has provided a relatively high rate of evolution of voles and a wide range of adaptive radiation, as well as determined their taxonomic and ecological diversity. The optimality of the current state of this group and an evaluation of the evolutionary prospects of Arvicolidae are presented and substantiated here as a phenomenon in their evolution.

  4. Evolution of strategies for modern rechargeable batteries.

    Science.gov (United States)

    Goodenough, John B

    2013-05-21

    This Account provides perspective on the evolution of the rechargeable battery and summarizes innovations in the development of these devices. Initially, I describe the components of a conventional rechargeable battery along with the engineering parameters that define the figures of merit for a single cell. In 1967, researchers discovered fast Na(+) conduction at 300 K in Na β,β''-alumina. Since then battery technology has evolved from a strongly acidic or alkaline aqueous electrolyte with protons as the working ion to an organic liquid-carbonate electrolyte with Li(+) as the working ion in a Li-ion battery. The invention of the sodium-sulfur and Zebra batteries stimulated consideration of framework structures as crystalline hosts for mobile guest alkali ions, and the jump in oil prices in the early 1970s prompted researchers to consider alternative room-temperature batteries with aprotic liquid electrolytes. With the existence of Li primary cells and ongoing research on the chemistry of reversible Li intercalation into layered chalcogenides, industry invested in the production of a Li/TiS2 rechargeable cell. However, on repeated recharge, dendrites grew across the electrolyte from the anode to the cathode, leading to dangerous short-circuits in the cell in the presence of the flammable organic liquid electrolyte. Because lowering the voltage of the anode would prevent cells with layered-chalcogenide cathodes from competing with cells that had an aqueous electrolyte, researchers quickly abandoned this effort. However, once it was realized that an oxide cathode could offer a larger voltage versus lithium, researchers considered the extraction of Li from the layered LiMO2 oxides with M = Co or Ni. These oxide cathodes were fabricated in a discharged state, and battery manufacturers could not conceive of assembling a cell with a discharged cathode. Meanwhile, exploration of Li intercalation into graphite showed that reversible Li insertion into carbon occurred

  5. Learning strategies in excellent and average university students. Their evolution over the first year of the career

    Directory of Open Access Journals (Sweden)

    Gargallo, Bernardo

    2012-12-01

    Full Text Available The aim of this paper was to analyze the evolution of learning strategies of two groups of students, excellent and average, from 11 degrees of the UPV (Valencia, Spain) in their freshman year. We used the CEVEAPEU questionnaire. The results confirmed that the excellent students command better strategies, and also revealed evolutionary patterns in which affective-emotional strategies, such as value of the task or internal attributions, decrease, while others, such as extrinsic motivation and external attributions, increase. It seems that the students' expectations are not met in the new context, and professors bear important responsibilities there. The aim of this work was to analyze the evolution of the learning strategies of excellent and average students from 11 degree programmes of the UPV (Valencia) during their first year. The students answered the CEVEAPEU questionnaire at three points in time. The results showed better strategies among the excellent students. They also confirmed evolutionary patterns in which relevant affective-emotional strategies, such as value of the task or internal attributions, decrease, while others, such as extrinsic motivation and external attributions, increase. It seems that students' expectations are not satisfied in the process of adapting to the new context, and there professors have unavoidable responsibilities.

  6. Parallel force assay for protein-protein interactions.

    Directory of Open Access Journals (Sweden)

    Daniela Aschenbrenner

    Full Text Available Quantitative proteome research is greatly promoted by high-resolution parallel format assays. A characterization of protein complexes based on binding forces offers an unparalleled dynamic range and allows for the effective discrimination of non-specific interactions. Here we present a DNA-based Molecular Force Assay to quantify protein-protein interactions, namely the bond between different variants of GFP and GFP-binding nanobodies. We present different strategies to adjust the maximum sensitivity window of the assay by influencing the binding strength of the DNA reference duplexes. The binding of the nanobody Enhancer to the different GFP constructs is compared at high sensitivity of the assay. Whereas the binding strength to wild type and enhanced GFP are equal within experimental error, stronger binding to superfolder GFP is observed. This difference in binding strength is attributed to alterations in the amino acids that form contacts according to the crystal structure of the initial wild type GFP-Enhancer complex. Moreover, we outline the potential for large-scale parallelization of the assay.

  7. Parallel computing and molecular dynamics of biological membranes

    International Nuclear Information System (INIS)

    La Penna, G.; Letardi, S.; Minicozzi, V.; Morante, S.; Rossi, G.C.; Salina, G.

    1998-01-01

    In this talk I discuss the general question of the portability of molecular dynamics codes for diffusive systems on parallel computers of the APE family. The intrinsic single precision of the platforms available today does not seem to affect the numerical accuracy of the simulations, while the absence of integer addressing from the CPU to individual nodes puts strong constraints on possible programming strategies. Liquids can be satisfactorily simulated using the ''systolic'' method. For more complex systems, like the biological ones in which we are ultimately interested, the ''domain decomposition'' approach is best suited to beat the quadratic growth of the inter-molecular computational time with the number of atoms of the system. The promising perspectives of using this strategy for extensive simulations of lipid bilayers are briefly reviewed. (orig.)

  8. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  9. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  10. Adaptive social learning strategies in temporally and spatially varying environments : how temporal vs. spatial variation, number of cultural traits, and costs of learning influence the evolution of conformist-biased transmission, payoff-biased transmission, and individual learning.

    Science.gov (United States)

    Nakahashi, Wataru; Wakano, Joe Yuichiro; Henrich, Joseph

    2012-12-01

    Long before the origins of agriculture human ancestors had expanded across the globe into an immense variety of environments, from Australian deserts to Siberian tundra. Survival in these environments did not principally depend on genetic adaptations, but instead on evolved learning strategies that permitted the assembly of locally adaptive behavioral repertoires. To develop hypotheses about these learning strategies, we have modeled the evolution of learning strategies to assess what conditions and constraints favor which kinds of strategies. To build on prior work, we focus on clarifying how spatial variability, temporal variability, and the number of cultural traits influence the evolution of four types of strategies: (1) individual learning, (2) unbiased social learning, (3) payoff-biased social learning, and (4) conformist transmission. Using a combination of analytic and simulation methods, we show that spatial-but not temporal-variation strongly favors the emergence of conformist transmission. This effect intensifies when migration rates are relatively high and individual learning is costly. We also show that increasing the number of cultural traits above two favors the evolution of conformist transmission, which suggests that the assumption of only two traits in many models has been conservative. We close by discussing how (1) spatial variability represents only one way of introducing the low-level, nonadaptive phenotypic trait variation that so favors conformist transmission, the other obvious way being learning errors, and (2) our findings apply to the evolution of conformist transmission in social interactions. Throughout we emphasize how our models generate empirical predictions suitable for laboratory testing.
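The conformist transmission studied in this record has a classic deterministic form (after Boyd and Richerson) that can be sketched in a few lines. This is only the single-trait, single-population kernel with an illustrative conformity strength, not the paper's full model with migration, multiple traits, and learning costs: with conformity strength D, the frequency p of a trait is pushed toward whichever variant is already in the majority, p' = p + D·p·(1 − p)·(2p − 1).

```python
def conformist_step(p, D=0.3):
    # Boyd-Richerson-style conformist update: the correction term is
    # positive when p > 1/2 and negative when p < 1/2, so the majority
    # variant is amplified; D = 0.3 is an illustrative strength.
    return p + D * p * (1.0 - p) * (2.0 * p - 1.0)

p = 0.6                      # trait starts in the majority
history = [p]
for _ in range(50):
    p = conformist_step(p)
    history.append(p)

# Conformity amplifies the majority: the frequency rises monotonically
# toward fixation.
assert all(b >= a for a, b in zip(history, history[1:]))
assert history[-1] > 0.99
```

This amplification of whatever is locally common is exactly why spatial variation (which makes the local majority informative about the local environment) favors conformism while purely temporal variation does not.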

  11. Co-Evolution: Law and Institutions in International Ethics Research

    NARCIS (Netherlands)

    Millar-Schijf, Carla C.J.M.; Cheng, Philip Y.K.; Choi, Chong-Ju

    2009-01-01

    Despite the importance of the co-evolution approach in various branches of research, such as strategy, organisation theory, complexity, population ecology, technology and innovation (Lewin et al., 1999; March, 1991), co-evolution has been relatively neglected in international business and ethics

  12. Design of high-performance parallelized gene predictors in MATLAB.

    Science.gov (United States)

    Rivard, Sylvain Robert; Mailloux, Jean-Gabriel; Beguenane, Rachid; Bui, Hung Tien

    2012-04-10

    This paper proposes a method of implementing parallel gene prediction algorithms in MATLAB. The proposed designs are based on either Goertzel's algorithm or on FFTs and have been implemented using varying amounts of parallelism on a central processing unit (CPU) and on a graphics processing unit (GPU). Results show that an implementation using a straightforward approach can require over 4.5 h to process 15 million base pairs (bps) whereas a properly designed one could perform the same task in less than five minutes. In the best case, a GPU implementation can yield these results in 57 s. The present work shows how parallelism can be used in MATLAB for gene prediction in very large DNA sequences to produce results that are over 270 times faster than a conventional approach. This is significant as MATLAB is typically overlooked due to its apparent slow processing time even though it offers a convenient environment for bioinformatics. From a practical standpoint, this work proposes two strategies for accelerating genome data processing which rely on different parallelization mechanisms. Using a CPU, the work shows that direct access to the MEX function increases execution speed and that the PARFOR construct should be used in order to take full advantage of the parallelizable Goertzel implementation. When the target is a GPU, the work shows that data needs to be segmented into manageable sizes within the GFOR construct before processing in order to minimize execution time.
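The Goertzel recurrence this record builds on evaluates the power of a single DFT bin k in O(N), which suits the period-3 (k = N/3) signature used in gene prediction. The sketch below is an illustrative Python reconstruction, not the authors' MATLAB code, and the base-to-number mapping is a simple binary indicator rather than the paper's exact encoding.

```python
import cmath
import math

def goertzel_power(x, k):
    """Power of DFT bin k of sequence x via the Goertzel recurrence."""
    n = len(x)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # |X[k]|^2 recovered from the final two recurrence states
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def dft_power(x, k):
    """Direct O(N) evaluation of the same bin, for cross-checking."""
    n = len(x)
    X = sum(x[m] * cmath.exp(-2j * math.pi * k * m / n) for m in range(n))
    return abs(X) ** 2

seq = "ATGGCGATGACCATGTTG"                   # toy DNA fragment
x = [1.0 if b == "G" else 0.0 for b in seq]  # binary indicator for base G
k = len(x) // 3                              # the period-3 bin
assert math.isclose(goertzel_power(x, k), dft_power(x, k), rel_tol=1e-9)
```

The inner loop over independent windows of a long sequence is embarrassingly parallel, which is what the paper's PARFOR (CPU) and GFOR (GPU) variants exploit.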

  13. Costly advertising and the evolution of cooperation

    OpenAIRE

    Brede, Markus

    2013-01-01

    In this paper, I investigate the co-evolution of fast and slow strategy spread and game strategies in populations of spatially distributed agents engaged in a one off evolutionary dilemma game. Agents are characterized by a pair of traits, a game strategy (cooperate or defect) and a binary 'advertising' strategy (advertise or don't advertise). Advertising, which comes at a cost [Formula: see text], allows investment into faster propagation of the agents' traits to adjacent individuals. Import...

  14. A hybrid method for the parallel computation of Green's functions

    DEFF Research Database (Denmark)

    Petersen, Dan Erik; Li, Song; Stokbro, Kurt

    2009-01-01

    of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds...... of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only...... require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size....

  15. Towards a Universal Biology: Is the Origin and Evolution of Life Predictable?

    Science.gov (United States)

    Rothschild, Lynn J.

    2017-01-01

    The origin and evolution of life seems an unpredictable oddity, based on the quirks of contingency. Celebrated by the late Stephen Jay Gould in several books, "evolution by contingency" has all the adventure of a thriller, but lacks the predictive power of the physical sciences. Not necessarily so, replied Simon Conway Morris, for convergence reassures us that certain evolutionary responses are replicable. The outcome of this debate is critical to Astrobiology. How can we understand where we came from on Earth without prophecy? Further, we cannot design a rational strategy for the search for life elsewhere - or to understand what the future will hold for life on Earth and beyond - without extrapolating from pre-biotic chemistry and evolution. There are several indirect approaches to understanding, and thus describing, what life must be. These include philosophical approaches to defining life (is there even a satisfactory definition of life?), using what we know of physics, chemistry and life to imagine alternate scenarios, using different approaches that life takes as pseudoreplicates (e.g., ribosomal vs non-ribosomal protein synthesis), and experimental approaches to understand the art of the possible. Given that: (1) Life is a process based on physical components rather than simply an object; (2) Life is likely based on organic carbon and needs a solvent for chemistry, most likely water; and (3) Looking for convergence in terrestrial evolution we can predict certain tendencies, if not quite "laws", that provide predictive power. Biological history must obey the laws of physics and chemistry, the principles of natural selection, the constraints of an evolutionary past, genetics, and developmental biology. This amalgam creates a surprising amount of predictive power in the broad outline. Critical is the apparent prevalence of organic chemistry, and uniformity in the universe of the laws of chemistry and physics. Instructive is the widespread occurrence of

  16. Harmony Search Based Parameter Ensemble Adaptation for Differential Evolution

    Directory of Open Access Journals (Sweden)

    Rammohan Mallipeddi

    2013-01-01

    Full Text Available In the differential evolution (DE) algorithm, depending on the characteristics of the problem at hand and the available computational resources, different strategies combined with different sets of parameters may be effective. In addition, a single, well-tuned combination of strategies and parameters may not guarantee optimal performance because different strategies combined with different parameter settings can be appropriate during different stages of the evolution. Therefore, various adaptive/self-adaptive techniques have been proposed to adapt the DE strategies and parameters during the course of evolution. In this paper, we propose a new parameter adaptation technique for DE based on an ensemble approach and the harmony search algorithm (HS). In the proposed method, an ensemble of parameters is randomly sampled to form the initial harmony memory. The parameter ensemble evolves during the course of the optimization process by the HS algorithm. Each parameter combination in the harmony memory is evaluated by testing it on the DE population. The performance of the proposed adaptation method is evaluated using two recently proposed strategies (DE/current-to-pbest/bin and DE/current-to-gr_best/bin) as basic DE frameworks. Numerical results demonstrate the effectiveness of the proposed adaptation technique compared to the state-of-the-art DE-based algorithms on a set of challenging test problems (CEC 2005).
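A loose sketch of the idea in this record follows. It is not the authors' algorithm: the harmony memory here holds an ensemble of (F, CR) parameter combinations, each DE mutation samples one, and a successful combination is re-inserted in slightly perturbed form (a much simplified stand-in for the HS improvisation step). The DE variant is plain DE/rand/1/bin on a sphere function, and all constants are illustrative.

```python
import random

random.seed(3)
DIM, NP = 5, 20
sphere = lambda v: sum(x * x for x in v)

memory = [(random.uniform(0.3, 0.9), random.uniform(0.1, 0.9))
          for _ in range(5)]                 # harmony memory of (F, CR) pairs
pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(NP)]
init_best = min(sphere(p) for p in pop)

for _ in range(200):
    for i in range(NP):
        F, CR = random.choice(memory)        # sample one ensemble member
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        jr = random.randrange(DIM)           # forced crossover dimension
        trial = [a[d] + F * (b[d] - c[d])
                 if d == jr or random.random() < CR else pop[i][d]
                 for d in range(DIM)]
        if sphere(trial) < sphere(pop[i]):   # greedy selection
            pop[i] = trial
            # Reward the successful (F, CR): overwrite a random memory slot
            # with a perturbed copy, letting good settings spread.
            memory[random.randrange(len(memory))] = (
                min(1.0, max(0.1, F + random.gauss(0, 0.05))),
                min(1.0, max(0.0, CR + random.gauss(0, 0.05))),
            )

final_best = min(sphere(p) for p in pop)
assert final_best < init_best * 1e-2
```

The point of the ensemble is that no single (F, CR) pair needs to be right for the whole run; combinations that keep producing improvements come to dominate the memory.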

  17. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  18. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  19. Design of an Input-Parallel Output-Parallel LLC Resonant DC-DC Converter System for DC Microgrids

    Science.gov (United States)

    Juan, Y. L.; Chen, T. R.; Chang, H. M.; Wei, S. E.

    2017-11-01

    Compared with the centralized power system, a distributed modularized power system is composed of several power modules with lower power capacity which together provide enough capacity for the load demand. Therefore, the current stress of the power components in each module can be reduced, and the flexibility of system setup is also enhanced. However, the parallel-connected power modules in the conventional system are usually controlled to share the power flow equally, which results in lower efficiency at light load. In this study, a modular power conversion system for a DC microgrid is developed with a 48 V dc low-voltage input and a 380 V dc high-voltage output. In the developed system control strategy, the number of power modules enabled to share the power flow is decided according to the output power at low load demand. Finally, three 350 W power modules are constructed and parallel-connected to set up a modular power conversion system. The experimental results show that, compared with the conventional system, the efficiency of the developed power system in the light-loading condition is greatly improved. The modularized design of the power system can also decrease the ratio of power loss to system capacity.
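The load-dependent module-enabling strategy of this record can be sketched as a simple rule. The ratings below match the record's 3 × 350 W setup, but the enabling rule itself is an illustrative reconstruction, not the authors' controller: with N parallel modules of rating P each, enable only as many as the demand needs, instead of making all modules share a light load.

```python
import math

MODULE_RATING_W = 350.0
N_MODULES = 3

def modules_enabled(load_w):
    """Smallest number of modules whose combined rating covers the load."""
    if load_w <= 0:
        return 0
    return min(N_MODULES, math.ceil(load_w / MODULE_RATING_W))

# A light 200 W load runs on one module near its efficient operating point,
# rather than on three modules at roughly 19% load each.
assert modules_enabled(200.0) == 1
assert modules_enabled(700.0) == 2
assert modules_enabled(1000.0) == 3
```

Because converter efficiency typically sags at small fractions of rated power, concentrating a light load on fewer modules keeps each enabled module closer to its efficiency sweet spot.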

  20. Hybrid parallel strategy for the simulation of fast transient accidental situations at reactor scale

    International Nuclear Information System (INIS)

    Faucher, V.; Galon, P.; Beccantini, A.; Crouzet, F.; Debaud, F.; Gautier, T.

    2015-01-01

    Highlights: • Reference accidental situations for current and future reactors are considered. • They require the modeling of complex fluid–structure systems at full reactor scale. • EPX software computes the non-linear transient solution with explicit time stepping. • Focus on the parallel hybrid solver specific to the proposed coupled equations. - Abstract: This contribution is dedicated to the latest methodological developments implemented in the fast transient dynamics software EUROPLEXUS (EPX) to simulate the mechanical response of fully coupled fluid–structure systems to accidental situations to be considered at reactor scale, among which the Loss of Coolant Accident, the Core Disruptive Accident and the Hydrogen Explosion. Time integration is explicit and the search for reference solutions within the safety framework prevents any simplification and approximations in the coupled algorithm: for instance, all kinematic constraints are dealt with using Lagrange Multipliers, yielding a complex flow chart when non-permanent constraints such as unilateral contact or immersed fluid–structure boundaries are considered. The parallel acceleration of the solution process is then achieved through a hybrid approach, based on a weighted domain decomposition for distributed memory computing and the use of the KAAPI library for self-balanced shared memory processing inside subdomains
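
    The weighted domain decomposition mentioned above can be sketched, in simplified form, as greedy balancing of per-element costs across subdomains. This is an illustrative longest-processing-time heuristic, not the EPX implementation:

```python
import heapq

def weighted_partition(weights, n_subdomains):
    """Greedily assign weighted elements to the least-loaded subdomain
    (longest-processing-time heuristic), heaviest elements first."""
    # Min-heap of (current_load, subdomain_index) pairs.
    heap = [(0.0, d) for d in range(n_subdomains)]
    heapq.heapify(heap)
    assignment = [None] * len(weights)
    for elem in sorted(range(len(weights)), key=lambda i: -weights[i]):
        load, d = heapq.heappop(heap)       # least-loaded subdomain so far
        assignment[elem] = d
        heapq.heappush(heap, (load + weights[elem], d))
    return assignment
```

    Elements with higher cost (e.g. those carrying fluid-structure coupling constraints) are placed first, so each MPI process ends up with roughly the same total weight.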

  1. A hybrid parallel architecture for electrostatic interactions in the simulation of dissipative particle dynamics

    Science.gov (United States)

    Yang, Sheng-Chun; Lu, Zhong-Yuan; Qian, Hu-Jun; Wang, Yong-Lei; Han, Jie-Ping

    2017-11-01

    In this work, we upgraded the electrostatic interaction method of CU-ENUF (Yang et al., 2016), which first applied CUNFFT (nonequispaced fast Fourier transforms based on CUDA) to the reciprocal-space electrostatic computation, so that the electrostatic interaction is computed entirely on the GPU. The upgraded edition of CU-ENUF runs in a hybrid parallel fashion: the computation is first distributed across multiple computer nodes and then further parallelized on the GPU installed in each node. With this parallel strategy, the size of the simulation system is no longer restricted by the throughput of a single CPU or GPU. The most critical technical problem is how to parallelize CUNFFT within this strategy, which we solve through careful analysis of its basic principles together with several algorithmic techniques. Furthermore, the upgraded method is capable of computing electrostatic interactions for both atomistic molecular dynamics (MD) and dissipative particle dynamics (DPD). Finally, benchmarks conducted for validation and performance indicate that the upgraded method not only attains good precision with suitable parameters, but also provides an efficient way to compute electrostatic interactions for huge simulation systems. Program Files doi:http://dx.doi.org/10.17632/zncf24fhpv.1 Licensing provisions: GNU General Public License 3 (GPL) Programming language: C, C++, and CUDA C Supplementary material: The program is designed for effective electrostatic interactions of large-scale simulation systems and runs on computers equipped with NVIDIA GPUs. It has been tested on (a) a single computer node with an Intel(R) Core(TM) i7-3770 @ 3.40 GHz (CPU) and a GTX 980 Ti (GPU), and (b) MPI-parallel computer nodes with the same configuration.
Nature of problem: For molecular dynamics simulation, the electrostatic interaction is the most time-consuming computation because of its long-range nature and slow convergence in simulation space

  2. Parallel statistical image reconstruction for cone-beam x-ray CT on a shared memory computation platform

    International Nuclear Information System (INIS)

    Kole, J S; Beekman, F J

    2005-01-01

    Statistical reconstruction methods offer possibilities of improving image quality as compared to analytical methods, but current reconstruction times prohibit routine clinical applications. To reduce reconstruction times we have parallelized a statistical reconstruction algorithm for cone-beam x-ray CT, the ordered subset convex algorithm (OSC), and evaluated it on a shared memory computer. Two different parallelization strategies were developed: one that employs parallelism by computing the work for all projections within a subset in parallel, and one that divides the total volume into parts and processes the work for each sub-volume in parallel. Both methods are used to reconstruct a three-dimensional mathematical phantom on two different grid densities. The reconstructed images are binary identical to the result of the serial (non-parallelized) algorithm. The speed-up factor equals approximately 30 when using 32 to 40 processors, and scales almost linearly with the number of CPUs for both methods. The huge reduction in computation time allows us to apply statistical reconstruction to clinically relevant studies for the first time
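
    The two decompositions can be sketched with a toy additive update standing in for the real OSC update (all names and the contribution model are illustrative). Both strategies produce the same result and differ only in how the work is divided among workers:

```python
from concurrent.futures import ThreadPoolExecutor

def contribution(proj, voxel):
    # Hypothetical per-projection contribution to one voxel; a stand-in for
    # the model-based OSC update, which is not reproduced here.
    return (proj + 1) * 0.01 * (voxel + 1)

def projection_parallel(n_proj, n_vox, workers=4):
    """Strategy 1: each worker handles a share of the projections in a
    subset; the partial volumes are then summed."""
    def partial(projs):
        return [sum(contribution(p, v) for p in projs) for v in range(n_vox)]
    shares = [range(w, n_proj, workers) for w in range(workers)]
    with ThreadPoolExecutor(workers) as ex:
        parts = list(ex.map(partial, shares))
    return [sum(col) for col in zip(*parts)]

def volume_parallel(n_proj, n_vox, workers=4):
    """Strategy 2: each worker updates its own sub-volume using all
    projections; no summation across workers is needed."""
    def slab(voxels):
        return [(v, sum(contribution(p, v) for p in range(n_proj)))
                for v in voxels]
    shares = [range(w, n_vox, workers) for w in range(workers)]
    with ThreadPoolExecutor(workers) as ex:
        parts = list(ex.map(slab, shares))
    out = [0.0] * n_vox
    for part in parts:
        for v, val in part:
            out[v] = val
    return out
```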

  3. Iran's Sea Power Strategy: Goals and Evolution

    National Research Council Canada - National Science Library

    Walker, John

    1997-01-01

    This thesis examines the intent of Iran's sea power strategy using a multipart analysis including a historical review of the transition of Iran's naval power through the Iranian Revolution, Iran-Iraq...

  4. Five-year evolution of reperfusion strategies and early mortality in patients with ST-segment elevation myocardial infarction in France.

    Science.gov (United States)

    El Khoury, Carlos; Bochaton, Thomas; Flocard, Elodie; Serre, Patrice; Tomasevic, Danka; Mewton, Nathan; Bonnefoy-Cudraz, Eric

    2017-10-01

    To assess 5-year evolutions in reperfusion strategies and early mortality in patients with ST-segment elevation myocardial infarction. Using data from the French RESCUe network, we studied patients with ST-segment elevation myocardial infarction treated in mobile intensive care units between 2009 and 2013. Among 2418 patients (median age 62 years; 78.5% male), 2119 (87.6%) underwent primary percutaneous coronary intervention and 299 (12.4%) pre-hospital thrombolysis (94.0% of whom went on to undergo percutaneous coronary intervention). Use of primary percutaneous coronary intervention increased from 78.4% in 2009 to 95.9% in 2013 (P trend < 0.001), both in the <90-minute delay group (83.0% in 2009 to 97.7% in 2013; P trend < 0.001) and in the longer-delay group (34.1% in 2009 to 79.2% in 2013; P trend < 0.001). In-hospital (4-6%) and 30-day (6-8%) mortalities remained stable from 2009 to 2013. In the RESCUe network, the use of primary percutaneous coronary intervention increased from 2009 to 2013, in line with guidelines, but there was no evolution in early mortality.

  5. A Parallel Workload Model and its Implications for Processor Allocation

    Science.gov (United States)

    1996-11-01

    with SEV or AVG, both of which can tolerate c = 0.4-0.6 before their performance deteriorates significantly. On the other hand, Setia [10] has... Sanjeev K. Setia. The interaction between memory allocation and adaptive partitioning in message-passing multicomputers. In IPPS '95 Workshop on Job Scheduling Strategies for Parallel Processing, pages 89-99, 1995. [11] Sanjeev K. Setia and Satish K. Tripathi. An analysis of several processor

  6. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
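
    The scattered-decomposition idea can be sketched as a segmented sieve in which each worker marks composites in its own range using a shared set of base primes. This is an illustrative Python sketch, not the hypercube implementation described above:

```python
from concurrent.futures import ThreadPoolExecutor
from math import isqrt

def simple_sieve(limit):
    """Sequential Sieve of Eratosthenes up to `limit` (inclusive)."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for p in range(2, isqrt(limit) + 1):
        if flags[p]:
            flags[p * p :: p] = bytearray(len(flags[p * p :: p]))
    return [i for i, f in enumerate(flags) if f]

def sieve_segment(lo, hi, small_primes):
    """Mark composites in [lo, hi) using the shared base primes."""
    flags = bytearray([1]) * (hi - lo)
    for p in small_primes:
        # First multiple of p in the segment (never below p*p).
        start = max(p * p, ((lo + p - 1) // p) * p)
        for m in range(start, hi, p):
            flags[m - lo] = 0
    return [lo + i for i, f in enumerate(flags) if f and lo + i >= 2]

def parallel_sieve(n, workers=4):
    """Each worker independently sieves one scattered segment of [2, n]."""
    small = simple_sieve(isqrt(n))
    step = (n + workers) // workers
    ranges = [(lo, min(lo + step, n + 1)) for lo in range(2, n + 1, step)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(lambda r: sieve_segment(r[0], r[1], small), ranges)
    return [p for part in parts for p in part]
```

    Only the small table of base primes up to sqrt(N) is shared; each segment is sieved independently, which is what makes the decomposition scale across an ensemble of processors.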

  7. Parallel Simulation of Three-Dimensional Free Surface Fluid Flow Problems

    International Nuclear Information System (INIS)

    BAER, THOMAS A.; SACKINGER, PHILIP A.; SUBIA, SAMUEL R.

    1999-01-01

    Simulation of viscous three-dimensional fluid flow typically involves a large number of unknowns. When free surfaces are included, the number of unknowns increases dramatically. Consequently, this class of problem is an obvious application of parallel high performance computing. We describe parallel computation of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations and a ''pseudo-solid'' mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of unknowns. Other issues discussed are the proper constraints appearing along the dynamic contact line in three dimensions. Issues affecting efficient parallel simulations include problem decomposition to distribute computational work equally across an SPMD computer and determination of robust, scalable preconditioners for the distributed matrix systems that must be solved. Solution continuation strategies important for serial simulations have an enhanced relevance in a parallel computing environment due to the difficulty of solving large scale systems. Parallel computations will be demonstrated on an example taken from the coating flow industry: flow in the vicinity of a slot coater edge. This is a three-dimensional free surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another region. As such, a significant fraction of the computational time is devoted to processing boundary data. Discussion focuses on parallel speed-ups for fixed problem size, a class of problems of immediate practical importance.

  8. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  9. Analysis of ribosomal protein gene structures: implications for intron evolution.

    Directory of Open Access Journals (Sweden)

    2006-03-01

    Full Text Available Many spliceosomal introns exist in the eukaryotic nuclear genome. Despite much research, the evolution of spliceosomal introns remains poorly understood. In this paper, we tried to gain insights into intron evolution from a novel perspective by comparing the gene structures of cytoplasmic ribosomal proteins (CRPs) and mitochondrial ribosomal proteins (MRPs), which are held to be of archaeal and bacterial origin, respectively. We analyzed 25 homologous pairs of CRP and MRP genes that together had a total of 527 intron positions. We found that all 12 of the intron positions shared by CRP and MRP genes resulted from parallel intron gains and none could be considered to be "conserved," i.e., descendants of the same ancestor. This was supported further by the high frequency of proto-splice sites at these shared positions; proto-splice sites are proposed to be sites for intron insertion. Although we could not definitively disprove that spliceosomal introns were already present in the last universal common ancestor, our results lend more support to the idea that introns were gained late. At least, our results show that MRP genes were intronless at the time of endosymbiosis. The parallel intron gains between CRP and MRP genes accounted for 2.3% of total intron positions, which should provide a reliable estimate for future inferences of intron evolution.

  10. A commutation strategy for IGBT-based CSI-fed parallel resonant

    Indian Academy of Sciences (India)

    The dynamic behaviour of the switches decides the upper frequency limit for the application. IGBTs with the series diodes behave as uni-directional current switches with bi-directional voltage blocking capability. This feature should be taken into account to decide on an appropriate switching strategy for this converter ...

  11. ColDICE: A parallel Vlasov–Poisson solver using moving adaptive simplicial tessellation

    International Nuclear Information System (INIS)

    Sousbie, Thierry; Colombi, Stéphane

    2016-01-01

    Resolving numerically Vlasov–Poisson equations for initially cold systems can be reduced to following the evolution of a three-dimensional sheet evolving in six-dimensional phase-space. We describe a public parallel numerical algorithm consisting in representing the phase-space sheet with a conforming, self-adaptive simplicial tessellation of which the vertices follow the Lagrangian equations of motion. The algorithm is implemented both in six- and four-dimensional phase-space. Refinement of the tessellation mesh is performed using the bisection method and a local representation of the phase-space sheet at second order relying on additional tracers created when needed at runtime. In order to preserve in the best way the Hamiltonian nature of the system, refinement is anisotropic and constrained by measurements of local Poincaré invariants. Resolution of Poisson equation is performed using the fast Fourier method on a regular rectangular grid, similarly to particle in cells codes. To compute the density projected onto this grid, the intersection of the tessellation and the grid is calculated using the method of Franklin and Kankanhalli [65–67] generalised to linear order. As preliminary tests of the code, we study in four dimensional phase-space the evolution of an initially small patch in a chaotic potential and the cosmological collapse of a fluctuation composed of two sinusoidal waves. We also perform a “warm” dark matter simulation in six-dimensional phase-space that we use to check the parallel scaling of the code.

  12. ColDICE: A parallel Vlasov–Poisson solver using moving adaptive simplicial tessellation

    Energy Technology Data Exchange (ETDEWEB)

    Sousbie, Thierry, E-mail: tsousbie@gmail.com [Institut d' Astrophysique de Paris, CNRS UMR 7095 and UPMC, 98bis, bd Arago, F-75014 Paris (France); Department of Physics, The University of Tokyo, Tokyo 113-0033 (Japan); Research Center for the Early Universe, School of Science, The University of Tokyo, Tokyo 113-0033 (Japan); Colombi, Stéphane, E-mail: colombi@iap.fr [Institut d' Astrophysique de Paris, CNRS UMR 7095 and UPMC, 98bis, bd Arago, F-75014 Paris (France); Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan)

    2016-09-15

    Resolving numerically Vlasov–Poisson equations for initially cold systems can be reduced to following the evolution of a three-dimensional sheet evolving in six-dimensional phase-space. We describe a public parallel numerical algorithm consisting in representing the phase-space sheet with a conforming, self-adaptive simplicial tessellation of which the vertices follow the Lagrangian equations of motion. The algorithm is implemented both in six- and four-dimensional phase-space. Refinement of the tessellation mesh is performed using the bisection method and a local representation of the phase-space sheet at second order relying on additional tracers created when needed at runtime. In order to preserve in the best way the Hamiltonian nature of the system, refinement is anisotropic and constrained by measurements of local Poincaré invariants. Resolution of Poisson equation is performed using the fast Fourier method on a regular rectangular grid, similarly to particle in cells codes. To compute the density projected onto this grid, the intersection of the tessellation and the grid is calculated using the method of Franklin and Kankanhalli [65–67] generalised to linear order. As preliminary tests of the code, we study in four dimensional phase-space the evolution of an initially small patch in a chaotic potential and the cosmological collapse of a fluctuation composed of two sinusoidal waves. We also perform a “warm” dark matter simulation in six-dimensional phase-space that we use to check the parallel scaling of the code.

  13. Origin and evolution of chromosomal sperm proteins.

    Science.gov (United States)

    Eirín-López, José M; Ausió, Juan

    2009-10-01

    In the eukaryotic cell, DNA compaction is achieved through its interaction with histones, constituting a nucleoprotein complex called chromatin. During metazoan evolution, the different structural and functional constraints imposed on the somatic and germinal cell lines led to a unique process of specialization of the sperm nuclear basic proteins (SNBPs) associated with chromatin in male germ cells. SNBPs encompass a heterogeneous group of proteins which, since their discovery in the nineteenth century, have been studied extensively in different organisms. However, the origin and controversial mechanisms driving the evolution of this group of proteins has only recently started to be understood. Here, we analyze in detail the histone hypothesis for the vertical parallel evolution of SNBPs, involving a "vertical" transition from a histone to a protamine-like and finally protamine types (H --> PL --> P), the last one of which is present in the sperm of organisms at the uppermost tips of the phylogenetic tree. In particular, the common ancestry shared by the protamine-like (PL)- and protamine (P)-types with histone H1 is discussed within the context of the diverse structural and functional constraints acting upon these proteins during bilaterian evolution.

  14. Distinct neural and neuromuscular strategies underlie independent evolution of simplified advertisement calls.

    Science.gov (United States)

    Leininger, Elizabeth C; Kelley, Darcy B

    2013-04-07

    Independent or convergent evolution can underlie phenotypic similarity of derived behavioural characters. Determining the underlying neural and neuromuscular mechanisms sheds light on how these characters arose. One example of evolutionarily derived characters is a temporally simple advertisement call of male African clawed frogs (Xenopus) that arose at least twice independently from a more complex ancestral pattern. How did simplification occur in the vocal circuit? To distinguish shared from divergent mechanisms, we examined activity from the calling brain and vocal organ (larynx) in two species that independently evolved simplified calls. We find that each species uses distinct neural and neuromuscular strategies to produce the simplified calls. Isolated Xenopus borealis brains produce fictive vocal patterns that match temporal patterns of actual male calls; the larynx converts nerve activity faithfully into muscle contractions and single clicks. In contrast, fictive patterns from isolated Xenopus boumbaensis brains are short bursts of nerve activity; the isolated larynx requires stimulus bursts to produce a single click of sound. Thus, unlike X. borealis, the output of the X. boumbaensis hindbrain vocal pattern generator is an ancestral burst-type pattern, transformed by the larynx into single clicks. Temporally simple advertisement calls in genetically distant species of Xenopus have thus arisen independently via reconfigurations of central and peripheral vocal neuroeffectors.

  15. Data communications in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-11-12

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer composed of compute nodes that execute a parallel application, each compute node including application processors that execute the parallel application and at least one management processor dedicated to gathering information regarding data communications. The PAMI is composed of data communications endpoints, each endpoint composed of a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources. Embodiments function by gathering call site statistics describing data communications resulting from execution of data communications instructions and identifying, in dependence upon the call site statistics, a data communications algorithm for use in executing a data communications instruction at a call site in the parallel application.
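
    The call-site-statistics idea can be sketched as follows (the class, threshold, and algorithm names are hypothetical and do not reproduce the PAMI API): message sizes are recorded per call site, and an algorithm is then chosen from the observed history:

```python
from collections import defaultdict

EAGER_LIMIT = 4096  # hypothetical crossover threshold, in bytes

class CallSiteStats:
    """Toy per-call-site statistics gatherer and algorithm selector."""

    def __init__(self):
        self.sizes = defaultdict(list)

    def record(self, call_site, nbytes):
        """Record the size of one message sent from this call site."""
        self.sizes[call_site].append(nbytes)

    def choose_algorithm(self, call_site):
        """Pick eager for predominantly small messages, rendezvous otherwise."""
        sizes = self.sizes.get(call_site)
        if not sizes:
            return "eager"  # default before any history exists
        avg = sum(sizes) / len(sizes)
        return "eager" if avg <= EAGER_LIMIT else "rendezvous"
```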

  16. Evolution of symmetric reconnection layer in the presence of parallel shear flow

    Energy Technology Data Exchange (ETDEWEB)

    Lu Haoyu [Space Science Institute, School of Astronautics, Beihang University, Beijing 100191 (China); Sate Key Laboratory of Space Weather, Chinese Academy of Sciences, Beijing 100190 (China); Cao Jinbin [Space Science Institute, School of Astronautics, Beihang University, Beijing 100191 (China)

    2011-07-15

    The development of the structure of symmetric reconnection layer in the presence of a shear flow parallel to the antiparallel magnetic field component is studied by using a set of one-dimensional (1D) magnetohydrodynamic (MHD) equations. The Riemann problem is simulated through a second-order conservative TVD (total variation diminishing) scheme, in conjunction with Roe's averages for the Riemann problem. The simulation results indicate that besides the MHD shocks and expansion waves, there exist some new small-scale structures in the reconnection layer. For the case of zero initial guide magnetic field (i.e., B{sub y0} = 0), a pair of intermediate shock and slow shock (SS) is formed in the presence of the parallel shear flow. The critical velocity of initial shear flow V{sub zc} is just the Alfven velocity in the inflow region. As V{sub z{infinity}} increases to the value larger than V{sub zc}, a new slow expansion wave appears in the position of SS in the case V{sub z{infinity}} < V{sub zc}, and one of the current densities drops to zero. As plasma {beta} increases, the out-flow region is widened. For B{sub y0} {ne} 0, a pair of SSs and an additional pair of time-dependent intermediate shocks (TDISs) are found to be present. Similar to the case of B{sub y0} = 0, there exists a critical velocity of initial shear flow V{sub zc}. The value of V{sub zc} is, however, smaller than the Alfven velocity of the inflow region. As plasma {beta} increases, the velocities of SS and TDIS increase, and the out-flow region is widened. However, the velocity of downstream SS increases even faster, making the distance between SS and TDIS smaller. Consequently, the interaction between SS and TDIS in the case of high plasma {beta} influences the property of direction rotation of magnetic field across TDIS. Thereby, a wedge in the hodogram of tangential magnetic field comes into being. When {beta}{yields}{infinity}, TDISs disappear and the guide magnetic field becomes constant.

  17. Parallel Solution of Robust Nonlinear Model Predictive Control Problems in Batch Crystallization

    Directory of Open Access Journals (Sweden)

    Yankai Cao

    2016-06-01

    Full Text Available Representing the uncertainties with a set of scenarios, the optimization problem resulting from a robust nonlinear model predictive control (NMPC) strategy at each sampling instance can be viewed as a large-scale stochastic program. This paper solves these optimization problems using the parallel Schur complement method developed to solve stochastic programs on distributed and shared memory machines. The control strategy is illustrated with a case study of a multidimensional unseeded batch crystallization process. For this application, a robust NMPC based on min–max optimization guarantees satisfaction of all state and input constraints for a set of uncertainty realizations, and also provides better robust performance compared with open-loop optimal control, nominal NMPC, and robust NMPC minimizing the expected performance at each sampling instance. The performance of robust NMPC can be improved by generating optimization scenarios using Bayesian inference. With the efficient parallel solver, the solution time of one optimization problem is reduced from 6.7 min to 0.5 min, allowing for real-time application.
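
    The scenario-parallel structure of min-max NMPC can be sketched with a stand-in scalar subproblem in place of the per-scenario NLP solve (the cost model and names are hypothetical; the parallel Schur complement method itself is not reproduced here):

```python
from concurrent.futures import ThreadPoolExecutor

def scenario_cost(u, w):
    """Toy stage cost of applying control u under uncertainty realization w.
    Stands in for solving one scenario's subproblem."""
    return (u - w) ** 2

def min_max_control(candidates, scenarios, workers=4):
    """Pick the candidate control minimizing the worst-case cost across all
    scenarios; scenario costs are evaluated in parallel."""
    def worst_case(u):
        with ThreadPoolExecutor(max_workers=workers) as ex:
            costs = list(ex.map(lambda w: scenario_cost(u, w), scenarios))
        return max(costs)
    return min(candidates, key=worst_case)
```

    The per-scenario evaluations are independent, which is exactly the structure the Schur complement decomposition exploits on distributed and shared memory machines.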

  18. Limited angle tomographic breast imaging: A comparison of parallel beam and pinhole collimation

    International Nuclear Information System (INIS)

    Wessell, D.E.; Kadrmas, D.J.; Frey, E.C.

    1996-01-01

    Results from clinical trials have suggested no improvement in lesion detection with parallel hole SPECT scintimammography (SM) with Tc-99m over parallel hole planar SM. In this initial investigation, we have elucidated some of the unique requirements of SPECT SM. With these requirements in mind, we have begun to develop practical data acquisition and reconstruction strategies that can reduce image artifacts and improve image quality. In this paper we investigate limited angle orbits for both parallel hole and pinhole SPECT SM. Singular Value Decomposition (SVD) is used to analyze the artifacts associated with the limited angle orbits. Maximum likelihood expectation maximization (MLEM) reconstructions are then used to examine the effects of attenuation compensation on the quality of the reconstructed image. All simulations are performed using the 3D-MCAT breast phantom. The results of these simulation studies demonstrate that limited angle SPECT SM is feasible, that attenuation correction is needed for accurate reconstructions, and that pinhole SPECT SM may have an advantage over parallel hole SPECT SM in terms of improved image quality and reduced image artifacts

  19. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of... in the optimal O(psortN + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and psortN is the parallel I/O complexity of sorting N elements using P processors.

  20. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collections of networked computers.) Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  1. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
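
    Two of the patterns named above, reduction and prefix scan, can be sketched as follows. The loops here are written sequentially, but they mirror the combining order a parallel machine would use (pairwise tree for the reduction, Hillis-Steele strides for the scan):

```python
def tree_reduce(values, op):
    """Pairwise (tree) reduction: each round combines adjacent pairs, so all
    pairs in a round could run concurrently on a parallel machine."""
    vals = list(values)
    while len(vals) > 1:
        vals = [op(vals[i], vals[i + 1]) if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
    return vals[0]

def inclusive_scan(values, op):
    """Hillis-Steele style inclusive prefix scan: log2(n) rounds, each of
    which could update all positions concurrently."""
    vals = list(values)
    stride = 1
    while stride < len(vals):
        vals = [op(vals[i - stride], vals[i]) if i >= stride else vals[i]
                for i in range(len(vals))]
        stride *= 2
    return vals
```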

  2. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  3. From Massively Parallel Algorithms and Fluctuating Time Horizons to Nonequilibrium Surface Growth

    International Nuclear Information System (INIS)

    Korniss, G.; Toroczkai, Z.; Novotny, M. A.; Rikvold, P. A.

    2000-01-01

    We study the asymptotic scaling properties of a massively parallel algorithm for discrete-event simulations where the discrete events are Poisson arrivals. The evolution of the simulated time horizon is analogous to a nonequilibrium surface. Monte Carlo simulations and a coarse-grained approximation indicate that the macroscopic landscape in the steady state is governed by the Edwards-Wilkinson Hamiltonian. Since the efficiency of the algorithm corresponds to the density of local minima in the associated surface, our results imply that the algorithm is asymptotically scalable. (c) 2000 The American Physical Society
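
    The conservative update rule underlying such simulations can be sketched as follows (an illustrative ring of processing elements, not the simulation studied above): each PE carries a local simulated time and may advance only when it holds a local minimum relative to its neighbors, which preserves causality; it is the density of these local minima that sets the algorithm's efficiency:

```python
import random

def step(times, rng):
    """One sweep of the conservative update rule on a ring of PEs: a PE
    advances its local time only if it is a local minimum. Exponential
    increments stand in for Poisson interarrival times."""
    n = len(times)
    new = list(times)
    for i in range(n):
        left, right = times[(i - 1) % n], times[(i + 1) % n]
        if times[i] <= left and times[i] <= right:  # local minimum: may advance
            new[i] = times[i] + rng.expovariate(1.0)
    return new
```

    The evolving profile of local times is the "simulated time horizon" whose roughening the record analyzes as a nonequilibrium surface.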

  4. Chattering-Free Neuro-Sliding Mode Control of 2-DOF Planar Parallel Manipulators

    Directory of Open Access Journals (Sweden)

    Tien Dung Le

    2013-01-01

    Full Text Available This paper proposes a novel chattering-free neuro-sliding mode controller for the trajectory tracking control of two degrees of freedom (DOF parallel manipulators which have a complicated dynamic model, including modelling uncertainties, frictional uncertainties and external disturbances. A feedforward neural network (NN is combined with an error estimator to completely compensate the large nonlinear uncertainties and external disturbances of the parallel manipulators. The online weight tuning algorithms of the NN and the structure of the error estimator are derived with a strict theoretical stability proof based on the Lyapunov theorem. Neither the upper bound of the uncertainties nor the upper bound of the approximation errors needs to be known in advance to guarantee the stability of the closed-loop system. Example simulation results show the effectiveness of the proposed control strategy for the tracking control of a 2-DOF parallel manipulator: the controller is chattering-free, achieves very small tracking errors, and is robust against uncertainties and external disturbances.

  5. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    Science.gov (United States)

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
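
The parallel peak-detection idea can be illustrated with a per-sample predicate: each sample's test depends only on a fixed-size neighborhood, so all samples can be checked concurrently, one GPU thread per sample. The snippet below is a CPU sketch of that data-parallel pattern only, not the EC-PC detector or the compact operation from the toolbox:

```python
def detect_peaks(signal, threshold):
    """Flag sample i as a peak if it exceeds the threshold and is a
    local maximum versus its immediate neighbors. Each test touches
    only samples i-1, i, i+1, so every test is independent and the
    whole pass is embarrassingly parallel."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] > signal[i - 1]
            and signal[i] >= signal[i + 1]]
```

On a GPU, the list comprehension would become a map followed by a stream compaction (the "parallel compact operation" the abstract mentions) to gather the surviving indices.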

  6. Two-fluid and parallel compressibility effects in tokamak plasmas

    International Nuclear Information System (INIS)

    Sugiyama, L.E.; Park, W.

    1998-01-01

    The MHD, or single fluid, model for a plasma has long been known to provide a surprisingly good description of much of the observed nonlinear dynamics of confined plasmas, considering its simple nature compared to the complexity of the real system. On the other hand, some of the supposed agreement arises from the lack of the detailed measurements that are needed to distinguish MHD from more sophisticated models that incorporate slower time scale processes. At present, a number of factors combine to make models beyond MHD of practical interest. Computational considerations still favor fluid rather than particle models for description of the full plasma, and suggest an approach that starts from a set of fluid-like equations that extends MHD to slower time scales and more accurate parallel dynamics. This paper summarizes a set of two-fluid equations for toroidal (tokamak) geometry that has been developed and tested as the MH3D-T code [1] and some results from the model. The electrons and ions are described as separate fluids. The code and its original MHD version, MH3D [2], are the first numerical, initial value models in toroidal geometry that include the full 3D (fluid) compressibility and electromagnetic effects. Previous nonlinear MHD codes for toroidal geometry have, in practice, neglected the plasma density evolution, on the grounds that MHD plasmas are only weakly compressible and that the background density variation is weaker than the temperature variation. Analytically, the common use of toroidal plasma models based on aspect ratio expansion, such as reduced MHD, has reinforced this impression, since this ordering reduces plasma compressibility effects. For two-fluid plasmas, the density evolution cannot be neglected in principle, since it provides the basic driving energy for the diamagnetic drifts of the electrons and ions perpendicular to the magnetic field. It also strongly influences the parallel dynamics, in combination with the parallel thermal

  7. A Scalable Parallel PWTD-Accelerated SIE Solver for Analyzing Transient Scattering from Electrically Large Objects

    KAUST Repository

    Liu, Yang; Yucel, Abdulkadir; Bagci, Hakan; Michielssen, Eric

    2015-01-01

    of processors by leveraging two mechanisms: (i) a hierarchical parallelization strategy to evenly distribute the computation and memory loads at all levels of the PWTD tree among processors, and (ii) a novel asynchronous communication scheme to reduce the cost

  8. A possibility of parallel and anti-parallel diffraction measurements on ...

    Indian Academy of Sciences (India)

    However, a bent perfect crystal (BPC) monochromator at monochromatic focusing condition can provide a quite flat and equal resolution property at both parallel and anti-parallel positions and thus one can have a chance to use both sides for the diffraction experiment. From the data of the FWHM and the / measured ...

  9. Parallel processes: using motivational interviewing as an implementation coaching strategy.

    Science.gov (United States)

    Hettema, Jennifer E; Ernst, Denise; Williams, Jessica Roberts; Miller, Kristin J

    2014-07-01

    In addition to its clinical efficacy as a communication style for strengthening motivation and commitment to change, motivational interviewing (MI) has been hypothesized to be a potential tool for facilitating evidence-based practice adoption decisions. This paper reports on the rationale and content of MI-based implementation coaching Webinars that, as part of a larger active dissemination strategy, were found to be more effective than passive dissemination strategies at promoting adoption decisions among behavioral health and health providers and administrators. The Motivational Interviewing Treatment Integrity scale (MITI 3.1.1) was used to rate coaching Webinars from 17 community behavioral health organizations and 17 community health centers. The MITI coding system was found to be applicable to the coaching Webinars, and raters achieved high levels of agreement on global and behavior count measurements of fidelity to MI. Results revealed that implementation coaches maintained fidelity to the MI model, exceeding competency benchmarks for almost all measures. Findings suggest that it is feasible to implement MI as a coaching tool.

  10. Parallel arrangements of positive feedback loops limit cell-to-cell variability in differentiation.

    Science.gov (United States)

    Dey, Anupam; Barik, Debashis

    2017-01-01

    Cellular differentiations are often regulated by bistable switches resulting from specific arrangements of multiple positive feedback loops (PFL) fused to one another. Although bistability generates digital responses at the cellular level, stochasticity in chemical reactions causes population heterogeneity in terms of its differentiated states. We hypothesized that the specific arrangements of PFLs may have evolved to minimize the cellular heterogeneity in differentiation. In order to test this we investigated variability in cellular differentiation controlled either by parallel or serial arrangements of multiple PFLs having similar average properties under extrinsic and intrinsic noises. We find that motifs with PFLs fused in parallel to one another around a central regulator are less susceptible to noise than motifs with PFLs arranged serially. Our calculations suggest that the increased resistance to noise in parallel motifs originates from the lower sensitivity of the bifurcation points to extrinsic noise. Estimates of mean residence times indicate that the stable branches of the bifurcations are more robust to intrinsic noise in parallel motifs than in serial motifs. Model conclusions are consistent both in AND- and OR-gate input signal configurations and also with two different modeling strategies. Our investigations provide some insight into recent findings that differentiation of preadipocyte to mature adipocyte is controlled by a network of parallel PFLs.

  11. Governance of sustainable development: co-evolution of corporate and political strategies

    International Nuclear Information System (INIS)

    Bleischwitz, R.; College of Europe, Bruges

    2004-01-01

    This article proposes a policy framework for analysing corporate governance toward sustainable development. The aim is to set up a framework for analysing market evolution toward sustainability. In the first section, the paper briefly refers to recent theories about both market and government failures that express scepticism about the way that framework conditions for market actors are set. For this reason, multi-layered governance structures seem advantageous if new solutions are to be developed in policy areas concerned with long-term change and stepwise internalisation of externalities. The paper introduces the principle of regulated self-regulation. With regard to corporate actors' interests, it presents recent insights from theories about the knowledge-based firm, where the creation of new knowledge is based on the absorption of societal views. The result is greater scope for the endogenous internalisation of externalities, which leads to a variety of new and different corporate strategies. Because governance has to set incentives for quite a diverse set of actors in their daily operations, the paper finally discusses innovation-inducing regulation. In both areas, regulated self-regulation and innovation-inducing regulation, corporate and political governance co-evolve. The paper concludes that these co-evolutionary mechanisms may assume some of the stabilising and orientating functions previously exercised by framing activities of the state. In such a view, the government's main function is to facilitate learning processes, thus departing from the state's function as known from welfare economics. (author)

  12. Historical Evolution of Spatial Abilities

    Directory of Open Access Journals (Sweden)

    A. Ardila

    1993-01-01

    Full Text Available Historical evolution and cross-cultural differences in spatial abilities are analyzed. Spatial abilities have been found to be significantly associated with the complexity of geographical conditions and survival demands. Although impaired spatial cognition is found in cases of exclusively or predominantly right-hemisphere pathology, it is proposed that this asymmetry may depend on the degree of training in spatial abilities. It is further proposed that spatial cognition might have evolved in parallel with cultural evolution and environmental demands. Contemporary city humans might be using spatial abilities in some new, conceptual tasks that did not exist in prehistoric times: mathematics, reading, writing, mechanics, music, etc. Cross-cultural analysis of spatial abilities in different human groups, normalization of neuropsychological testing instruments, and clinical observations of spatial ability disturbances in people with different cultural backgrounds and various spatial requirements are required to construct a neuropsychological theory of brain organization of spatial cognition.

  13. Parallel Evolution under Chemotherapy Pressure in 29 Breast Cancer Cell Lines Results in Dissimilar Mechanisms of Resistance

    DEFF Research Database (Denmark)

    Tegze, Balint; Szallasi, Zoltan Imre; Haltrich, Iren

    2012-01-01

    Background: Developing chemotherapy resistant cell lines can help to identify markers of resistance. Instead of using a panel of highly heterogeneous cell lines, we assumed that truly robust and convergent pattern of resistance can be identified in multiple parallel engineered derivatives of only...

  14. The evolution of cooperation in the Prisoner's Dilemma and the Snowdrift game based on Particle Swarm Optimization

    Science.gov (United States)

    Wang, Xianjia; Lv, Shaojie; Quan, Ji

    2017-09-01

    This paper studies the evolution of cooperation in the Prisoner's Dilemma (PD) and the Snowdrift (SD) game on a square lattice. Each player interacting with their neighbors can adopt mixed strategies describing an individual's propensity to cooperate. Particle Swarm Optimization (PSO) is introduced into strategy update rules to investigate the evolution of cooperation. In the evolutionary game, each player updates its strategy according to the best strategy in all its past actions and the currently best strategy of its neighbors. The simulation results show that the PSO mechanism for strategy updating can promote the evolution of cooperation and sustain cooperation even under unfavorable conditions in both games. However, the spatial structure plays different roles in these two social dilemmas, which produces different characteristics in the macroscopic cooperation pattern. Our research provides insights into the evolution of cooperation in both the Prisoner's Dilemma and the Snowdrift game and may be helpful in understanding the ubiquity of cooperation in natural and social systems.
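
The strategy-update rule described above can be sketched as a PSO velocity update applied to a single player's propensity to cooperate, pulled toward the player's own best past strategy and the best strategy among its neighbors. The inertia and attraction coefficients below are common PSO defaults, not values taken from the paper:

```python
import random

def pso_strategy_update(p, v, p_best, n_best, w=0.7, c1=1.5, c2=1.5):
    """PSO-style update of a mixed strategy p (propensity to
    cooperate): the new velocity blends inertia (w), attraction to the
    player's own best past strategy (c1 term), and attraction to the
    best neighboring strategy (c2 term); p is clipped to [0, 1] so it
    remains a valid probability."""
    r1, r2 = random.random(), random.random()
    v = w * v + c1 * r1 * (p_best - p) + c2 * r2 * (n_best - p)
    p = min(1.0, max(0.0, p + v))
    return p, v
```

In a full lattice simulation this update would be applied to every site each generation, with `n_best` recomputed from that site's neighborhood.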

  15. Origins, evolution, and diversification of cleptoparasitic lineages in long-tongued bees.

    Science.gov (United States)

    Litman, Jessica R; Praz, Christophe J; Danforth, Bryan N; Griswold, Terry L; Cardinal, Sophie

    2013-10-01

    The evolution of parasitic behavior may catalyze the exploitation of new ecological niches yet also binds the fate of a parasite to that of its host. It is thus not clear whether evolutionary transitions from free-living organism to parasite lead to increased or decreased rates of diversification. We explore the evolution of brood parasitism in long-tongued bees and find decreased rates of diversification in eight of 10 brood parasitic clades. We propose a pathway for the evolution of brood parasitic strategy and find that a strategy in which a closed host nest cell is parasitized and the host offspring is killed by the adult parasite represents an obligate first step in the appearance of a brood parasitic lineage; this ultimately evolves into a strategy in which an open host cell is parasitized and the host offspring is killed by a specialized larval instar. The transition to parasitizing open nest cells expanded the range of potential hosts for brood parasitic bees and played a fundamental role in the patterns of diversification seen in brood parasitic clades. We address the prevalence of brood parasitic lineages in certain families of bees and examine the evolution of brood parasitism in other groups of organisms. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  16. Parallel implementation of the PHOENIX generalized stellar atmosphere program. II. Wavelength parallelization

    International Nuclear Information System (INIS)

    Baron, E.; Hauschildt, Peter H.

    1998-01-01

    We describe an important addition to the parallel implementation of our generalized nonlocal thermodynamic equilibrium (NLTE) stellar atmosphere and radiative transfer computer program PHOENIX. In a previous paper in this series we described data and task parallel algorithms we have developed for radiative transfer, spectral line opacity, and NLTE opacity and rate calculations. These algorithms divided the work spatially or by spectral lines, that is, distributing the radial zones, individual spectral lines, or characteristic rays among different processors and employ, in addition, task parallelism for logically independent functions (such as atomic and molecular line opacities). For finite, monotonic velocity fields, the radiative transfer equation is an initial value problem in wavelength, and hence each wavelength point depends upon the previous one. However, for sophisticated NLTE models of both static and moving atmospheres needed to accurately describe, e.g., novae and supernovae, the number of wavelength points is very large (200,000 - 300,000) and hence parallelization over wavelength can lead both to considerable speedup in calculation time and the ability to make use of the aggregate memory available on massively parallel supercomputers. Here, we describe an implementation of a pipelined design for the wavelength parallelization of PHOENIX, where the necessary data from the processor working on a previous wavelength point is sent to the processor working on the succeeding wavelength point as soon as it is known. Our implementation uses a MIMD design based on a relatively small number of standard message passing interface (MPI) library calls and is fully portable between serial and parallel computers. copyright 1998 The American Astronomical Society

  17. A parallel sweeping preconditioner for frequency-domain seismic wave propagation

    KAUST Repository

    Poulson, Jack

    2012-09-01

    We present a parallel implementation of Engquist and Ying's sweeping preconditioner, which exploits radiation boundary conditions in order to form an approximate block LDLT factorization of the Helmholtz operator with only O(N4/3) work and an application (and memory) cost of only O(N logN). The approximate factorization is then used as a preconditioner for GMRES, and we show that essentially O(1) iterations are required for convergence, even for the full SEG/EAGE over-thrust model at 30 Hz. In particular, we demonstrate the solution of said problem in a mere 15 minutes on 8192 cores of TACC's Lonestar, which may be the largest-scale 3D heterogeneous Helmholtz calculation to date. Generalizations of our parallel strategy are also briefly discussed for time-harmonic linear elasticity and Maxwell's equations.

  18. Parallel Evolution of Genes and Languages in the Caucasus Region

    Science.gov (United States)

    Balanovsky, Oleg; Dibirova, Khadizhat; Dybo, Anna; Mudrak, Oleg; Frolova, Svetlana; Pocheshkhova, Elvira; Haber, Marc; Platt, Daniel; Schurr, Theodore; Haak, Wolfgang; Kuznetsova, Marina; Radzhabov, Magomed; Balaganskaya, Olga; Romanov, Alexey; Zakharova, Tatiana; Soria Hernanz, David F.; Zalloua, Pierre; Koshel, Sergey; Ruhlen, Merritt; Renfrew, Colin; Wells, R. Spencer; Tyler-Smith, Chris; Balanovska, Elena

    2012-01-01

    We analyzed 40 SNP and 19 STR Y-chromosomal markers in a large sample of 1,525 indigenous individuals from 14 populations in the Caucasus and 254 additional individuals representing potential source populations. We also employed a lexicostatistical approach to reconstruct the history of the languages of the North Caucasian family spoken by the Caucasus populations. We found a different major haplogroup to be prevalent in each of four sets of populations that occupy distinct geographic regions and belong to different linguistic branches. The haplogroup frequencies correlated with geography and, even more strongly, with language. Within haplogroups, a number of haplotype clusters were shown to be specific to individual populations and languages. The data suggested a direct origin of Caucasus male lineages from the Near East, followed by high levels of isolation, differentiation and genetic drift in situ. Comparison of genetic and linguistic reconstructions covering the last few millennia showed striking correspondences between the topology and dates of the respective gene and language trees, and with documented historical events. Overall, in the Caucasus region, unmatched levels of gene-language co-evolution occurred within geographically isolated populations, probably due to its mountainous terrain. PMID:21571925

  19. Domain-Specific Acceleration and Auto-Parallelization of Legacy Scientific Code in FORTRAN 77 using Source-to-Source Compilation

    OpenAIRE

    Vanderbauwhede, Wim; Davidson, Gavin

    2017-01-01

    Massively parallel accelerators such as GPGPUs, manycores and FPGAs represent a powerful and affordable tool for scientists who look to speed up simulations of complex systems. However, porting code to such devices requires a detailed understanding of heterogeneous programming tools and effective strategies for parallelization. In this paper we present a source to source compilation approach with whole-program analysis to automatically transform single-threaded FORTRAN 77 legacy code into Ope...

  20. Parameter estimation of fractional-order chaotic systems by using quantum parallel particle swarm optimization algorithm.

    Directory of Open Access Journals (Sweden)

    Yu Huang

    Full Text Available Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.

  1. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVIDIA's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
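
The seed-selection step is easy to state in serial form, which also shows where the parallelism lives: the nearest-center distance computation is independent per point. Below is a minimal pure-Python sketch of k-means++ seeding, not the released C++ code:

```python
import random

def kmeans_pp_seeds(points, k, rng=random.Random(0)):
    """k-means++ seed selection: the first center is chosen uniformly;
    each subsequent center is drawn with probability proportional to
    the squared distance to the nearest center chosen so far. The
    per-point distance evaluations in the inner loop are independent,
    which is what the GPU/multicore/XMT ports parallelize."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        # Squared distance from each point to its nearest center.
        d2 = [min(sum((a - b) ** 2 for a, b in zip(p, c))
                  for c in centers) for p in points]
        # Weighted draw proportional to d2 (roulette-wheel selection).
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers
```

After seeding, the centers would be handed to an ordinary Lloyd's-iteration k-means loop.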

  2. The evolution of competitive settlement strategies in Fijian prehistory : results of excavations and radiometric dating

    International Nuclear Information System (INIS)

    Field, J.S.

    2003-01-01

    A series of excavations were completed between June 2001 and March 2002 in the Fiji Islands. The goal of this research was to investigate the evolution of competitive settlement strategies in Fijian prehistory from an archaeological and evolutionary ecological perspective. Twelve sites were excavated and mapped in the Sigatoka Valley, located in the southwestern corner of the main island of Viti Levu. Excavations were focused on determining the chronology of fortifications in the region, and the collected samples were compared to expectations based on GIS-based analyses of land productivity and historical documents pertaining to late-period warfare. Over four hundred archaeological sites have been identified in the Sigatoka Valley, and of these roughly one-third are purely defensive in configuration, with no immediate access to water or arable land. The Waikato Archaeological Dating Fund provided four radiometric dates for three defensive sites, and one site associated with a production area. (author). 6 refs., 1 fig

  3. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
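
The core of a SENSE-style reconstruction can be shown concretely for an acceleration factor of R=2 with two coils: each aliased pixel value is a coil-sensitivity-weighted sum of two superimposed true pixel values, so recovering them is a small per-pixel linear solve. This is a toy real-valued sketch; clinical reconstructions use complex sensitivities, more coils, and regularization:

```python
def sense_unfold(m, S):
    """SENSE unfolding for R=2 with two coils. The two measured
    aliased values m = (m1, m2) satisfy m = S @ rho, where row i of S
    holds coil i's sensitivity at the two superimposed pixel
    locations. Solving the 2x2 system by Cramer's rule recovers the
    two true pixel values rho = (rho1, rho2)."""
    (a, b), (c, d) = S
    det = a * d - b * c  # ill-conditioned det is what the g-factor penalizes
    rho1 = (d * m[0] - b * m[1]) / det
    rho2 = (a * m[1] - c * m[0]) / det
    return rho1, rho2
```

The conditioning of this little system is exactly what the g-factor mentioned in the abstract measures: nearly parallel sensitivity rows amplify noise in the unfolded pixels.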

  4. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    Full Text Available To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that probably are neither limited to this particular application nor do they seem likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

  5. Parallel implementation of a dynamic unstructured chimera method in the DLR finite volume TAU-code

    Energy Technology Data Exchange (ETDEWEB)

    Madrane, A.; Raichle, A.; Stuermer, A. [German Aerospace Center, DLR, Numerical Methods, Inst. of Aerodynamics and Flow Technology, Braunschweig (Germany)]. E-mail: aziz.madrane@dlr.de

    2004-07-01

    Aerodynamic problems involving moving geometries have many applications, including store separation, a high-speed train entering a tunnel, simulation of full helicopter configurations, and fast maneuverability. The overset grid method offers a way to compute such cases. The solution process uses a grid system that discretizes the problem domain with separately generated but overlapping unstructured grids that update and exchange boundary information through interpolation. However, such computations are complicated and time consuming. Parallel computing offers a very effective way to improve productivity in computational fluid dynamics (CFD). The purpose of this study is therefore to develop an efficient parallel computation algorithm for analyzing the flowfield of complex geometries using the overset grid method. The strategy adopted in parallelizing the overset grid method, including the data structures and communication, is described. Numerical results are presented to demonstrate the efficiency of the resulting parallel overset grid method. (author)

  6. Parallel implementation of a dynamic unstructured chimera method in the DLR finite volume TAU-code

    International Nuclear Information System (INIS)

    Madrane, A.; Raichle, A.; Stuermer, A.

    2004-01-01

    Aerodynamic problems involving moving geometries have many applications, including store separation, a high-speed train entering a tunnel, simulation of full helicopter configurations, and fast maneuverability. The overset grid method offers a way to compute such cases. The solution process uses a grid system that discretizes the problem domain with separately generated but overlapping unstructured grids that update and exchange boundary information through interpolation. However, such computations are complicated and time consuming. Parallel computing offers a very effective way to improve productivity in computational fluid dynamics (CFD). The purpose of this study is therefore to develop an efficient parallel computation algorithm for analyzing the flowfield of complex geometries using the overset grid method. The strategy adopted in parallelizing the overset grid method, including the data structures and communication, is described. Numerical results are presented to demonstrate the efficiency of the resulting parallel overset grid method. (author)

  7. An expert system for automatic mesh generation for Sn particle transport simulation in parallel environment

    International Nuclear Information System (INIS)

    Apisit, Patchimpattapong; Alireza, Haghighat; Shedlock, D.

    2003-01-01

    An expert system for generating an effective mesh distribution for the SN particle transport simulation has been developed. This expert system consists of two main parts: 1) an algorithm for generating an effective mesh distribution in a serial environment, and 2) an algorithm for inference of an effective domain decomposition strategy for parallel computing. For the first part, the algorithm prepares an effective mesh distribution considering problem physics and the spatial differencing scheme. For the second part, the algorithm determines a parallel-performance-index (PPI), which is defined as the ratio of the granularity to the degree-of-coupling. The parallel-performance-index provides expected performance of an algorithm depending on computing environment and resources. A large index indicates a high granularity algorithm with relatively low coupling among processors. This expert system has been successfully tested within the PENTRAN (Parallel Environment Neutral-Particle Transport) code system for simulating real-life shielding problems. (authors)
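
The parallel-performance index described in the abstract is a simple ratio, and a direct transcription makes the trade-off explicit (the guard against non-positive coupling is an added assumption, not part of the paper's definition):

```python
def parallel_performance_index(granularity, coupling):
    """Parallel-performance index (PPI) as defined in the abstract:
    the ratio of granularity (local work per processor) to the
    degree-of-coupling (communication among processors). A large PPI
    indicates a high-granularity decomposition with relatively little
    inter-processor coupling, i.e. one expected to scale well."""
    if coupling <= 0:
        raise ValueError("degree-of-coupling must be positive")
    return granularity / coupling
```

For a fixed amount of local work, increasing the coupling (more communication) lowers the index, matching the abstract's interpretation.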

  8. An expert system for automatic mesh generation for Sn particle transport simulation in parallel environment

    Energy Technology Data Exchange (ETDEWEB)

    Apisit, Patchimpattapong [Electricity Generating Authority of Thailand, Office of Corporate Planning, Bangkruai, Nonthaburi (Thailand); Alireza, Haghighat; Shedlock, D. [Florida Univ., Department of Nuclear and Radiological Engineering, Gainesville, FL (United States)

    2003-07-01

    An expert system for generating an effective mesh distribution for the SN particle transport simulation has been developed. This expert system consists of two main parts: 1) an algorithm for generating an effective mesh distribution in a serial environment, and 2) an algorithm for inference of an effective domain decomposition strategy for parallel computing. For the first part, the algorithm prepares an effective mesh distribution considering problem physics and the spatial differencing scheme. For the second part, the algorithm determines a parallel-performance-index (PPI), which is defined as the ratio of the granularity to the degree-of-coupling. The parallel-performance-index provides expected performance of an algorithm depending on computing environment and resources. A large index indicates a high granularity algorithm with relatively low coupling among processors. This expert system has been successfully tested within the PENTRAN (Parallel Environment Neutral-Particle Transport) code system for simulating real-life shielding problems. (authors)

  9. Non-Cartesian parallel imaging reconstruction.

    Science.gov (United States)

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.
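
    The normal-equation structure behind CG SENSE can be sketched compactly. In this toy, a Cartesian undersampling mask stands in for the non-Cartesian trajectory (a real implementation would replace the masked FFT with a gridding/NUFFT operator), and the random coil sensitivities are assumptions for illustration, not realistic coil maps:

```python
import numpy as np

# Conceptual CG SENSE sketch: solve the coil-encoded least-squares problem
# with conjugate gradients on the normal equations A^H A x = A^H y.

rng = np.random.default_rng(0)
N, ncoils = 16, 8
x_true = rng.standard_normal((N, N))                 # "image" to recover
coils = rng.standard_normal((ncoils, N, N)) + 1.0    # assumed coil sensitivities
mask = rng.random((N, N)) < 0.6                      # undersampling pattern

def A(x):        # encoding: coil weighting -> FFT -> undersampling
    return np.stack([mask * np.fft.fft2(c * x) for c in coils])

def AH(y):       # adjoint: zero-filled inverse FFT -> coil combination
    return sum(np.conj(c) * np.fft.ifft2(mask * yk) for c, yk in zip(coils, y))

y = A(x_true)    # simulated undersampled multicoil data

x = np.zeros((N, N), dtype=complex)
r = AH(y)        # residual of the normal equations at x = 0
p = r.copy()
rs = np.vdot(r, r)
for _ in range(100):
    Ap = AH(A(p))
    alpha = rs / np.vdot(p, Ap)
    x = x + alpha * p
    r = r - alpha * Ap
    rs_new = np.vdot(r, r)
    p = r + (rs_new / rs) * p
    rs = rs_new

rel_err = np.linalg.norm(x.real - x_true) / np.linalg.norm(x_true)
```

Because the multicoil data oversample the image, the iteration recovers the object despite the aliasing a zero-filled reconstruction would show.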

  10. An Introduction to Parallelism, Concurrency and Acceleration (1/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Concurrency and parallelism are firm elements of any modern computing infrastructure, made even more prominent by the emergence of accelerators. These lectures offer an introduction to these important concepts. We will begin with a brief refresher of recent hardware offerings to modern-day programmers. We will then open the main discussion with an overview of the laws and practical aspects of scalability. Key parallelism data structures, patterns and algorithms will be shown. The main threats to scalability and mitigation strategies will be discussed in the context of real-life optimization problems. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP and Google), as well as international research institutes, such as EPFL. Current...

  11. Kinematic analysis of parallel manipulators by algebraic screw theory

    CERN Document Server

    Gallardo-Alvarado, Jaime

    2016-01-01

    This book reviews the fundamentals of screw theory concerned with velocity analysis of rigid-bodies, confirmed with detailed and explicit proofs. The author additionally investigates acceleration, jerk, and hyper-jerk analyses of rigid-bodies following the trend of the velocity analysis. With the material provided in this book, readers can extend the theory of screws into the kinematics of optional order of rigid-bodies. Illustrative examples and exercises to reinforce learning are provided. Of particular note, the kinematics of emblematic parallel manipulators, such as the Delta robot as well as the original Gough and Stewart platforms are revisited applying, in addition to the theory of screws, new methods devoted to simplify the corresponding forward-displacement analysis, a challenging task for most parallel manipulators. Stands as the only book devoted to the acceleration, jerk and hyper-jerk (snap) analyses of rigid-body by means of screw theory; Provides new strategies to simplify the forward kinematic...

  12. Influence of Paralleling Dies and Paralleling Half-Bridges on Transient Current Distribution in Multichip Power Modules

    DEFF Research Database (Denmark)

    Li, Helong; Zhou, Wei; Wang, Xiongfei

    2018-01-01

    This paper addresses the transient current distribution in the multichip half-bridge power modules, where two types of paralleling connections with different current commutation mechanisms are considered: paralleling dies and paralleling half-bridges. It reveals that with paralleling dies, both t...

  13. CMS multicore scheduling strategy

    International Nuclear Information System (INIS)

    Yzquierdo, Antonio Pérez-Calero; Hernández, Jose; Holzman, Burt; Majewski, Krista; McCrea, Alison

    2014-01-01

    In the coming years, processor architectures based on much larger numbers of cores will most likely be the model that continues 'Moore's Law'-style throughput gains. This not only means many more LHC Run 1 era monolithic applications running as parallel jobs, but the memory requirements of these processes also push worker-node architectures to the limit. One solution is parallelizing the application itself, through forking and memory sharing or through threaded frameworks. CMS is following all of these approaches and has a comprehensive strategy to schedule multicore jobs on the GRID based on the glideinWMS submission infrastructure. The main component of the scheduling strategy, a pilot-based model with dynamic partitioning of resources that allows the transition to multicore or whole-node scheduling while still supporting single-core jobs, is described. This contribution also presents the experience gained with the proposed multicore scheduling schema and gives an outlook on further developments working towards the restart of the LHC in 2015.

  14. Novel Random Mutagenesis Method for Directed Evolution.

    Science.gov (United States)

    Feng, Hong; Wang, Hai-Yan; Zhao, Hong-Yan

    2017-01-01

    Directed evolution is a powerful strategy for gene mutagenesis, and has been used for protein engineering both in scientific research and in the biotechnology industry. The routine method for directed evolution was developed by Stemmer in 1994 (Stemmer, Proc Natl Acad Sci USA 91, 10747-10751, 1994; Stemmer, Nature 370, 389-391, 1994). Since then, various methods have been introduced, each of which has advantages and limitations depending upon the targeted genes and procedure. In this chapter, a novel alternative directed evolution method which combines mutagenesis PCR with dITP and fragmentation by endonuclease V is described. The kanamycin resistance gene is used as a reporter gene to verify the novel method for directed evolution. This method for directed evolution has been demonstrated to be efficient, reproducible, and easy to manipulate in practice.

  15. Stochastic Optimal Control of Parallel Hybrid Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Feiyan Qin

    2017-02-01

    Full Text Available Energy management strategies (EMSs) in hybrid electric vehicles (HEVs) are highly related to the fuel economy and emission performances. However, EMS design constitutes a challenging problem due to the complex structure of a HEV and the unknown or partially known driving cycles. To meet this problem, this paper adopts a stochastic dynamic programming (SDP) method for the EMS of a specially designed vehicle, a pre-transmission single-shaft torque-coupling parallel HEV. In this parallel HEV, the auto clutch output is connected to the transmission input through an electric motor, which enables efficient motor-assist operation. In this EMS, the driver's demanded torque is modeled as a one-state Markov process to represent the uncertainty of future driving situations. The obtained EMS has been evaluated with ADVISOR2002 over two standard government drive cycles and a self-defined one, and compared with a dynamic programming (DP) EMS and a rule-based one. Simulation results have shown the real-time performance of the proposed approach, and a potential vehicle performance improvement relative to the rule-based EMS.
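
    The core of such an SDP-based EMS can be sketched with value iteration over a small grid. The transition matrix for the demanded-torque Markov chain, the fuel model, and the SOC penalty below are all illustrative assumptions, not the paper's calibrated vehicle model:

```python
import numpy as np

# Toy stochastic dynamic programming (SDP) sketch in the spirit of the EMS
# above: demanded torque follows a one-state Markov chain, and value
# iteration finds the expected-cost-minimizing motor/engine torque split.

torque_levels = np.array([0.0, 50.0, 100.0])        # demanded torque (Nm), assumed grid
P = np.array([[0.7, 0.2, 0.1],                      # assumed Markov transition matrix
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
soc_levels = np.linspace(0.3, 0.8, 6)               # battery state-of-charge grid
actions = np.linspace(0.0, 1.0, 5)                  # motor share of the demand
gamma = 0.95                                        # discount factor

def stage_cost(torque, soc, motor_share):
    fuel = 0.01 * torque * (1.0 - motor_share)      # assumed linear fuel model
    penalty = 5.0 * max(0.0, 0.4 - soc)             # discourage SOC depletion
    return fuel + penalty

def next_soc_index(soc, torque, motor_share):
    drained = soc - 0.002 * torque * motor_share    # assumed battery drain
    drained = np.clip(drained, soc_levels[0], soc_levels[-1])
    return int(np.argmin(np.abs(soc_levels - drained)))

V = np.zeros((len(soc_levels), len(torque_levels)))  # cost-to-go table
for _ in range(200):                                 # value iteration
    V_new = np.empty_like(V)
    for i, soc in enumerate(soc_levels):
        for j, tq in enumerate(torque_levels):
            V_new[i, j] = min(stage_cost(tq, soc, a)
                              + gamma * (P[j] @ V[next_soc_index(soc, tq, a)])
                              for a in actions)
    V = V_new
```

The optimal torque split at each state is the argmin of the same expression, which is what an online controller would look up.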

  16. Open-circuit fault detection and tolerant operation for a parallel-connected SAB DC-DC converter

    DEFF Research Database (Denmark)

    Park, Kiwoo; Chen, Zhe

    2014-01-01

    This paper presents an open-circuit fault detection method and its tolerant control strategy for a Parallel-Connected Single Active Bridge (PCSAB) dc-dc converter. The structural and operational characteristics of the PCSAB converter lead to several advantages especially for high power applicatio...

  17. Time evolution of tokamak states with flow

    International Nuclear Information System (INIS)

    Kerner, W.; Weitzner, H.

    1985-12-01

    The general dissipative Braginskii single-fluid model is applied to simulate tokamak transport. An expansion with respect to ε = (ω_i τ_i)^(-1), the factor by which perpendicular and parallel transport coefficients differ, yields a numerically tractable scheme. The resulting 1-1/2 D procedure requires computation of 2D toroidal equilibria with flow together with the solution of a system of ordinary 1D flux-averaged equations for the time evolution of the profiles. 13 refs

  18. Adaptive evolution of cooperation through Darwinian dynamics in Public Goods games.

    Science.gov (United States)

    Deng, Kuiying; Chu, Tianguang

    2011-01-01

    The linear or threshold Public Goods game (PGG) is extensively accepted as a paradigmatic model to approach the evolution of cooperation in social dilemmas. Here we explore the significant effect of nonlinearity of the structures of public goods on the evolution of cooperation within the well-mixed population by adopting Darwinian dynamics, which simultaneously consider the evolution of populations and strategies on a continuous adaptive landscape, and extend the concept of evolutionarily stable strategy (ESS) as a coalition of strategies that is both convergent-stable and resistant to invasion. Results show (i) that in the linear PGG contributing nothing is an ESS, which contradicts experimental data, (ii) that in the threshold PGG contributing the threshold value is a fragile ESS, which cannot resist the invasion of contributing nothing, and (iii) that there exists a robust ESS of contributing more than half in the sigmoid PGG if the return rate is relatively high. This work reveals the significant effect of the nonlinearity of the structures of public goods on the evolution of cooperation, and suggests that, compared with the linear or threshold PGG, the sigmoid PGG might be a more proper model for the evolution of cooperation within the well-mixed population.
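
    The contrast between the linear and sigmoid games can be illustrated numerically. The payoff form, group size, and parameters below are assumptions chosen for illustration rather than the paper's exact Darwinian-dynamics model:

```python
import numpy as np

# Numerical illustration of linear vs. sigmoid Public Goods payoffs in a
# well-mixed group. All functional forms and parameters are assumed.

def payoff(x_focal, x_others, benefit, r, cost=1.0, n=5):
    """Focal individual's payoff: shared group benefit minus private cost."""
    total = x_focal + np.sum(x_others)
    return r * benefit(total) / n - cost * x_focal

linear = lambda total: total
sigmoid = lambda total: 1.0 / (1.0 + np.exp(-10.0 * (total - 2.5)))

others = np.full(4, 0.5)        # the rest of the group contributes 0.5 each

# Linear PGG: since r/n < 1, contributing nothing always pays better for
# the individual, matching result (i) that zero contribution is the ESS.
p_defect = payoff(0.0, others, linear, r=3.0)
p_coop = payoff(1.0, others, linear, r=3.0)

# Sigmoid PGG with a high return rate: an intermediate contribution that
# pushes the group past the inflection point beats contributing nothing,
# in line with result (iii).
p_zero = payoff(0.0, others, sigmoid, r=20.0)
p_mid = payoff(0.6, others, sigmoid, r=20.0)
```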

  19. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform for a high level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  20. Parallel and series 4 switch Z-source converters in induction motor drives

    DEFF Research Database (Denmark)

    Baba, Mircea; Lascu, Cristian; Boldea, Ion

    2014-01-01

    This paper presents a control strategy for four switch three-phase Z-source Inverter with parallel and series Z-source network fed 0.5 kW induction motor drive with V/f control and the algorithm to control the dc boost, split capacitor voltage balance and the ac output voltage. The proposed control...... algorithm is validated through simulation and experiment....

  1. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  2. Data communications in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-29

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a data communications instruction, the instruction characterized by an instruction type, the instruction specifying a transmission of transfer data from the origin endpoint to a target endpoint and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint.
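
    The endpoint specification described above can be sketched as a small data model. All names and fields below are illustrative stand-ins, not the actual PAMI API:

```python
from dataclasses import dataclass

# Minimal data-model sketch of the record's endpoint description: each
# PAMI endpoint bundles a client, a context, and a task that together
# parameterize data communications for one thread of execution.

@dataclass(frozen=True)
class Endpoint:
    client: str      # the application client owning the endpoint (assumed type)
    context: int     # communication context, e.g. a progress-engine slot
    task: int        # task id, i.e. the rank of the compute-node process

@dataclass
class Instruction:
    kind: str        # instruction type, e.g. "send", "put", "get"
    origin: Endpoint
    target: Endpoint
    payload: bytes

def transmit(instr, network):
    """Carry out the transfer from origin to target per the instruction type."""
    network.append((instr.kind, instr.origin.task, instr.target.task, instr.payload))

network = []
origin = Endpoint(client="app", context=0, task=0)
target = Endpoint(client="app", context=0, task=1)
transmit(Instruction("send", origin, target, b"halo-data"), network)
```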

  3. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  4. Evolution Engines and Artificial Intelligence

    Science.gov (United States)

    Hemker, Andreas; Becks, Karl-Heinz

    In the last years artificial intelligence has achieved great successes, mainly in the field of expert systems and neural networks. Nevertheless the road to truly intelligent systems is still obscured. Artificial intelligence systems with a broad range of cognitive abilities are not within sight. The limited competence of such systems (brittleness) is identified as a consequence of the top-down design process. The evolution principle of nature on the other hand shows an alternative and elegant way to build intelligent systems. We propose to take an evolution engine as the driving force for the bottom-up development of knowledge bases and for the optimization of the problem-solving process. A novel data analysis system for the high energy physics experiment DELPHI at CERN shows the practical relevance of this idea. The system is able to reconstruct the physical processes after the collision of particles by making use of the underlying standard model of elementary particle physics. The evolution engine acts as a global controller of a population of inference engines working on the reconstruction task. By implementing the system on the Connection Machine (Model CM-2) we use the full advantage of the inherent parallelization potential of the evolutionary approach.

  5. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  6. 3D Hyperpolarized C-13 EPI with Calibrationless Parallel Imaging

    DEFF Research Database (Denmark)

    Gordon, Jeremy W.; Hansen, Rie Beck; Shin, Peter J.

    2018-01-01

    With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and tem...... strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated its application in a human study of [1-13C]pyruvate metabolism....

  7. Automated Long-Term Monitoring of Parallel Microfluidic Operations Applying a Machine Vision-Assisted Positioning Method

    Science.gov (United States)

    Yip, Hon Ming; Li, John C. S.; Cui, Xin; Gao, Qiannan; Leung, Chi Chiu

    2014-01-01

    As microfluidics has been applied extensively in many cell and biochemical applications, monitoring the related processes is an important requirement. In this work, we design and fabricate a high-throughput microfluidic device which contains 32 microchambers to perform automated parallel microfluidic operations and monitoring on an automated stage of a microscope. Images are captured at multiple spots on the device during the operations for monitoring samples in microchambers in parallel; yet the device positions may vary at different time points throughout operations as the device moves back and forth on a motorized microscopic stage. Here, we report an image-based positioning strategy to realign the chamber position before every recording of microscopic image. We fabricate alignment marks at defined locations next to the chambers in the microfluidic device as reference positions. We also develop image processing algorithms to recognize the chamber positions in real-time, followed by realigning the chambers to their preset positions in the captured images. We perform experiments to validate and characterize the device functionality and the automated realignment operation. Together, this microfluidic realignment strategy can be a platform technology to achieve precise positioning of multiple chambers for general microfluidic applications requiring long-term parallel monitoring of cell and biochemical activities. PMID:25133248
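
    The realignment idea, locating a fiducial mark in each captured image and computing the offset back to the preset position, can be sketched with phase correlation on synthetic data. This is an assumption-laden illustration, not the authors' processing pipeline:

```python
import numpy as np

# Sketch of the image-based realignment step: find the drift of a bright
# alignment mark by phase correlation; the stage correction is the negated
# shift. Synthetic data with a pure circular shift keeps the example exact.

rng = np.random.default_rng(1)
frame = rng.random((64, 64)) * 0.1         # background texture
frame[5:9, 5:9] = 1.0                      # alignment mark at the preset spot
true_drift = (7, -4)                       # the stage drifted by this much
observed = np.roll(np.roll(frame, true_drift[0], axis=0), true_drift[1], axis=1)

def estimate_shift(reference, image):
    """Phase correlation: the normalized cross-power spectrum peaks at the shift."""
    f_ref, f_img = np.fft.fft2(reference), np.fft.fft2(image)
    cross = f_img * np.conj(f_ref)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak coordinates to signed shifts.
    return tuple(p if p < s // 2 else p - s for p, s in zip(peak, corr.shape))

dy, dx = estimate_shift(frame, observed)   # realign by moving (-dy, -dx)
```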

  8. Automated long-term monitoring of parallel microfluidic operations applying a machine vision-assisted positioning method.

    Science.gov (United States)

    Yip, Hon Ming; Li, John C S; Xie, Kai; Cui, Xin; Prasad, Agrim; Gao, Qiannan; Leung, Chi Chiu; Lam, Raymond H W

    2014-01-01

    As microfluidics has been applied extensively in many cell and biochemical applications, monitoring the related processes is an important requirement. In this work, we design and fabricate a high-throughput microfluidic device which contains 32 microchambers to perform automated parallel microfluidic operations and monitoring on an automated stage of a microscope. Images are captured at multiple spots on the device during the operations for monitoring samples in microchambers in parallel; yet the device positions may vary at different time points throughout operations as the device moves back and forth on a motorized microscopic stage. Here, we report an image-based positioning strategy to realign the chamber position before every recording of microscopic image. We fabricate alignment marks at defined locations next to the chambers in the microfluidic device as reference positions. We also develop image processing algorithms to recognize the chamber positions in real-time, followed by realigning the chambers to their preset positions in the captured images. We perform experiments to validate and characterize the device functionality and the automated realignment operation. Together, this microfluidic realignment strategy can be a platform technology to achieve precise positioning of multiple chambers for general microfluidic applications requiring long-term parallel monitoring of cell and biochemical activities.

  9. Automated Long-Term Monitoring of Parallel Microfluidic Operations Applying a Machine Vision-Assisted Positioning Method

    Directory of Open Access Journals (Sweden)

    Hon Ming Yip

    2014-01-01

    Full Text Available As microfluidics has been applied extensively in many cell and biochemical applications, monitoring the related processes is an important requirement. In this work, we design and fabricate a high-throughput microfluidic device which contains 32 microchambers to perform automated parallel microfluidic operations and monitoring on an automated stage of a microscope. Images are captured at multiple spots on the device during the operations for monitoring samples in microchambers in parallel; yet the device positions may vary at different time points throughout operations as the device moves back and forth on a motorized microscopic stage. Here, we report an image-based positioning strategy to realign the chamber position before every recording of microscopic image. We fabricate alignment marks at defined locations next to the chambers in the microfluidic device as reference positions. We also develop image processing algorithms to recognize the chamber positions in real-time, followed by realigning the chambers to their preset positions in the captured images. We perform experiments to validate and characterize the device functionality and the automated realignment operation. Together, this microfluidic realignment strategy can be a platform technology to achieve precise positioning of multiple chambers for general microfluidic applications requiring long-term parallel monitoring of cell and biochemical activities.

  10. Vectorization, parallelization and porting of nuclear codes (vectorization and parallelization). Progress report fiscal 1998

    International Nuclear Information System (INIS)

    Ishizuki, Shigeru; Kawai, Wataru; Nemoto, Toshiyuki; Ogasawara, Shinobu; Kume, Etsuo; Adachi, Masaaki; Kawasaki, Nobuo; Yatake, Yo-ichi

    2000-03-01

    Several computer codes in the nuclear field have been vectorized, parallelized and transported on the FUJITSU VPP500 system, the AP3000 system and the Paragon system at Center for Promotion of Computational Science and Engineering in Japan Atomic Energy Research Institute. We dealt with 12 codes in fiscal 1998. These results are reported in 3 parts, i.e., the vectorization and parallelization on vector processors part, the parallelization on scalar processors part and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this vectorization and parallelization on vector processors part, the vectorization of General Tokamak Circuit Simulation Program code GTCSP, the vectorization and parallelization of Molecular Dynamics NTV (n-particle, Temperature and Velocity) Simulation code MSP2, Eddy Current Analysis code EDDYCAL, Thermal Analysis Code for Test of Passive Cooling System by HENDEL T2 code THANPACST2 and MHD Equilibrium code SELENEJ on the VPP500 are described. In the parallelization on scalar processors part, the parallelization of Monte Carlo N-Particle Transport code MCNP4B2, Plasma Hydrodynamics code using Cubic Interpolated Propagation Method PHCIP and Vectorized Monte Carlo code (continuous energy model / multi-group model) MVP/GMVP on the Paragon are described. In the porting part, the porting of Monte Carlo N-Particle Transport code MCNP4B2 and Reactor Safety Analysis code RELAP5 on the AP3000 are described. (author)

  11. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest...... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts....
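
    List ranking, the fundamental primitive studied here, is classically solved by pointer jumping (Wyllie's algorithm); in the PEM model each of the O(log n) rounds becomes a bulk, cache-friendly pass shared by the P processors. A minimal sequential sketch of the rounds:

```python
import math

# Pointer-jumping (Wyllie) sketch of list ranking. Each round, every
# element adds its successor's rank and jumps its pointer twice as far,
# so ceil(log2 n) rounds suffice.

def list_rank(succ):
    """succ[i] is i's successor, with the tail pointing to itself.
    Returns each element's distance (number of links) to the tail."""
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    succ = list(succ)
    for _ in range(max(1, math.ceil(math.log2(n)))):
        # In a parallel round these two updates happen for all i at once.
        rank = [rank[i] + rank[succ[i]] for i in range(n)]
        succ = [succ[succ[i]] for i in range(n)]
    return rank
```

For the five-element chain 0→1→2→3→4 (tail 4), `list_rank([1, 2, 3, 4, 4])` returns `[4, 3, 2, 1, 0]`.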

  12. Parallel efficient rate control methods for JPEG 2000

    Science.gov (United States)

    Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko

    2017-09-01

    Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image split in code blocks, and subsequently, optimally truncate the set of generated bit streams according to the maximum target bit rate constraint. The literature proposes various strategies on how to estimate ahead of time where a block will get truncated in order to stop the execution prematurely and save time. However, none of them have been defined bearing in mind a parallel implementation. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codecs implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed in GPUs. In order to do that, the design of our GPU-based codec is extended, allowing stopping the process at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to up to 40% of speedup with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out, and used to select the best candidate to be deployed in a GPU encoder, which gave an extra 40% of speedup in those situations where it was really employed.
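
    The truncation logic behind PCRD-Opt can be sketched with toy numbers. Each code block offers candidate truncation points; the encoder keeps, per block, the prefix whose rate-distortion slopes stay above a global threshold, then scans thresholds to meet the byte budget. Real encoders first reduce each block's points to their convex hull; the figures below are invented:

```python
# Toy sketch of PCRD-Opt truncation-point selection.

def truncate(blocks, threshold):
    """blocks: per code block, a list of (extra_bytes, distortion_reduction)
    steps, assumed sorted by decreasing slope (as convex-hull points are)."""
    chosen = []
    for steps in blocks:
        take = 0
        for nbytes, gain in steps:
            if gain / nbytes >= threshold:
                take += 1
            else:
                break
        chosen.append(take)
    return chosen

def total_rate(blocks, chosen):
    return sum(sum(b for b, _ in steps[:k]) for steps, k in zip(blocks, chosen))

blocks = [[(100, 900), (100, 400), (100, 90)],   # slopes 9.0, 4.0, 0.9
          [(80, 640), (80, 200), (80, 30)]]      # slopes 8.0, 2.5, 0.375

# Scan thresholds from steep to shallow; keep the last one within budget.
budget, best = 360, None
for threshold in (9.0, 5.0, 2.5, 1.0):
    chosen = truncate(blocks, threshold)
    if total_rate(blocks, chosen) <= budget:
        best = chosen
```

The slope-estimation strategies surveyed in the paper predict where this scan will stop for each block, so compression can halt early instead of coding every pass.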

  13. A review of snapshot multidimensional optical imaging: Measuring photon tags in parallel

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Liang, E-mail: gaol@illinois.edu [Department of Electrical and Computer Engineering, University of Illinois at Urbana–Champaign, 306 N. Wright St., Urbana, IL 61801 (United States); Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana–Champaign, 405 North Mathews Avenue, Urbana, IL 61801 (United States); Wang, Lihong V., E-mail: lhwang@wustl.edu [Optical imaging laboratory, Department of Biomedical Engineering, Washington University in St. Louis, One Brookings Dr., MO, 63130 (United States)

    2016-02-29

    Multidimensional optical imaging has seen remarkable growth in the past decade. Rather than measuring only the two-dimensional spatial distribution of light, as in conventional photography, multidimensional optical imaging captures light in up to nine dimensions, providing unprecedented information about incident photons’ spatial coordinates, emittance angles, wavelength, time, and polarization. Multidimensional optical imaging can be accomplished either by scanning or parallel acquisition. Compared with scanning-based imagers, parallel acquisition, also dubbed snapshot imaging, has a prominent advantage in maximizing optical throughput, particularly when measuring a datacube of high dimensions. Here, we first categorize snapshot multidimensional imagers based on their acquisition and image reconstruction strategies, then highlight the snapshot advantage in the context of optical throughput, and finally we discuss their state-of-the-art implementations and applications.

  14. Sexual imprinting: what strategies should we expect to see in nature?

    Science.gov (United States)

    Chaffee, Dalton W; Griffin, Hayes; Gilman, R Tucker

    2013-12-01

    Sexual imprinting occurs when juveniles learn mate preferences by observing the phenotypes of other members of their populations, and it is ubiquitous in nature. Imprinting strategies, that is, which individuals and phenotypes are observed and how strong preferences become, vary among species. Imprinting can affect trait evolution and the probability of speciation, and different imprinting strategies are expected to have different effects. However, little is known about how and why different imprinting strategies evolve, or which strategies we should expect to see in nature. We used a mathematical model to study how the evolution of sexual imprinting depends on (1) imprinting costs and (2) the sex-specific fitness effects of the phenotype on which individuals imprint. We found that even small fixed costs prevent the evolution of sexual imprinting, but small relative costs do not. When imprinting does evolve, we identified the conditions under which females should evolve to imprint on their fathers, their mothers, or on other members of their populations. Our results provide testable hypotheses for empirical work and help to explain the conditions under which sexual imprinting might evolve to promote speciation. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  15. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Parallelizing Gene Expression Programming Algorithm in Enabling Large-Scale Classification

    Directory of Open Access Journals (Sweden)

    Lixiong Xu

    2017-01-01

    Full Text Available As one of the most effective function mining algorithms, the Gene Expression Programming (GEP) algorithm has been widely used in classification, pattern recognition, prediction, and other research fields. Through self-evolution, GEP is able to mine an optimal function for dealing with complicated tasks. However, in big data research, GEP suffers from low efficiency because of its lengthy mining process. To improve the efficiency of GEP in big data research, especially for large-scale classification tasks, this paper presents a parallelized GEP algorithm using the MapReduce computing model. The experimental results show that the presented algorithm is scalable and efficient for processing large-scale classification tasks.
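    The map/reduce split described above can be sketched on a single machine: fitness evaluation of the population is the map phase (embarrassingly parallel across shards), and merging partial results to find the fittest individual is the reduce phase. The sketch below is a conceptual illustration only, assuming a toy symbolic-regression fitness in place of full GEP chromosome decoding, with `multiprocessing` standing in for the Hadoop cluster the paper targets.

    ```python
    from multiprocessing import Pool
    import random

    DATA = [(x, 2 * x + 1) for x in range(20)]          # samples of the target f(x) = 2x + 1

    def fitness(chrom):
        a, b = chrom                                    # toy "gene expression": f(x) = a*x + b
        err = sum((a * x + b - y) ** 2 for x, y in DATA)
        return err, chrom

    def map_phase(shard):
        # Each mapper evaluates its shard of the population independently.
        return [fitness(c) for c in shard]

    def reduce_phase(shards):
        # The reducer merges partial results and keeps the fittest individual.
        return min(r for shard in shards for r in shard)

    if __name__ == "__main__":
        random.seed(0)
        population = [(random.uniform(0, 4), random.uniform(0, 4)) for _ in range(1000)]
        shards = [population[i::4] for i in range(4)]   # distribute across 4 workers
        with Pool(4) as pool:
            best_err, best = reduce_phase(pool.map(map_phase, shards))
        assert best_err == min(fitness(c)[0] for c in population)
        print("best error:", round(best_err, 3))
    ```

    In the paper's setting, the same split would run across a Hadoop cluster, with the selection/mutation loop iterating over repeated MapReduce rounds; this sketch shows only one fitness-evaluation round.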

  17. Parallel Mitogenome Sequencing Alleviates Random Rooting Effect in Phylogeography.

    Science.gov (United States)

    Hirase, Shotaro; Takeshima, Hirohiko; Nishida, Mutsumi; Iwasaki, Wataru

    2016-04-28

    Reliably rooted phylogenetic trees play irreplaceable roles in clarifying the diversification patterns of species and populations. However, such trees are often unavailable in phylogeographic studies, particularly when the focus is on rapidly expanded populations that exhibit star-like trees. A fundamental bottleneck is known as the random rooting effect, where a distant outgroup tends to root an unrooted tree "randomly." We investigated whether parallel mitochondrial genome (mitogenome) sequencing alleviates this effect in phylogeography using a case study on the Sea of Japan lineage of the intertidal goby Chaenogobius annularis. Eighty-three C. annularis individuals were collected and their mitogenomes were determined by high-throughput and low-cost parallel sequencing. Phylogenetic analysis of these mitogenome sequences was conducted to root the Sea of Japan lineage, which has a star-like phylogeny and had not been reliably rooted. The topologies of the bootstrap trees were investigated to determine whether the use of mitogenomes alleviated the random rooting effect. The mitogenome data successfully rooted the Sea of Japan lineage by alleviating the effect, which had hindered phylogenetic analysis based on specific gene sequences. The reliable rooting of the lineage led to the discovery of a novel, northern lineage that expanded during an interglacial period with high bootstrap support. Furthermore, the finding of this lineage suggested the existence of additional glacial refugia and provided a new recent calibration point that revised the divergence time estimation between the Sea of Japan and Pacific Ocean lineages. This study illustrates the effectiveness of parallel mitogenome sequencing for solving the random rooting problem in phylogeographic studies. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  18. Feynman’s clock, a new variational principle, and parallel-in-time quantum dynamics

    Science.gov (United States)

    McClean, Jarrod R.; Parkhill, John A.; Aspuru-Guzik, Alán

    2013-01-01

    We introduce a discrete-time variational principle inspired by the quantum clock originally proposed by Feynman and use it to write down quantum evolution as a ground-state eigenvalue problem. The construction allows one to apply ground-state quantum many-body theory to quantum dynamics, extending the reach of many highly developed tools from this fertile research area. Moreover, this formalism naturally leads to an algorithm to parallelize quantum simulation over time. We draw an explicit connection between previously known time-dependent variational principles and the time-embedded variational principle presented here. Sample calculations are presented, applying the idea to a hydrogen molecule and the spin degrees of freedom of a model inorganic compound, demonstrating the parallel speedup of our method as well as its flexibility in applying ground-state methodologies. Finally, we take advantage of the unique perspective of this variational principle to examine the error of basis approximations in quantum dynamics. PMID:24062428
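    The idea of recasting time evolution as a ground-state problem can be illustrated with the standard Feynman-Kitaev clock construction that inspired this work. The sketch below (not the paper's actual method) builds the clock Hamiltonian for a single qubit evolving under a fixed unitary U over T steps, adds a penalty pinning the t = 0 slice to the initial state, and verifies numerically that the ground state is the "history state" whose clock-time-t component is proportional to U^t |psi0>.

    ```python
    import numpy as np

    # Single-qubit system evolving under a fixed unitary U for T steps.
    d, T = 2, 4
    theta = 0.3
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])   # real rotation, unitary
    psi0 = np.array([1.0, 0.0])

    def clock(t, s):
        """|t><s| on the (T+1)-dimensional clock register."""
        m = np.zeros((T + 1, T + 1))
        m[t, s] = 1.0
        return m

    # Feynman-Kitaev clock Hamiltonian on system (x) clock.
    I = np.eye(d)
    H = np.zeros((d * (T + 1), d * (T + 1)))
    for t in range(T):
        H += 0.5 * (np.kron(I, clock(t, t)) + np.kron(I, clock(t + 1, t + 1))
                    - np.kron(U, clock(t + 1, t)) - np.kron(U.T, clock(t, t + 1)))
    # Penalty pinning the t=0 slice to psi0, making the history state unique.
    H += np.kron(I - np.outer(psi0, psi0), clock(0, 0))

    w, v = np.linalg.eigh(H)
    ground = v[:, 0].reshape(d, T + 1)        # system amplitudes at each clock time
    # The ground state encodes the whole trajectory: column t is proportional to U^t psi0.
    for t in range(T + 1):
        col = ground[:, t] / np.linalg.norm(ground[:, t])
        target = np.linalg.matrix_power(U, t) @ psi0
        assert abs(abs(col @ target) - 1.0) < 1e-8
    print(abs(w[0]) < 1e-8)                   # history state has energy ~0; prints True
    ```

    Because the entire trajectory sits in a single eigenvector, ground-state machinery (and, as the abstract notes, parallelism over the time index) can be brought to bear on dynamics.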

  19. Implementation of a Parallel Protein Structure Alignment Service on Cloud

    Directory of Open Access Journals (Sweden)

    Che-Lun Hung

    2013-01-01

    Full Text Available Protein structure alignment has become an important strategy by which to identify evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distribution framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model. The refinement algorithm improves the accuracy of the initial alignment results. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service is proportional to the number of processors used in our cloud platform.
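    At the core of most structure alignment and refinement algorithms is optimal rigid-body superposition of two coordinate sets, typically solved by the Kabsch algorithm. The function below is a minimal, hypothetical sketch of that core step (it is not the service's actual algorithm): it finds the translation and proper rotation minimizing the RMSD between two paired point sets.

    ```python
    import numpy as np

    def kabsch_rmsd(P, Q):
        """Minimal RMSD between paired point sets P, Q (each N x 3)
        after optimally translating and rotating P onto Q (Kabsch)."""
        P = P - P.mean(axis=0)                          # remove translation
        Q = Q - Q.mean(axis=0)
        U, _, Vt = np.linalg.svd(P.T @ Q)               # covariance SVD
        d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T         # optimal proper rotation
        return float(np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1))))
    ```

    In a MapReduce setting like the one described above, each mapper would run such superpositions for its share of structure pairs, and reducers would collect and rank the alignment scores.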

  20. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Interactions between parallel channels are examined. For experimental research on nonstationary flow regimes in three parallel vertical channels, results of the phenomenon analysis and the mechanisms of parallel-channel interaction are presented for adiabatic conditions with single-phase fluid and two-phase mixture flow. (author)